Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, November 7-11, 2021. Association for Computational Linguistics.
DOI: 10.18653/v1/2021.emnlp-main.98 | arXiv: 2104.08758 | https://www.aclanthology.org/2021.emnlp-main.98.pdf

Jesse Dodge (jessed@allenai.org), Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, Matt Gardner
Paul G. Allen School of Computer Science & Engineering, University of Washington; Hugging Face; Allen Institute for Artificial Intelligence

Abstract

Large language models have led to remarkable progress on many NLP tasks, and researchers are turning to ever-larger text corpora to train them. Some of the largest corpora available are made by scraping significant portions of the internet, and are frequently introduced with only minimal documentation. In this work we provide some of the first documentation for the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set of filters to a single snapshot of Common Crawl. We begin by investigating where the data came from, and find a significant amount of text from unexpected sources like patents and US military websites. Then we explore the content of the text itself, and find machine-generated text (e.g., from machine translation systems) and evaluation examples from other benchmark NLP datasets. To understand the impact of the filters applied to create this dataset, we evaluate the text that was removed, and show that blocklist filtering disproportionately removes text from and about minority individuals. Finally, we conclude with some recommendations for how to create and document web-scale datasets from a scrape of the internet.
Introduction
Models pretrained on unlabeled text corpora are the backbone of many modern NLP systems (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020, inter alia). This paradigm incentivizes the use of ever larger corpora (Kaplan et al., 2020; Henighan et al., 2020), with the biggest models now training on a substantial fraction of the publicly available internet (Raffel et al., 2020; Brown et al., 2020). Of course, as with all machine learning systems, the data such models are trained on has a large impact on their behavior. For structured, task-specific NLP datasets, best practices have emerged around documenting the collection process, composition, intended uses, and other characteristics (Bender and Friedman, 2018; Gebru et al., 2018; Hutchinson et al., 2021). However, given the challenges of applying these practices to massive collections of unlabeled text scraped from the web, thorough documentation is typically not done. This leaves consumers of pretrained language models in the dark about the influences of pretraining data on their systems, which can inject subtle biases in downstream uses (Li et al., 2020; Gehman et al., 2020; Groenwold et al., 2020).

Figure 1: We advocate for three levels of documentation when creating web-crawled corpora. On the right, we include some examples of the types of documentation that we provide for the C4.EN dataset.
In this work we provide some of the first documentation of a web-scale dataset: the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020). C4 is one of the largest language datasets available, with more than 156 billion tokens collected from more than 365 million domains across the internet (Table 1). C4 has been used to train models such as T5 and the Switch Transformer (Fedus et al., 2021), two of the largest pretrained English language models. While Raffel et al. (2020) provided scripts to recreate C4, simply running the available scripts costs thousands of dollars. Reproducible science is only possible when data is broadly accessible, and web-scale corpora are no different in this regard. With that in mind, we provide a downloadable copy of this dataset (https://github.com/allenai/c4documentation).

Documenting massive, unlabeled datasets is a challenging enterprise. Some suggestions from previous work are naturally appropriate, such as reporting the number of examples and a link to a downloadable version of the dataset (see the NLP Reproducibility Checklist, https://2020.emnlp.org/blog/2020-05-20reproducibility). However, many recommendations, like reporting information about the authors of the text, are not easily applicable, since often the required information is not available in web-crawled text.
We advocate for documentation of web-scale corpora to include three views of the data, as illustrated in Figure 1. First, the metadata, including the internet domains from which the data was collected. At the highest level, internet top-level domains like .edu likely contain significantly different text than .mil, the top-level domain reserved for US government military websites; text from both exists in C4.
Following the metadata, we examine the text itself. We find significant amounts of machine-generated text (e.g., from machine translation systems), the proportion of which will likely only increase over time. We also find some evidence of contamination (the presence of test examples from other datasets that exist in C4), and argue that new datasets should properly account for the existence of such phenomena.
Finally, as web-crawled datasets typically filter out significant portions of text, we argue for more thorough documentation of what is not in the data. Some filters are relatively straightforward, such as removing Lorem ipsum placeholder text. However, we find that another filter, which removes documents that contain a token from a banned word list, disproportionately removes documents in dialects of English associated with minority identities (e.g., text in African American English, text discussing LGBTQ+ identities).
In addition to our set of recommendations and analyses, we publicly host three versions of the data with different levels of filtering, along with an indexed version for easy searching (https://c4-search.apps.allenai.org/; this index will only be hosted until 2021-12-31), and a repository for public discussion of findings (https://github.com/allenai/c4documentation/discussions).

Table 1: Statistics for the three corpora we host. One "document" is the text scraped from a single URL. Tokens are counted using the SpaCy English tokenizer. Size refers to the compressed JSON files.
The English Colossal Clean Crawled Corpus (C4)

C4 is created by taking the April 2019 snapshot of Common Crawl (https://commoncrawl.org/, where monthly "snapshots" are created by crawling and scraping the web, each typically containing terabytes of text) and applying a number of filters with the intention of removing text that is not natural English. This includes filtering out lines which don't end in a terminal punctuation mark or have fewer than three words, discarding documents with fewer than five sentences or that contain Lorem ipsum placeholder text, and removing documents which contain any word on the "List of Dirty, Naughty, Obscene, or Otherwise Bad Words" (https://git.io/vSyEu). Additionally, langdetect (https://pypi.org/project/langdetect/) is used to remove documents which weren't classified as English with probability at least 0.99, so C4 is primarily comprised of English text. We call this "cleaned" version of C4 (created by applying all filters) C4.EN. For brevity we refer readers to Raffel et al. (2020) for a full list of the filters. In addition to C4.EN, we host the "uncleaned" version (C4.EN.NOCLEAN), which is the snapshot of Common Crawl identified as English (with no other filters applied), and C4.EN.NOBLOCKLIST, which is the same as C4.EN but without filtering out documents containing tokens from a blocklist of words (see §5 for more details). Table 1 contains some statistics for the three corpora.
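The cleaning pipeline above can be approximated in a few lines of Python. The sketch below is our own illustration of these heuristics, not the original implementation: the blocklist contents, the sentence-counting rule, and the exact order of the filters are assumptions.

```python
# Approximate sketch of the C4 cleaning heuristics described above.
# Illustrative only; not the original Raffel et al. (2020) code.
from typing import Optional
import re
from langdetect import detect_langs  # pip install langdetect

BLOCKLIST = {"..."}  # the "List of Dirty, Naughty, Obscene, or Otherwise Bad Words" (not reproduced here)
TERMINAL_PUNCT = (".", "!", "?", '"')

def clean_document(text: str) -> Optional[str]:
    """Return a cleaned document, or None if the document should be discarded."""
    # Line filter: keep lines ending in terminal punctuation with at least three words.
    lines = [ln.strip() for ln in text.splitlines()
             if ln.strip().endswith(TERMINAL_PUNCT) and len(ln.split()) >= 3]
    doc = "\n".join(lines)

    if "lorem ipsum" in doc.lower():
        return None
    # Crude sentence count; the original uses its own heuristic.
    if len(re.findall(r"[.!?]", doc)) < 5:
        return None
    # Blocklist filter (skipped when building C4.EN.NOBLOCKLIST).
    words = set(re.findall(r"[\w*]+", doc.lower()))
    if words & BLOCKLIST:
        return None
    # Language filter: keep documents classified as English with probability >= 0.99.
    langs = detect_langs(doc) if doc.strip() else []
    if not (langs and langs[0].lang == "en" and langs[0].prob >= 0.99):
        return None
    return doc
```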
Corpus-level statistics
Understanding the provenance of the texts that comprise a dataset is fundamental to understanding the dataset itself, so we begin our analysis of the metadata of C4.EN by characterizing the prevalence of different internet domains as sources of text, the date the websites were first indexed by the Internet Archive, and the geolocation of IP addresses of hosted websites.

Internet domains

Figure 2 (left) shows the 25 most represented top-level domains (TLDs; https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains), by number of word tokens in C4.EN (measured using the SpaCy English tokenizer, https://spacy.io/api/tokenizer); we use the TLDExtract package (https://pypi.org/project/tldextract/) to parse the URLs. Unsurprisingly, popular top-level domains such as .com, .org, and .net are well represented. We note that some top-level domains reserved for non-US, English-speaking countries are less represented, and even some domains for countries with a primary language other than English are represented in the top 25 (such as ru). A significant portion of the text comes from .gov websites, reserved for the US government. Another potentially interesting top-level domain is .mil, reserved for the US government military. While not in the top 25 TLDs, C4.EN contains 33,874,654 tokens from .mil top-level domain sites, coming from 58,394 unique URLs. There are an additional 1,224,576 tokens (from 2,873 unique URLs) from .mod.uk, the domain for the United Kingdom's armed forces and Ministry of Defence.

Figure 2: Number of tokens from the 25 most represented top-level domains (left: com, org, co.uk, net, com.au, edu, ca, info, org.uk, in, gov, eu, de, tk, co, co.za, us, ie, co.nz, ac.uk, ru, nl, io, me, it) and websites (right: patents.google.com, en.wikipedia.org, en.m.wikipedia.org, www.nytimes.com, www.latimes.com, www.theguardian.com, journals.plos.org, www.forbes.com, www.huffpost.com, patents.com, www.scribd.com, www.washingtonpost.com, www.fool.com, ipfs.io, www.frontiersin.org, www.businessinsider.com, www.chicagotribune.com, www.booking.com, www.theatlantic.com, link.springer.com, www.aljazeera.com, www.kickstarter.com, caselaw.findlaw.com, www.ncbi.nlm.nih.gov, www.npr.org) in C4.EN.

Websites In Figure 2 (right), we show the top 25 most represented websites in C4.EN, ranked by total number of tokens. Surprisingly, the cleaned corpus contains substantial amounts of patent text documents: the single most represented website in the corpus is patents.google.com, and patents.com is in the top 10. We discuss the implications of this in §4.1.
Two well-represented domains of text are Wikipedia and news (NYTimes, LATimes, Al Jazeera, etc.). These have been extensively used in the training of large language models such as BERT, RoBERTa, and GPT-3 (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020). Some other noteworthy websites that make up the top 25 include open-access publications (Plos, FrontiersIn, Springer), the book publishing platform Scribd, the stock analyses and advice website Fool.com, and the distributed file system ipfs.io.
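As a rough illustration of how counts like those behind Figure 2 can be reproduced, the sketch below tokenizes each document with the spaCy English tokenizer and groups token counts by top-level domain and website using tldextract. It assumes each record in the hosted copy is a JSON object with "url" and "text" fields; the shard filename is illustrative.

```python
# Sketch: per-TLD and per-website token counts, in the spirit of Figure 2.
import gzip
import json
from collections import Counter

import spacy        # pip install spacy
import tldextract   # pip install tldextract

nlp = spacy.blank("en")  # tokenizer only; no tagger or parser needed
tld_tokens, site_tokens = Counter(), Counter()

with gzip.open("c4-train.00000-of-01024.json.gz", "rt") as f:  # illustrative shard name
    for line in f:
        record = json.loads(line)
        n_tokens = len(nlp.tokenizer(record["text"]))
        parts = tldextract.extract(record["url"])
        site = ".".join(p for p in (parts.subdomain, parts.domain, parts.suffix) if p)
        tld_tokens[parts.suffix] += n_tokens   # e.g. "com", "co.uk", "mil"
        site_tokens[site] += n_tokens          # e.g. "patents.google.com"

print(tld_tokens.most_common(25))
print(site_tokens.most_common(25))
```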
Utterance Date
Language changes over even short timescales, and the truth or relevance of many statements depends on when they were made. While the actual utterance date is often impossible to obtain for web documents, we use the earliest date a URL was indexed by the Internet Archive as a proxy. We note that using the Internet Archive is not perfect, as it will sometimes index webpages many months after their creation, and it only indexed approximately 65% of URLs in C4.EN. In Figure 3, we present the dates the Internet Archive first indexed 1,000,000 randomly sampled URLs from C4.EN. We found that 92% are estimated to have been written in the last decade (2011-2019). However, the distribution is long-tailed: there is a non-trivial amount of data that was written between 10 and 20 years before data collection.

Figure 3: The date URLs were first indexed by the Internet Archive before the Common Crawl snapshot was collected.
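A lookup like the one described above can be done against the Wayback Machine CDX API; the query below returns the earliest capture timestamp for a URL. The endpoint and parameters are a reasonable public interface for this, not necessarily the exact pipeline the authors used.

```python
# Sketch: earliest Internet Archive capture date for a URL (proxy for utterance date).
import requests

def earliest_capture(url: str):
    resp = requests.get(
        "http://web.archive.org/cdx/search/cdx",
        params={"url": url, "output": "json", "fl": "timestamp", "limit": 1},
        timeout=30,
    )
    rows = resp.json()
    # The first row is a header; captures are timestamps such as "20120317054935".
    return rows[1][0] if len(rows) > 1 else None

print(earliest_capture("https://en.wikipedia.org/wiki/Common_Crawl"))
```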
Geolocation
We aim to assess which countries are represented in C4.EN, which we estimate using the location where a webpage is hosted as a proxy for the location of its creators. There are several caveats to working with geolocations of IP addresses, including that many websites are not hosted locally, instead being hosted in data centers, or that ISPs may store a website in different locations around the world, so a user can load a version from a nearby datacenter rather than from the original hosting location. We use an IP-country database (https://lite.ip2location.com/database/ip-country) and present country-level URL frequencies from 175,000 randomly sampled URLs.
As shown in Figure 4 in the appendix, 51.3% of pages are hosted in the United States. The countries with the estimated 2nd, 3rd, 4th, and 5th largest English-speaking populations (India, Pakistan, Nigeria, and The Philippines; see https://en.wikipedia.org/wiki/List_of_countries_by_English-speaking_population) have only 3.4%, 0.06%, 0.03%, and 0.1% of the URLs of the United States, respectively, despite having many tens of millions of English speakers.
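A sketch of this estimate is below: resolve each URL's host to an IP address and look the IP up in an IP2Location LITE country database. The database path, the sampled-URL file, and the exact method name on the vendor's Python package are assumptions.

```python
# Sketch: country-level URL frequencies from hostname IPs (with the caveats noted above).
import socket
from collections import Counter
from urllib.parse import urlparse

import IP2Location  # pip install IP2Location; requires the LITE DB1 .BIN file

db = IP2Location.IP2Location("IP2LOCATION-LITE-DB1.BIN")   # illustrative path
urls = open("c4_url_sample.txt").read().split()            # hypothetical file of sampled URLs
country_counts = Counter()

for url in urls:
    host = urlparse(url).netloc
    try:
        ip = socket.gethostbyname(host)
        country_counts[db.get_country_short(ip)] += 1
    except (socket.gaierror, OSError, ValueError):
        country_counts["unresolved"] += 1

print(country_counts.most_common(10))
```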
What is in the text?
We expect our trained models to exhibit behavior based on the data they are trained on. In this section we examine machine-generated text, benchmark contamination, and demographic biases.
Machine-generated text
As the use of models which can generate natural language text proliferates, web-crawled data will increasingly contain data that was not written by humans. Here we look for machine-generated text in the Internet domain from which we get the most tokens: patents.google.com.
Patent offices have requirements around the language in which patents are written (e.g., the Japanese patent office requires patents be in Japanese). patents.google.com uses machine translation to translate patents from patent offices around the world into English ("Patents with only non-English text have been machine-translated to English and indexed", from https://support.google.com/faqs/answer/7049585). Table 3 in Appendix A.3 includes the number of patents in C4.EN from different patent offices, and the official language of those patent offices. While the majority of the patents in this corpus are from the US patent office, more than ten percent are from patent offices which require patents be submitted in a language other than English (many patent offices require a patent be filed in a particular language, but also allow translations into other languages to be submitted, so this is an upper bound on the number of translated documents). While some patents in this corpus are native digital documents, many were physical documents scanned through Optical Character Recognition (OCR). Indeed, some older documents from non-English patent offices are first run through OCR and then machine translation systems (see Appendix A.3). OCR systems are imperfect, and thus generate text that is different in distribution from natural English (often OCR systems make mistakes in predictable ways, such as spelling errors and entirely missed words). Quantifying the number of documents that are machine-generated is an active area of research (Zellers et al., 2019); our findings motivate further work.
Benchmark data contamination
In this section, we study benchmark data contamination (Brown et al., 2020), i.e., to what extent training or test datasets from downstream NLP tasks appear in the pretraining corpus. There are generally two ways datasets can end up in a snapshot from Common Crawl: either a given dataset is built from text on the web, such as the IMDB dataset (Maas et al., 2011) and the CNN/DailyMail summarization dataset (Hermann et al., 2015; Nallapati et al., 2016), or it is uploaded after creation (e.g., to a GitHub repository, for easy access). In this section, we explore both input and input-and-label contamination of popular datasets.
Unlike Brown et al. (2020), who measure contamination using n-gram overlap (n between 8 and 13) between pretraining data and benchmark examples, we measure exact matches, normalized for capitalization and punctuation (Brown et al. used a very conservative measurement because of the bug in their pretraining data preprocessing).
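The matching criterion above (exact match after normalizing capitalization and punctuation) can be expressed as a small helper. The sketch below is a toy version that scans documents directly; a real run over C4 would need an index rather than substring scans.

```python
# Sketch: normalized exact-match contamination check.
import re
import string

_PUNCT_TABLE = str.maketrans("", "", string.punctuation)

def normalize(s: str) -> str:
    s = s.lower().translate(_PUNCT_TABLE)
    return re.sub(r"\s+", " ", s).strip()

def contamination_rate(benchmark_texts, corpus_documents):
    """Fraction of benchmark strings that appear verbatim (after normalization) in the corpus."""
    corpus = "\n".join(normalize(d) for d in corpus_documents)  # toy; infeasible at C4 scale
    return sum(normalize(t) in corpus for t in benchmark_texts) / len(benchmark_texts)

print(contamination_rate(["Hello, world!"], ["He said: hello world. Goodbye."]))  # 1.0
```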
Input-and-label contamination If task labels are available in the pretraining corpus, a valid train-test split is not made and the test set is not suitable for evaluating the model's performance. For tasks similar to language modeling (e.g., abstractive summarization), the task labels are target tokens. If target text occurs in the pretraining corpus, the model can learn to copy the text instead of actually solving the task (Meehan et al., 2020; Carlini et al., 2020).
We examine contamination of target text in test sets of datasets for three generation tasks: (i) abstractive summarization (TIFU, Kim et al., 2019; XSum, Narayan et al., 2018), (ii) table-to-text generation (WikiBio, Lebret et al., 2016), and (iii) graph-to-text generation (AMR-to-text, LDC2017T10). In the upper part of Table 2, we show that 1.87-24.88% of target texts appear in C4.EN. The matching rate is higher for datasets that (mostly) contain single-sentence target texts (XSum, TIFU-short, AMR-to-text) than for those with multi-sentence outputs (TIFU-long, WikiBio). That said, matching XSum summaries are not trivial sentences (see Table 5 in the appendix), and developing a model that generates them automatically is a notable achievement.
We also examine two subsets of the LAMA dataset for probing of knowledge completion: LAMA T-REx and Google-RE. LAMA evaluation examples are comprised of template-generated sentences with a masked token that we fill in, and we find 4.6% and 5.7% of the examples in the T-REx and Google-RE sets, respectively, exist verbatim in C4.EN. While this is a tiny fraction of the C4.EN dataset, a language model pretrained on C4.EN can simply retrieve the matching training instance to get these examples correct.
We do not observe input-and-label contamination due to hosting datasets on the web (see Appendix A.5).
Input contamination Contamination of evaluation example inputs that does not include labels can also lead to downstream problems. We examine input contamination for test examples in the GLUE benchmark (Wang et al., 2019b, individual citations in Appendix A.4), a common test bed for language models. If a dataset has multiple components (e.g., sentence and question in QNLI), we report them separately. In Table 2, we show that the percentage of inputs found in C4.EN varies widely, from less than 2% to over 50%. Interestingly, both the smallest and largest contamination proportions come from QNLI (built from Wikipedia), where models are tasked to determine whether a sentence contains the answer to a question.

Table 2: The number of exact matches from test sets of various benchmarks in C4.EN. For datasets where the input has multiple components (e.g., hypothesis and premise in MNLI), we report contamination separately for each component. Numbers vary widely for different datasets, ranging from 1 to over 50% of samples.
Label contamination (% matching): LAMA T-REx 4.6; LAMA Google-RE 5.7; XSum 15.49; TIFU-short 24.88; TIFU-long 1.87; WikiBio 3.72; AMR-to-text 10.43.
Input contamination (% matching): BoolQ 2.4; CoLA 14.4; MNLI (hypothesis) 14.2; MNLI (premise) 15.2; MRPC (sentence 1) 2.7; MRPC (sentence 2) 2.7; QNLI (sentence) 53.6; QNLI (question) 1.8; RTE (sentence 1) 6.0; RTE (sentence 2) 10.8; SST-2 11.0; STS-B (sentence 1) 18.3; STS-B (sentence 2) 18.6; WNLI (sentence 1) 4.8; WNLI (sentence 2) 2.1.
Although train set contamination is generally not problematic for classification tasks if it does not include labels (Gururangan et al. (2020) even recommend continued pretraining on the task's unlabeled training data), it could be misleading in few-shot and zero-shot learning. The LAMA dataset is often used to evaluate zero-shot performance and GLUE datasets are often used for few-shot performance (Gao et al., 2021), and perhaps this practice should be considered carefully.
Demographic Biases in C4.EN
Much recent work has found various kinds of bias in fine-tuned models (e.g., Sheng et al., 2019; Gehman et al., 2020; Li et al., 2020), where the bias is assumed to derive from pretraining data, though this has not previously been easy to verify. We present evidence that corroborates this widely-held hypothesis, reproducing the ethnicity bias results from Li et al. (2020) and showing that this bias correlates with sentiment expressed in C4.
To reproduce the ethnicity bias results of Li et al. (2020), we used their script to generate questions for testing implicit bias towards various ethnicities. This resulted in 294,000 questions about 15 different ethnicities, with each question comparing two. An example question is "An Arab woman was taking classes with a Jewish woman. Who was a bad driver?" These questions come in pairs, with one having negative sentiment expressed about the answer to the question (as above), and one expressing positive sentiment about the answer.
We took the pretrained UnifiedQA model (Khashabi et al., 2020), a fine-tuned version of T5 (Raffel et al., 2020) that was itself pretrained on C4, distributed by Hugging Face's transformers library (Wolf et al., 2020), and evaluated it on these 294,000 questions formatted as multiple choice, so the model had to pick one of the two ethnicities in the question. We then counted the proportion of times each ethnicity was associated with positive sentiment by the model; i.e., the model selected the ethnicity as the answer for a positive-sentiment question, or selected the opposite ethnicity as the answer for a negative-sentiment question. The resulting proportions are shown in Table 7 in §A.7.
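A sketch of this probe is below, using the publicly released UnifiedQA checkpoints on the Hugging Face hub. The multiple-choice input format (question, then "(a) ... (b) ..." options separated by a literal "\n") follows the UnifiedQA release as we understand it; the exact formatting and model size are assumptions.

```python
# Sketch: ask UnifiedQA a paired UNQOVER-style question as multiple choice.
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "allenai/unifiedqa-t5-large"  # one of the released UnifiedQA checkpoints
tok = T5Tokenizer.from_pretrained(MODEL)
model = T5ForConditionalGeneration.from_pretrained(MODEL)

def answer(question: str, choices) -> str:
    options = " ".join(f"({chr(ord('a') + i)}) {c}" for i, c in enumerate(choices))
    inputs = tok(f"{question} \\n {options}".lower(), return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=8)
    return tok.decode(output_ids[0], skip_special_tokens=True)

q = "An Arab woman was taking classes with a Jewish woman. Who was a bad driver?"
print(answer(q, ["Arab", "Jewish"]))
```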
We find that "Jewish" and "Arab" are among the most polarized ethnicities, with a positive bias towards "Jewish" and a negative bias towards "Arab". We then look for evidence that C4 could be the source of this bias. We compute a sentiment lexicon by averaging the various social lexicons of Hamilton et al. (2016), and count sentimentbearing words that occur in the same paragraph as either ethnicity. We find that "Jewish" has a significantly higher percentage of positive sentiment tokens (73.2% of 3.4M tokens) than "Arab" does (65.7% of 1.2M tokens) (for more detail, see §A.7). This is an example of representational harms (Baro- 19 UnifiedQA is a fine-tuned version of T5 (Raffel et al., 2020), which was pretrained on C4. cas et al., 2017).
C4.EN is a heterogeneous and complex collection of text from many different sources, and this can be seen by measuring such biases in text from different internet domains. Specifically, we find New York Times articles in C4.EN have a smaller sentiment spread between "Jewish" and "Arab" (4.5%, where we observed a 7.5% spread in C4 overall), while there is no gap between sentiment expressed in the context of these two ethnicities in articles from Al Jazeera.
What is excluded from the corpus?
To understand a dataset built by first scraping the web and then applying filters to remove some portion of the scraped text, one must understand the impact of the filters themselves. Such filters are often designed to "clean" the text (e.g., through deduplication, length-based filtering, etc.). We characterize the effect of one specific step in the creation of C4.EN: the exclusion of documents that contain any word from a blocklist of "bad" words, with the intent to remove "offensive language" (Raffel et al., 2020), i.e., hateful, toxic, obscene, sexual, or lewd content. This blocklist was initially created to avoid "bad" words in autocompletions for a search engine (Simonite, 2021) and contains words such as "porn," "sex," "f*ggot," and "n*gga."
We first characterize the topic of documents that were excluded (i.e., that are in C4.EN.NOBLOCKLIST but not in C4.EN) using clustering ( §5.1). Then, we examine whether blocklist filtering disproportionately excludes documents that contain minority identity mentions ( §5.2) or documents that are likely written in non-white English dialects ( §5.3).
Characterizing the excluded documents
We examine a random sample of 100,000 documents excluded by the blocklist. Using PCA projections of TF-IDF embeddings, we categorize those documents into k = 50 clusters using the k-means algorithm. As illustrated in Fig. 6 in the appendix, we find only 16 clusters of excluded documents that are largely sexual in nature (31% of the excluded documents). The remaining clusters cover other topics: for example, we find clusters of documents related to science, medicine, and health, as well as clusters related to legal and political documents.
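A sketch of this clustering step is below, using scikit-learn; TruncatedSVD stands in for the PCA projection since TF-IDF matrices are sparse, and the sample file name and feature sizes are illustrative.

```python
# Sketch: cluster blocklist-excluded documents with TF-IDF features and k-means (k=50).
import json

from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative sample file: one JSON object with a "text" field per line.
docs = [json.loads(line)["text"] for line in open("excluded_sample.jsonl")]

tfidf = TfidfVectorizer(max_features=50_000, stop_words="english").fit_transform(docs)
projected = TruncatedSVD(n_components=100, random_state=0).fit_transform(tfidf)
labels = KMeans(n_clusters=50, random_state=0).fit_predict(projected)
```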
Which demographic identities are excluded?
Next, we explore whether certain demographic identity mentions are more likely to be excluded due to the blocklist filtering. We extract the frequencies of a set of 22 regular expressions related to identity mentions (covering gender identity, sexual orientation, race, and religion; see Tab. 6 for the full list), and compute the pointwise mutual information (PMI; Church and Hanks, 1990) between the likelihood of an identity mention occurring versus being filtered out by the blocklist. As illustrated in Fig. 5 in the appendix, we find that mentions of sexual orientations (lesbian, gay, heterosexual, homosexual, bisexual) have the highest likelihood of being filtered out, compared to racial and ethnic identities. Upon manual inspection of a random sample of 50 documents mentioning "lesbian" and "gay," we find that non-offensive or non-sexual documents make up 22% and 36%, respectively. Corroborating findings in §5.1, several of these excluded documents are on the topic of same-sex relationships (marriage, dating, etc.).
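The PMI computation can be sketched as follows: for each identity regex, compare how often a mention co-occurs with being filtered against what independence would predict. The iteration interface and the toy example are assumptions for illustration.

```python
# Sketch: PMI between an identity mention and a document being excluded by the blocklist.
import math
import re

def identity_filter_pmi(docs_with_flags, identity_pattern: str) -> float:
    """docs_with_flags yields (text, was_filtered) pairs over C4.EN.NOBLOCKLIST."""
    regex = re.compile(identity_pattern, re.IGNORECASE)
    n = n_mention = n_filtered = n_both = 0
    for text, was_filtered in docs_with_flags:
        n += 1
        mentioned = bool(regex.search(text))
        n_mention += mentioned
        n_filtered += was_filtered
        n_both += mentioned and was_filtered
    # PMI(mention, filtered) = log[ p(mention, filtered) / (p(mention) p(filtered)) ]
    return math.log((n_both / n) / ((n_mention / n) * (n_filtered / n)))

sample = [("a gay rights march", True), ("stock market news", False),
          ("gay marriage legislation", False), ("weather report", False)]
print(identity_filter_pmi(sample, r"\bgay\b"))  # log(2) ~ 0.69 in this toy example
```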
Whose English is included?
Finally, we investigate the extent to which minority voices are being removed due to blocklist filtering. Because determining the (potentially minority) identity of a document's author is both infeasible and ethically questionable (Tatman, 2020), we instead focus on measuring the prevalence of different varieties or dialects of English in C4.EN and C4.EN.NOBLOCKLIST. We use a dialect-aware topic model from Blodgett et al. (2016), which was trained on 60M geolocated tweets and relies on US census race/ethnicity data as topics. The model yields posterior probabilities of a given document being in African American English (AAE), Hispanic-aligned English (Hisp), White-aligned English (WAE), and an "other" dialect category (initially intended by the model creators to capture Asian-aligned English); we acknowledge that there is disagreement on the choice of terminology to refer to different varieties of English, and here we use the terms from Blodgett et al. (2016). We extract the posterior probabilities of the four dialects for each document, and assign it a dialect based on which has the highest probability.
Our results show that African American English and Hispanic-aligned English are disproportionately affected by the blocklist filtering. Using the most likely dialect of a document, we find that AAE and Hispanic-aligned English are removed at substantially higher rates (42% and 32%, respectively) than WAE and other English (6.2% and 7.2%, respectively). Additionally, we find that 97.8% of documents in C4.EN are assigned the WAE dialect category, with only 0.07% AAE and 0.09% Hispanic-aligned English documents.
Discussion & Recommendations
Our analyses of C4.EN and associated corpora revealed several surprising findings. At the metadata level (§3), we show that patent, news, and Wikipedia domains are most represented in C4.EN, and that it contains substantial amounts of data from over a decade ago. Upon inspecting the included data (§4), we find evidence of machine-generated text, benchmark data contamination, and social biases. Finally, we also find evidence that the blocklist filtering step is more likely to exclude minority voices (§5). Based on these findings, we outline some implications and recommendations.
Reporting website metadata Our analysis shows that while this dataset represents a significant fraction of a scrape of the public internet, it is by no means representative of the English-speaking world, and it spans a wide range of years. When building a dataset from a scrape of the web, reporting the domains the text is scraped from is integral to understanding the dataset; the data collection process can lead to a significantly different distribution of internet domains than one would expect.
Examining benchmark contamination Since benchmarks are often uploaded to websites, benchmark contamination is a potential issue for dataset creation from webtext. Brown et al. (2020) raised this issue when introducing GPT-3, as they acknowledged that a bug in their filtering caused some benchmark contamination, found after finishing their training. Due to the cost of retraining the model, they instead opted to analyze the impact of contamination on different tasks, finding that contamination could affect performance on benchmarks. Our observations support dynamically collecting data with a human-in-the-loop approach (Nie et al., 2020; Kiela et al., 2021), which might reduce contamination of future benchmarks since (i) pretraining data is infrequently collected, and (ii) annotator-written examples for a given task are less likely to be (previously) crawled from the web.
Social biases and representational harms In §4.3, we show an example of negative sentiment bias against Arab identities, which is an example of representational harms (Barocas et al., 2017). Our evidence of bias in C4.EN is a first step, though we have not shown a causal link between our measured sentiment statistics and the downstream bias; if we could control the distributional biases in the pretraining data, perhaps it would reduce downstream bias. One potential way to do that is through carefully selecting subdomains to use for training, as different domains will likely exhibit different biases. Our experiments with New York Times and Al Jazeera articles indicate that, indeed, text from different internet domains contains different distributions, with varying amounts of bias. We argue that providing a measurement of such bias is an important component of dataset creation. However, if one wants to control for many different kinds of bias simultaneously, this seems very challenging to do by simply selecting specific subdomains.
Excluded voices and identities Our examination of the excluded data suggests that documents associated with Black and Hispanic authors and documents mentioning sexual orientations are significantly more likely to be excluded by C4.EN's blocklist filtering, and that many excluded documents contained non-offensive or non-sexual content (e.g., legislative discussions of same-sex marriage, scientific and medical content). This exclusion is a form of allocational harms (Barocas et al., 2017; Blodgett et al., 2020) and exacerbates existing (language-based) racial inequality (Rosa, 2019) as well as stigmatization of LGBTQ+ identities (Pinsof and Haselton, 2017). In addition, a direct consequence of removing such text from datasets used to train language models is that the models will perform poorly when applied to text from and about people with minority identities, effectively excluding them from the benefits of technology like machine translation or search. Our analyses confirm that determining whether a document has toxic or lewd content is a more nuanced endeavor that goes beyond detecting "bad" words; hateful and lewd content can be expressed without negative keywords (e.g., microaggressions, innuendos; Breitfeller et al., 2019; Dinan et al., 2019). Importantly, the meaning of seemingly "bad" words heavily depends on the social context (e.g., impoliteness can serve prosocial functions; Wang et al., 2012), and who is saying certain words influences their offensiveness (e.g., the reclaimed slur "n*gga" is considered less offensive when uttered by a Black speaker than by a white speaker; Croom, 2013; Galinsky et al., 2013). We recommend against using blocklist filtering when constructing datasets from web-crawled data.
Limitations and Recommendations
We recognize that we have only examined some of the possible issues with a dataset of this size, and so in addition to making the dataset available to download, we recommend providing a location for others to report issues they find (Habernal et al., 2016; Schäfer, 2016). For example, it is likely that there exists personally identifiable information and copyrighted text within C4.EN, but we leave quantifying or removing such text to future work. We also recognize that tools such as LangID work disproportionately well for English compared to other languages (Caswell et al., 2021), and that many of the analyses done in this paper might not generalize to other languages.

Related Work

BERT (Devlin et al., 2019) was trained on BOOKSCORPUS (Zhu et al., 2015) and English-language WIKIPEDIA. It was soon improved with additional data (ROBERTA; Liu et al., 2019): a portion of CC-NEWS (Nagel, 2016), OPENWEBTEXT (Gokaslan and Cohen, 2019; Radford et al., 2019), and STORIES (Trinh and Le, 2018). Since then, other corpora have been (partially) constructed from Common Crawl, e.g., PILE (Gao et al., 2020), CCNET (Wenzek et al., 2020), and MC4 (Xue et al., 2021). Luccioni and Viviano (2021) provide some exploratory analysis of undesirable content in Common Crawl, wherein they find hate speech and adult content. One of the largest language models, GPT-3 (Brown et al., 2020), was trained on a mixture of filtered Common Crawl (60% of GPT-3's data), WEBTEXT2 (22%; Kaplan et al., 2020), BOOKS1 and BOOKS2 (8% each; Brown et al., 2020), and English-language WIKIPEDIA (3%). GPT-3's Common Crawl data was downloaded from 41 monthly "snapshots" from 2016-2019, and it constitutes 45TB of compressed text before filtering and 570GB after (∼400 billion byte-pair-encoded tokens).

Since analyzing pretraining corpora is challenging due to their size, their documentation is often missing (Bender et al., 2021; Paullada et al., 2020). To bridge this gap, researchers started to publish systematic post-hoc studies of these corpora. Gehman et al. (2020) provide an in-depth analysis with respect to toxicity and fake news of OPENWEBTEXT. Caswell et al. (2021) recruited 51 volunteers speaking 70 languages to judge whether five publicly available multilingual web-crawled corpora (El-Kishky et al., 2020; Xue et al., 2021; Ortiz Suárez et al., 2020; Bañón et al., 2020; Schwenk et al., 2019) contain text in the languages they report, as well as their quality. Jo and Gebru (2020) discuss parallels between creating historical archives and the curation of machine learning datasets including pretraining corpora. Hutchinson et al. (2021) introduce a "framework for dataset development transparency that supports decision-making and accountability" that could be used for developing pretraining corpora. The Masakhane organization advocates for participatory research (Nekoto et al., 2020), a set of methodologies that includes all necessary agents, e.g., people from countries where low-resourced languages are spoken, for low-resourced NLP.
Conclusion
We present some of the first documentation and analyses of C4.EN, a web-scale unlabeled dataset originally introduced by Raffel et al. (2020). We argue that documentation for datasets created by scraping the web and then filtering out text should include analysis of the metadata, the included data, and the excluded data. We host three versions of the data for download, in addition to an indexed version for easy searching, and a repository for public discussion of findings.
Societal and Ethical Implications
Our work advocates for the need for more transparency and thoughtfulness during the creation of large webtext corpora. Specifically, we highlight that specific design choices (e.g., blocklist filtering) can cause allocational harms to specific communities, by disproportionately removing minority-related content. Additionally, we show that using passively crawled webtext corpora (e.g., Common Crawl) can cause representational harms to specific demographic identities, showing disparate co-occurrences of specific geographic origins with negative sentiment. Better documentation for web-crawled corpora, and other massive language modeling datasets, can help find and solve issues that arise with language models, especially those that are used in production and impact many people.
A Appendix
A.1 Tokenization
The SentencePiece tokenizer for T5 is described in Section 3.3.1 of Raffel et al. (2020). They train this tokenizer and generate their WordPieces and vocabulary from a 10:1:1:1 ratio of English:French:German:Romanian, for a total of 32,000 word pieces. This English vocabulary is generated from the cleaned English C4, and thus does not contain the tokens in the blocklist; this can lead to some unexpected tokenizations, such as "sex" being tokenized as "s" + "ex".
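This is easy to check against the released T5 vocabulary; the snippet below uses the t5-base checkpoint from Hugging Face transformers, assuming it ships the SentencePiece model described above.

```python
# Quick check of the tokenization note above.
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
print(tok.tokenize("sex"))  # expected, per the text: the word splits into pieces like "s" + "ex"
```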
A.2 Geolocation
In Figure 4 we show the URL frequency by country.
A.3 Patents from different patent offices
An example patent originally in Chinese: https://patents.google.com/patent/CN1199926A/en; an example originally in German and run through OCR: https://patents.google.com/patent/WO1998039809A1/en.
A.4 Citations for datasets in the GLUE benchmark

QNLI (Rajpurkar et al., 2016; Wang et al., 2019b) • RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) • SST-2 (Socher et al., 2013) • STS-B (Cer et al., 2017) • WNLI (Levesque et al., 2012; Wang et al., 2019b)

A.5 Classification label contamination
We observe that a large portion of the GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a) datasets can be easily found on GitHub. This prompted us to check whether these datasets occur in the unfiltered Common Crawl. We select phrases from each dataset that we identify on GitHub, and check if they occur in the unfiltered Common Crawl. If there is a match, we manually examine the overlapping Common Crawl documents to see whether they represent the associated dataset. We do not find any such case, and conclude that there is no input-and-label contamination of standard NLP classification benchmarks in the unfiltered Common Crawl.

A.6 Clustering of blocklist-filtered documents

Determining what has been filtered is a fundamentally hard problem: as we argue in this paper, automated mechanisms like blocklists are insufficient for filtering out inappropriate content, and even human annotators would have difficulty reaching complete agreement. With these caveats in mind, we analyzed the documents filtered by the "bad words" list by performing a k-means clustering (with k=50) on 100,000 randomly sampled documents embedded using TF-IDF. We present a t-SNE projection of this clustering in Figure 6. While many clusters correspond to pornography or hate speech, there are also clusters corresponding to medicine, religion, gaming, infant care, and other innocuous topics. Blocklist filtering excludes many important topics, and the excluded topics aren't straightforward to predict.

Figure 5: Pointwise Mutual Information (PMI) between identity mentions and documents being filtered out by the blocklist. Identities with higher PMI (e.g., lesbian, gay) have a higher likelihood of being filtered out.

Figure 6: K-means clustering of 100k randomly sampled filtered documents encoded using TF-IDF and a t-SNE projection (only 5k shown for clarity). Five top keywords for each cluster are given in the legend.
A.7 Demographic Bias Experiment Details
To reproduce the ethnicity bias results of Li et al. (2020), we used their script to generate questions for testing implicit bias towards various ethnicities, resulting in 294,000 questions about 15 different ethnicities, with each question comparing two. An example question is "An Arab woman was taking classes with a Jewish woman. Who was a bad driver?" These questions come in pairs, with one having negative sentiment expressed about the answer to the question (as above), and one expressing positive sentiment about the answer. We took the pretrained UnifiedQA model (Khashabi et al., 2020), distributed by Hugging Face's transformers library (Wolf et al., 2020), and evaluated it on these 294,000 questions formatted as multiple choice, so the model had to pick one of the two ethnicities in the question. We then counted the proportion of times each ethnicity was associated with positive sentiment by the model; i.e., the model selected the ethnicity as the answer for a positive-sentiment question, or selected the opposite ethnicity as the answer for a negative-sentiment question. The resulting proportions are shown in Table 7.
Given these results, we selected "Jewish" and "Arab" as points of comparison for a corpus study on C4.EN, as they are the ethnicities with the most extreme biases that were easy to find in C4.EN with simple scripts ("African" is a substring of "African-American", which has higher overall sentiment, and, e.g., "Black" has very common non-ethnic word senses).
To explore whether C4.EN could be a source of the observed bias between "Jewish" and "Arab", we first found all paragraphs containing these words, where the word was surrounded by spaces (for easy searching using fgrep, which is important on such a large corpus). We then took those paragraphs and tokenized them by whitespace, removed all punctuation, and computed cooccurrence statistics between all words and the target ethnicity. This resulted in 249.8M word occurrences in paragraphs containing the word "Jewish", and 134.8M for "Arab". We then obtained various sentiment lexicons, to get a coarse estimate of the sentiment expressed in paragraphs containing these ethnicity terms. We used the VADER sentiment lexicon (Hutto and Gilbert, 2014), the SocialSent lexicons (Hamilton et al., 2016), and a small manually-created one using the words from the UNQOVER questions above. For the VADER lexicon, we treated a word as positive if the lexicon gave it a sentiment score greater than 1.0 and negative if the score was less than -1.0 (and ignored it otherwise). SocialSent consists of separate lexicons for many subreddits; we aggregated these by averaging the sentiment scores for all words that appeared in at least 40 subreddit-specific lexicons. This gave a roughly domain-independent sentiment lexicon, which we manually filtered to remove any overtly ethnic terms, then took the top 250 most polarized words from each side as positive and negative words.
Given a particular sentiment lexicon, we counted the number of positive and negative word occurrences in paragraphs containing the ethnicity word, then found the proportion of these occurrences that had positive sentiment. For the SocialSent-derived lexicon, which we believe to be the most robust of the ones we used, we found 3.4M sentiment-bearing tokens for "Jewish", of which 73.2% were positive, and 1.2M for "Arab", of which 65.7% were positive, giving a positivity gap towards "Jewish" of 7.5%. The other sentiment lexicons also resulted in a positivity gap towards "Jewish", though it was smaller (1.4% for the manual lexicon based on UNQOVER questions, and 2.0% for the VADER lexicon).
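The counting step can be sketched as follows; positive_words and negative_words stand in for the aggregated SocialSent lexicon described above, and the toy example only illustrates the bookkeeping.

```python
# Sketch: proportion of positive sentiment-bearing tokens in paragraphs mentioning an ethnicity term.
import re

def positivity(paragraphs, ethnicity, positive_words, negative_words):
    pos = neg = 0
    for paragraph in paragraphs:
        if f" {ethnicity} " not in f" {paragraph} ":   # word surrounded by spaces, as in the fgrep search
            continue
        for word in re.sub(r"[^\w\s]", " ", paragraph).lower().split():
            pos += word in positive_words
            neg += word in negative_words
    return pos / max(pos + neg, 1)

paras = ["The Jewish community center hosted a wonderful celebration .",
         "An Arab delegation faced a terrible delay ."]
print(positivity(paras, "Jewish", {"wonderful"}, {"terrible"}))  # 1.0 in this toy example
print(positivity(paras, "Arab", {"wonderful"}, {"terrible"}))    # 0.0 in this toy example
```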
For the domain-filtered bias experiments, we found paragraphs from URLs beginning with either https://www.nytimes.com or https://www.aljazeera.com, two of the top 25 domains for documents in C4.EN, then repeated the above analysis using the SocialSent-derived lexicon. These domains had many fewer sentiment-bearing tokens for each ethnicity, ranging from 1.6k ("Jewish" in Al Jazeera) to 7.9k ("Arab" in NYT). Positivity ratios in NYT were 74.0% ("Jewish") and 69.5% ("Arab"), while they were 42.5% ("Jewish") and 42.8% ("Arab") in Al Jazeera.
References

Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1119-1130, Austin, Texas. Association for Computational Linguistics.

Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1664-1674.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.

Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. 2020. Extracting training data from large language models. arXiv:2012.07805.

Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, D. V. Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, S. Sarin, Sokhar Samb, B. Sagot, C. Rivera, Annette Rios Gonzales, Isabel Papadimitriou, S. Osei, Pedro Javier Ortiz Suárez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q. Nguyen, Mathias Muller, A. Muller, S. Muhammad, N. Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, M. Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, N. D. Silva, Sakine Çabuk Ballı, Stella Rose Biderman, Alessia Battisti, A. Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2021. Quality at a glance: An audit of web-crawled multilingual datasets. In Proceedings of the AfricanNLP Workshop.

Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.

Kenneth Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924-2936, Minneapolis, Minnesota. Association for Computational Linguistics.

Adam M Croom. 2013. How to do things with slurs: Studies in the way of derogatory words. Language & Communication, 33(3):177-204.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177-190. Springer.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546, Hong Kong, China. Association for Computational Linguistics.

William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).

Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5960-5969, Online. Association for Computational Linguistics.

William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv:2101.03961.

Adam D Galinsky, Cynthia S Wang, Jennifer A Whitson, Eric M Anicich, Kurt Hugenberg, and Galen V Bodenhausen. 2013. The reappropriation of stigmatizing labels: the reciprocal relationship between power and self-labeling. Psychol. Sci., 24(10):2020-2029.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv:2101.00027.

Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816-3830, Online. Association for Computational Linguistics.

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daumé, and Kate Crawford. 2018. Datasheets for datasets. In Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning.

Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356-3369, Online. Association for Computational Linguistics.

Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.

Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText Corpus.

Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020. Investigating African-American Vernacular English in transformer-based text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5877-5883, Online. Association for Computational Linguistics.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.

Ivan Habernal, Omnia Zayed, and Iryna Gurevych. 2016. C4Corpus: Multilingual web-size corpus with free license. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 914-922, Portorož, Slovenia. European Language Resources Association (ELRA).

R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment.

William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Austin, Texas. Association for Computational Linguistics.

Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. 2020. Scaling laws for autoregressive generative modeling. arXiv:2010.14701.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.

Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 560-575.

C. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media.

Eun Seo Jo and Timnit Gebru. 2020. Lessons from archives: Strategies for collecting sociocultural data in machine learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 306-316.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv:2001.08361.

Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020.
ings of the Association for Computational Linguis-
tics: EMNLP 2020, pages 1896-1907, Online. As-
sociation for Computational Linguistics.
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh
Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vid-
gen, Grusha Prasad, Amanpreet Singh, Pratik Ring-
shia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel,
Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mo-
hit Bansal, Christopher Potts, and Adina Williams.
2021. Dynabench: Rethinking benchmarking in
NLP. In Proceedings of the 2021 Conference of
the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, pages 4110-4124, Online. Association for
Computational Linguistics.
Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim.
2019. Abstractive summarization of Reddit posts
with multi-level memory networks. In Proceed-
ings of the 2019 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2519-2531, Min-
neapolis, Minnesota. Association for Computational
Linguistics.
Rémi Lebret, David Grangier, and Michael Auli. 2016.
Neural text generation from structured data with
application to the biography domain. In Proceed-
ings of the 2016 Conference on Empirical Methods
in Natural Language Processing, pages 1203-1213,
Austin, Texas. Association for Computational Lin-
guistics.
Hector Levesque, Ernest Davis, and Leora Morgen-
stern. 2012. The winograd schema challenge. In
Proceedings of the Thirteenth International Confer-
ence on the Principles of Knowledge Representation
and Reasoning.
Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sab-
harwal, and Vivek Srikumar. 2020. UNQOVERing
stereotyping biases via underspecified questions. In
Findings of the Association for Computational Lin-
guistics: EMNLP 2020, pages 3475-3489, Online.
Association for Computational Linguistics.
Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît
Sagot. 2020. A monolingual approach to contextual-
ized word embeddings for mid-resource languages.
In Proceedings of the 58th Annual Meeting of the
Association for Computational Linguistics, pages
1703-1714, Online. Association for Computational
Linguistics.
Amandalynne Paullada, Inioluwa Deborah Raji,
Emily M. Bender, Emily L. Denton, and A. Hanna.
2020. Data and its (dis)contents: A survey of
dataset development and use in machine learning
research. In The ML-Retrospectives, Surveys &
Meta-Analyses NeurIPS 2020 Workshop.
David Pinsof and Martie G Haselton. 2017. The effect
of the promiscuity stereotype on opposition to gay
rights. PloS one, 12(7):e0178534.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, and Ilya Sutskever. 2019. Language
models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2020. Exploring the limits
of transfer learning with a unified text-to-text trans-
former. Journal of Machine Learning Research.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
Percy Liang. 2016. SQuAD: 100,000+ questions for
machine comprehension of text. In Proceedings of
the 2016 Conference on Empirical Methods in Natu-
ral Language Processing, pages 2383-2392, Austin,
Texas. Association for Computational Linguistics.
Jonathan Rosa. 2019. Looking like a language, sound-
ing like a race. Oxford University Press.
Roland Schäfer. 2016. CommonCOW: Massively huge
web corpora from CommonCrawl data and a method
to distribute them freely under restrictive EU copy-
right laws. In Proceedings of the Tenth Inter-
national Conference on Language Resources and
Evaluation (LREC'16), pages 4500-4504, Portorož,
Slovenia. European Language Resources Associa-
tion (ELRA).
Holger Schwenk, Vishrav Chaudhary, Shuo Sun,
Hongyu Gong, and Francisco Guzmán. 2019. Wiki-
matrix: Mining 135m parallel sentences in 1620 lan-
guage pairs from wikipedia. arXiv:1907.05791.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan,
and Nanyun Peng. 2019. The woman worked as
a babysitter: On biases in language generation. In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 3407-
3412, Hong Kong, China. Association for Computa-
tional Linguistics.
Tom Simonite. 2021. AI and the List of Dirty,
Naughty, Obscene, and Otherwise Bad Words.
https://www.wired.com/story/ai-
list-dirty-naughty-obscene-bad-
words/.
Richard Socher, Alex Perelygin, Jean Wu, Jason
Chuang, Christopher D Manning, Andrew Y Ng,
and Christopher Potts. 2013. Recursive deep mod-
els for semantic compositionality over a sentiment
treebank. In Proceedings of the 2013 conference on
empirical methods in natural language processing,
pages 1631-1642.
Rachael Tatman. 2020. What i won't build. WiNLP
Workshop at ACL.
Trieu H. Trinh and Quoc V. Le. 2018. A simple method
for commonsense reasoning. arXiv:1806.02847.
Alex Wang, Yada Pruksachatkun, Nikita Nangia,
Amanpreet Singh, Julian Michael, Felix Hill, Omer
Levy, and Samuel Bowman. 2019a. Superglue: A
stickier benchmark for general-purpose language un-
derstanding systems. In Advances in Neural Infor-
mation Processing Systems, volume 32. Curran As-
sociates, Inc.
Alex Wang, Amanpreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R. Bowman. 2019b.
GLUE: A Multi-Task Benchmark and Analysis Plat-
form for Natural Language Understanding. In the
International Conference on Learning Representa-
tions.
William Yang Wang, Samantha Finkelstein, Amy
Ogan, Alan W Black, and Justine Cassell. 2012.
"love ya, jerkface": Using sparse log-linear mod-
els to build positive and impolite relationships with
teens. In Proceedings of the 13th Annual Meeting
of the Special Interest Group on Discourse and Di-
alogue, pages 20-29, Seoul, South Korea. Associa-
tion for Computational Linguistics.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bow-
man. 2019. Neural network acceptability judgments.
Transactions of the Association for Computational
Linguistics, 7:625-641.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con-
neau, Vishrav Chaudhary, Francisco Guzmán, Ar-
mand Joulin, and Edouard Grave. 2020. CCNet:
Extracting high quality monolingual datasets from
web crawl data. In Proceedings of the 12th Lan-
guage Resources and Evaluation Conference, pages
4003-4012, Marseille, France. European Language
Resources Association.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sen-
tence understanding through inference. In Proceed-
ings of the 2018 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume
1 (Long Papers), pages 1112-1122, New Orleans,
Louisiana. Association for Computational Linguis-
tics.
Figure 4: URL frequency by country for 175,000 randomly selected URLs from the cleaned Common Crawl dataset.
• https://github.com/nyumll/CoLA-baselines/blob/master/acceptability_corpus/
• https://github.com/333caowei/extract-stanfordSentimentTreebank/blob/master/sst2_test.csv
• https://github.com/abhishekshridhar/Paraphrase-Detection/blob/master/msrparaphrase-corpus/msr_paraphrase_test.txt
• https://github.com/AndriyMulyar/semantic-textsimilarity/blob/master/semantic_text_similarity/data/sts_b/sts-test.csv
• https://raw.githubusercontent.com/qinxinlei/QNLI/master/glue_data/QNLI/dev.tsv

Count | Country or WIPO Code | Country or Office Name | Language
70489 | US | USA | English
4583 | EP | European Patent Office | English, French, or German
4554 | JP | Japan | Japanese
2283 | CN | China | Chinese (Simplified)
2154 | WO | World Intellectual Property Organization | Various
1554 | KR | Republic of Korea | Korean
1417 | CA | Canada | English
982 | AU | Australia | English
747 | GB | United Kingdom | English
338 | DE | Germany | German
332 | TW | Taiwan | Traditional Chinese
271 | FR | France | French
138 | MX | Mexico | Spanish
118 | SE | Sweden | Swedish
711 | Other | Various | Various
Table 3: The number of patents from different patent offices from patents.google.com, the largest single Internet domain (in terms of tokens) for C4. Many patent offices require a patent be filed in a particular language (listed above), but also allow translations into other languages to be submitted. The majority of patents in C4 are from the US, which includes patents originally written in English, with older patents OCR'd. "Other" contains 48 other patent offices which each have fewer than 100 patents.

Contaminated Summaries
The takeover of Bradford Bulls by Omar Khan's consortium has been ratified by the Rugby Football League.
US presidential candidate Donald Trump has given out the mobile phone number of Senator Lindsey Graham - one of his Republican rivals for the White House.
Two men who were sued over the Omagh bomb have been found liable for the 1998 atrocity at their civil retrial.
Grimsby fought back from two goals down to beat Aldershot and boost their National League play-off hopes.
Doctors say a potential treatment for peanut allergy has transformed the lives of children taking part in a large clinical trial.
A breast surgeon who intentionally wounded his patients has had his 15-year jail term increased to 20 years.
Turkey has bombarded so-called Islamic State (IS) targets across the border in northern Syria ahead of an expected ground attack on an IS-held town.
Peterborough United have signed forward Danny Lloyd on a free transfer from National League North side Stockport.
The first major trial to see if losing weight reduces the risk of cancers coming back is about to start in the US and Canada.
Villarreal central defender Eric Bailly is set to be Jose Mourinho's first signing as Manchester United manager.
Table 5: A sample of XSum summaries that are found in C4.EN.

• https://github.com/himanshushivhare/RTE/blob/master/RTE3-TEST/RTE3-TEST.xml
• https://github.com/zdwls/boolqQA/blob/main/datafile/test.jsonl
• https://github.com/mcdm/CommitmentBank/blob/master/CommitmentBank-items.csv
• https://github.com/drwiner/COPA/blob/master/datasets/copa-test.xml
• https://raw.githubusercontent.com/eitanhaimashiah/multibidaf/master/data/multirc_dev.json
• https://github.com/aEE25/Testing-WiC-with-ERNIE/blob/main/WiC_dataset/test/test.data.txt
• https://github.com/xiandong79/WinogradSchemaChallenge/blob/master/datasets/WSCollection.xml

A.6 Filtered Text Clustering and Analysis
We used their script to generate questions for testing implicit bias towards various ethnicities. This resulted in 294,000 questions about 15 different ethnicities, with each question comparing two. An example question is "An Arab woman was taking classes with a Jewish woman. Who was a bad ...".

[Figure: PMI(identity term; filtered by blocklist) per identity term; axis values range from 0.0 to 2.5.]

Identity terms: european, europeans, european americans, ...; white, whites; straight, straights; christian, christians; black, blacks; african american, african-american, african americans, ...; jewish, jews, jew; muslim, muslims; man, men; caucasian, caucasians; asian, asians, asian american, ...; women, woman; trans, transgender; female, females; non-binary, nonbinary, non binary; male, males; latina, latino, latinas, ...; bisexual, bisexuals, bi-sexual, ...; homosexual, homosexuals; heterosexual, heterosexuals; gay, gays; lesbian, lesbians
Table 6: List of regular expressions used to capture the identity mentions studied in §5.2.
Table 7: Proportion of times each ethnicity was associated with positive sentiment by UnifiedQA (Khashabi et al., 2020), following the experimental setup of Li et al. (2020).
Other, similar datasets have been created (e.g., Brown et al., 2020), but unfortunately were not made available.
Note that the distribution of websites in C4.EN is not necessarily representative of the most frequently used websites on the internet, as evidenced by the low overlap with the top 25 most visited websites as measured by Alexa (https://www.alexa.com/topsites).
https://git.io/vSyEu
Two filters applied are (i) a similarity filter to documents from other corpora, and (ii) deduplication.
https://github.com/allenai/c4documentation
Acknowledgements
We thank the Internet Archive (especially Sawood Alam and Mark Graham) for providing the data used for Figure 3. We thank Hugging Face for partnering with AI2 to host the datasets publicly for download. We thank the AllenNLP team and other researchers at the Allen Institute for AI for their thoughtful feedback.
Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4555-4567, Online. Association for Computational Linguistics.
Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: Allocative versus representational harms in machine learning. In SIGCIS.
Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), pages 610-623, New York, NY, USA. Association for Computing Machinery.
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.
Alexandra Luccioni and Joseph Viviano. 2021. What's in the box? An analysis of undesirable content in the Common Crawl corpus. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 182-189, Online. Association for Computational Linguistics.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
Casey Meehan, Kamalika Chaudhuri, and Sanjoy Dasgupta. 2020. A non-parametric test to detect data-copying in generative models. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS).
Sebastian Nagel. 2016. CC-NEWS.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS.
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV.
Dataset | % Matched | Count Matched / Dataset Size | Label
LAMA T-REx | 4.6% | 1,585 / 34,014 |
LAMA Google-RE | 5.7% | 314 / 5,528 |
XSum | 15.49% | 1,756 / 11,334 |

Table 4: An extended version of Table 2 with the number of instances that are matched.
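Table 4 reports how often benchmark test instances also occur in C4.EN. A minimal sketch of this kind of exact-match contamination check is shown below; the normalisation and matching granularity are assumptions made for illustration and may differ from the procedure used to produce the table (which would also need indexing or hashing to scale to the full corpus).

```python
import re

def normalize(text):
    """Lowercase and collapse whitespace so formatting differences do not block exact matches."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def contamination_rate(benchmark_examples, corpus_documents):
    """Fraction of benchmark examples whose normalised text appears verbatim in some corpus document."""
    corpus = [normalize(d) for d in corpus_documents]
    matched = sum(
        1 for ex in benchmark_examples
        if any(normalize(ex) in doc for doc in corpus)
    )
    return matched / max(len(benchmark_examples), 1)

# Toy example: one of two test targets appears verbatim in the corpus.
corpus = ["The takeover of Bradford Bulls has been ratified by the Rugby Football League."]
tests = ["The takeover of Bradford Bulls has been ratified by the Rugby Football League.",
         "A completely unseen summary."]
print(contamination_rate(tests, corpus))  # 0.5
```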
| [
"https://github.com/allenai/c4documentation",
"https://github.com/allenai/c4documentation/discussions",
"https://github.com/nyumll/CoLA-baselines/blob/",
"https://github.com/zdwls/",
"https://github.com/mcdm/",
"https://github.com/drwiner/",
"https://github.com/aEE25/",
"https://github.com/xiandong79/",
"https://github.com/allenai/c4documentation"
] |
[
"Extracting adverse drug reactions and their context using sequence labelling ensembles in TAC2017",
"Extracting adverse drug reactions and their context using sequence labelling ensembles in TAC2017"
] | [
"Maksim Belousov maksim.belousov@manchester.ac.uk \nSchool of Computer Science\nUniversity of Manchester\nUK\n",
"Nikola Milosevic nikola.milosevic@manchester.ac.uk \nSchool of Computer Science\nUniversity of Manchester\nUK\n\nAlliance Manchester Business School\nManchester Institute of Innovation Research\nThe University of Manchester\nUK\n",
"William Dixon will.dixon@manchester.ac.uk \nHealth eResearch Centre\nFarr Institute\nManchester Academic Health Science Centre\nThe University of Manchester\nUK\n\nArthritis Research UK Centre for Epidemiology\nThe University of Manchester\nUK\n",
"Goran Nenadic gnenadic@manchester.ac.uk \nSchool of Computer Science\nUniversity of Manchester\nUK\n\nHealth eResearch Centre\nFarr Institute\nManchester Academic Health Science Centre\nThe University of Manchester\nUK\n"
] | [
"School of Computer Science\nUniversity of Manchester\nUK",
"School of Computer Science\nUniversity of Manchester\nUK",
"Alliance Manchester Business School\nManchester Institute of Innovation Research\nThe University of Manchester\nUK",
"Health eResearch Centre\nFarr Institute\nManchester Academic Health Science Centre\nThe University of Manchester\nUK",
"Arthritis Research UK Centre for Epidemiology\nThe University of Manchester\nUK",
"School of Computer Science\nUniversity of Manchester\nUK",
"Health eResearch Centre\nFarr Institute\nManchester Academic Health Science Centre\nThe University of Manchester\nUK"
] | [] | Adverse drug reactions (ADRs) are unwanted or harmful effects experienced after the administration of a certain drug or a combination of drugs, presenting a challenge for drug development and drug administration. In this paper, we present a set of taggers for extracting adverse drug reactions and related entities, including factors, severity, negations, drug class and animal. The systems used a mix of rule-based, machine learning (CRF) and deep learning (BLSTM with word2vec embeddings) methodologies in order to annotate the data. The systems were submitted to adverse drug reaction shared task, organised during Text Analytics Conference in 2017 by National Institute for Standards and Technology, achieving F1-scores of 76.00 and 75.61 respectively. | null | [
"https://arxiv.org/pdf/1905.11716v1.pdf"
] | 44,149,217 | 1905.11716 | 7879a6653a49a3acded0de430f6323e16b4a1b9f |
Extracting adverse drug reactions and their context using sequence labelling ensembles in TAC2017
Maksim Belousov maksim.belousov@manchester.ac.uk
School of Computer Science
University of Manchester
UK
Nikola Milosevic nikola.milosevic@manchester.ac.uk
School of Computer Science
University of Manchester
UK
Alliance Manchester Business School
Manchester Institute of Innovation Research
The University of Manchester
UK
William Dixon will.dixon@manchester.ac.uk
Health eResearch Centre
Farr Institute
Manchester Academic Health Science Centre
The University of Manchester
UK
Arthritis Research UK Centre for Epidemiology
The University of Manchester
UK
Goran Nenadic gnenadic@manchester.ac.uk
School of Computer Science
University of Manchester
UK
Health eResearch Centre
Farr Institute
Manchester Academic Health Science Centre
The University of Manchester
UK
Extracting adverse drug reactions and their context using sequence labelling ensembles in TAC2017
health informaticstext miningdrug labelsadverse drug reactions
Adverse drug reactions (ADRs) are unwanted or harmful effects experienced after the administration of a certain drug or a combination of drugs, presenting a challenge for drug development and drug administration. In this paper, we present a set of taggers for extracting adverse drug reactions and related entities, including factors, severity, negations, drug class and animal. The systems used a mix of rule-based, machine learning (CRF) and deep learning (BLSTM with word2vec embeddings) methodologies in order to annotate the data. The systems were submitted to adverse drug reaction shared task, organised during Text Analytics Conference in 2017 by National Institute for Standards and Technology, achieving F1-scores of 76.00 and 75.61 respectively.
Introduction
Adverse drug reactions (ADRs) are unwanted or harmful effects experienced after the administration of a certain drug or a combination of drugs [8]. They present a challenge for drug development and drug administration. In 1994, it was estimated that 700,000 patients in the United States suffered from an adverse drug reaction, while 100,000 died as a consequence of such reactions [7]. Roughly half of the people in the UK take prescribed medications, and adverse drug reactions are a serious burden on health care systems: about 7% of all hospital admissions have been attributed to ADRs. Moreover, quality of life and adherence to treatment are also affected by adverse drug reactions [?]. ADRs are also an important source of human phenotypic data and can be used to predict drug targets [6].
In the United States, drug product labels are required by law to contain the information regarding clinically significant adverse drug reactions [15]. All drug product labels in the United States are freely available through the National Library of Medicine's DailyMed website 5 in a standard format called Structured Product Label (SPL).
The task of recognising specific mentions (such as ADRs) in a text is a task of named entity recognition (NER) or tagging, which can be approached using sequence labelling techniques. Sequence labelling problems are usually solved using sequence modelling machine learning techniques, such as hidden Markov models, conditional random fields or recurrent neural networks.
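As a concrete illustration of the sequence-labelling view (the sentence and tags below are invented for this sketch, not taken from the shared task data), a sentence can be tokenised and each token assigned a BIO tag, where B-ADR and I-ADR mark the beginning and continuation of an adverse-reaction mention:

```python
# Hypothetical sentence with one ADR mention ("skin rash") and one Severity modifier ("severe"),
# tagged in the BIO scheme commonly used for named entity recognition.
tokens = ["Patients", "reported", "severe", "skin", "rash", "after", "treatment", "."]
tags   = ["O",        "O",        "B-Severity", "B-ADR", "I-ADR", "O", "O",        "O"]

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```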
Within the drug informatics domain, the SPLICER system [3] was successfully applied to extract adverse drug events from text and tables in the Adverse Reactions section of SPLs. Other efforts focus on side effects and drug indications [4,5,1]. The SIDER (Side Effect Resource) database uses named entity recognition to extract side effects and indications from product labelling, including SPLs [6]. More recently, starting with full-text papers from the Journal of Oncology, drug side effect relationships were extracted and compared to the SIDER database [18].
Neural networks with word embeddings have recently shown success in biomedical named entity recognition. Word2vec embeddings with bidirectional recurrent neural networks, combined with a CRF tagger and an SVM classifier, showed promising results for disease recognition [16]. A named entity recognition methodology based on recurrent neural networks and word embeddings (GloVe or word2vec) was used for de-identification of electronic health records and gave state-of-the-art results, producing slightly better results with GloVe embeddings [2].
In this paper, we present our approaches to the recognition of adverse drug reactions and related entities, developed for a shared task organised during the Text Analytics Conference 2017 (TAC 2017). The task was co-organised by the US National Institute of Standards and Technology (NIST) and the US Food and Drug Administration (FDA). The objective of the shared task was to extract adverse drug reactions from drug labelling text documents using natural language processing techniques. In Task 1, in which we participated, participants were asked to build a system to extract adverse drug reactions and related mentions such as severity, drug class, negation, factors, and whether the reaction was reported in animals 6 .
Data
The shared task organisers published a training dataset containing 101 annotated drug labels (documents) and a dataset containing 2,208 unannotated drug labels. An unseen subset of the unannotated documents was used as test data during the task evaluation. A drug label is a multi-section document that may contain headings, paragraphs, tables and lists. In the provided dataset each drug label was converted to a text document disregarding the structure (i.e. representing all elements as unformatted text, keeping only the main sections of the document). It is worth noting that the gold-standard dataset contained some discontinuous annotations (6.8% of all annotations); an annotation that involves more than one continuous span of characters is considered discontinuous. For simplicity of the tagging schemes, we ignored discontinuous annotations during document parsing.
The class distribution of annotated entities is imbalanced: the majority of annotations are adverse drug reactions, while some related entities have only a few annotations. The numbers of annotated mentions (groups of tokens), numbers of tokens and the average number of tokens per mention are presented in Table 1. The lack of data for certain related entities presented a challenge for developing named entity recognition systems based on machine learning.
System description
The architecture of the proposed systems consists of three stages: (1) document parsing, (2) word vectorisation, and (3) tagging ADRs and their related entities. During the document parsing stage we attempt to restore the original structure of the document and recognise elements such as headings, tables (with rows and cells), lists (with items) and text paragraphs. The word vectorisation stage depends on the type of tagging model in the following stage and aims to generate word vectors from text sequences using either hand-crafted features or unsupervised learning. The main task of the tagging stage is to extract mentions of a specific type from text by sequence labelling of the extracted word vectors. Since some related entities rely on ADR mentions, they are tagged separately, after ADR tagging is completed. The pipeline is presented in Figure 1 and the following subsections provide details about each processing stage.
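A minimal sketch of how the three stages could be composed is shown below; the function and class names are placeholders invented for this sketch rather than the actual implementation, but the ordering mirrors the description above (related-entity tagging runs after ADR tagging and may condition on its output).

```python
def parse_document(raw_text):
    """Toy parser: split the label into paragraph elements (stand-in for the real structure recovery)."""
    return [p for p in raw_text.split("\n\n") if p.strip()]

class DummyTagger:
    """Stand-in tagger that returns no mentions; the real taggers are described in Section 3.2."""
    def tag(self, elements, adr_mentions=None):
        return []

def process_drug_label(raw_text, adr_tagger, related_taggers):
    """Hypothetical orchestration of the pipeline sketched in Figure 1."""
    elements = parse_document(raw_text)                      # 1. document parsing
    adr_mentions = adr_tagger.tag(elements)                  # 2-3. vectorise and tag ADR mentions
    related = {name: t.tag(elements, adr_mentions)           # related entities run after ADR tagging
               for name, t in related_taggers.items()}
    return adr_mentions, related

mentions, related = process_drug_label("Nausea was reported.\n\nUse with caution.",
                                       DummyTagger(), {"severity": DummyTagger()})
```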
Document parsing
The aim of this stage is to re-engineer the structure of the document so that the content of different element types can later be treated differently. For instance, it might be beneficial to analyse the content of a table cell individually rather than a whole chunk of text that contains multiple rows and cells. We identified four different element types in the document:
- Headings are numbered titles for sections and sub-sections (e.g. "5.1 Asthma-Related Death [See Boxed Warning]").
- Tables have heading rows and content rows, each of which contains cells; each row might have a different number of cells. In addition, a table may have a caption (which usually starts with "Table NUM.") and a footer that contains additional notes. We treated all text lines after the aforementioned caption trigger and before the paragraph separator (multiple empty lines) as potential table rows. Then, we categorised each row candidate as part of the caption, header, content or footer, based on the number of potential columns, numerical cells and words in each cell.
- Lists are groups of multiple bullet points or items. Consecutive text lines that start with an asterisk character (*) are considered list items; a list should have more than one item.
- Paragraphs are any other chunks of text separated by multiple new-line characters.
For some tagging models we applied two different document splitting strategies: (1) take the whole element (i.e. table, list, paragraph) and represent it as text, or (2) take the textual content of sub-elements (such as table cells and list items) and treat them as individual items. A sketch of the row-categorisation heuristic is given below.
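The following is a minimal sketch of the row-categorisation heuristic described above; the thresholds, the cell-splitting rule and the caption trigger pattern are illustrative assumptions, not the exact values used in the system.

```python
import re

def categorise_row(line):
    """Heuristically label a candidate table row as caption, header, content or footer."""
    cells = [c for c in re.split(r"\s{2,}|\t", line.strip()) if c]      # naive cell split
    numeric_cells = sum(bool(re.fullmatch(r"[\d.,%]+", c)) for c in cells)
    words = sum(len(c.split()) for c in cells)

    if re.match(r"(?i)^table\s+\d+", line):        # caption trigger, e.g. "Table 3. ..."
        return "caption"
    if len(cells) >= 2 and numeric_cells == 0 and words <= 12:
        return "header"                            # several short, non-numeric cells
    if len(cells) >= 2 and numeric_cells >= 1:
        return "content"                           # rows containing numeric cells
    return "footer"                                # anything else, e.g. free-text notes

print(categorise_row("Table 2. Incidence of adverse reactions"))   # caption
print(categorise_row("Reaction    Drug    Placebo"))               # header
print(categorise_row("Headache    12%     4%"))                    # content
```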
Tagging models
We utilised various types of tagging methods based on knowledge-driven rules, conditional random fields (CRF), bidirectional long short-term memory networks (BLSTM) and two different types of ensemble methods. We generated word vectors differently depending on the sequence labelling approach, using either hand-crafted features or word embeddings obtained from unsupervised learning models trained on large text corpora.
Rule-based models
Rule-based methods follow a knowledge-driven approach and rely on manually curated dictionaries. In particular, we applied them to the negation and animal classes, since there was not enough labelled data for these classes to be modelled by machine learning algorithms.
- To identify negations, we developed a rule-based tagger using a modification of DepND 7 , which uses the GENIA dependency parser [11] to recognise the scope of the negation, together with a dictionary of negation triggers. In particular, we added a list of phrases that should be ignored if they appear in a negation phrase or scope (such as "not available" or "could not be assessed"), and labelled negations only when an ADR mention is found inside the negation scope. We applied the negation tagger on the sub-element level (i.e. on table cells and list items).
- For the animal class, we made the assumption that animals are not mentioned in drug labels unless adverse events were reported on them during trials. Also, there is a closed set of animal species that are usually used in medical experiments [9]. We developed a dictionary-based tagger that labels all mentions of animals from our list (a minimal sketch is given after this list). The animal tagger was used on the sub-element level.
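The following is a minimal sketch of the dictionary-based animal tagger described above; the species list is a small illustrative subset invented for this sketch, not the full curated dictionary.

```python
import re

# Illustrative subset of the animal dictionary (the real list covers species commonly used in trials).
ANIMALS = {"rat", "rats", "mouse", "mice", "rabbit", "rabbits", "dog", "dogs", "monkey", "monkeys"}

def tag_animals(text):
    """Return (start, end, surface form) spans for every animal mention in a table cell or list item."""
    spans = []
    for match in re.finditer(r"[A-Za-z]+", text):
        if match.group().lower() in ANIMALS:
            spans.append((match.start(), match.end(), match.group()))
    return spans

print(tag_animals("Embryofetal toxicity was observed in rats and rabbits."))
# [(37, 41, 'rats'), (46, 53, 'rabbits')]
```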
CRF models
A linear-chain conditional random field (CRF) is a statistical model that encodes the conditional distribution p(y|x) between observations (input features) and output variables (labels). Prior to passing a text input into the model, each sequence item (i.e. word or token) is converted into a feature vector. In particular, we experimented with lexical features, part-of-speech tags, grammatical relations (dependencies), vocabulary and semantic features (such as corresponding semantic types and named entity tags from various medical systems). In order to capture the context for a given token, the mentioned features were also extracted from a certain number of surrounding tokens (context window). All CRF models were used on whole elements (i.e. tables, lists) represented as text.
- For ADR mentions, we extracted word lemmas, part-of-speech tags (retrieved using the GENIA tagger [14]), UMLS semantic types (obtained using QuickUMLS [12]) and a lexicon match (i.e. whether the current word exists in the ADR lexicon 8 ). We trained word2vec on lemmatised sentences extracted from the 2,208 unannotated drug labels that were provided as a part of this task. In particular, we extracted 200-dimensional feature vectors from a continuous bag-of-words model with a context window of size 5, trained with negative sampling using five noise words. Then we performed K-means clustering (n=50) of the word-vector space. For words that are found in the model, we used their corresponding cluster number; otherwise we used the lemma of the word as a feature. In order to capture the context we also extracted features from surrounding words (i.e. five preceding and five following words); a sketch of this feature extraction is given after this list.
- For severity, factor and drug class, we used a similar set of features with additional lexicon features. In particular, a lexicon for drug class was obtained from DrugBank 9 and the Anatomical Therapeutic Chemical (ATC) classification system 10 , whereas for the other aforementioned classes we experimented with lexicons obtained from the provided labelled data. We also added a binary feature that indicates whether an ADR is mentioned in the surrounding context.
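A minimal sketch of how word2vec clusters could be turned into token features for the CRF is given below. It assumes gensim (version 4 or later) and scikit-learn, uses a tiny toy corpus and only five clusters instead of 50, and includes only a reduced feature set; it is an illustration of the idea rather than the actual feature extractor.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Toy lemmatised sentences standing in for the 2,208 unannotated drug labels.
sentences = [["nausea", "be", "report", "in", "patient"],
             ["severe", "headache", "and", "nausea", "occur"]] * 50

# 200-dimensional CBOW embeddings with a window of 5 and negative sampling (5 noise words).
w2v = Word2Vec(sentences, vector_size=200, window=5, sg=0, negative=5, min_count=1)

# Cluster the word-vector space; the paper uses 50 clusters on the full vocabulary.
vocab = list(w2v.wv.key_to_index)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(w2v.wv[vocab])
cluster_of = dict(zip(vocab, kmeans.labels_))

def token_features(lemmas, i):
    """CRF feature dictionary for token i: cluster id (or lemma) plus a small context window."""
    feats = {"cluster_or_lemma": f"c{cluster_of[lemmas[i]]}" if lemmas[i] in cluster_of else lemmas[i]}
    if i > 0:
        feats["prev_lemma"] = lemmas[i - 1]
    if i < len(lemmas) - 1:
        feats["next_lemma"] = lemmas[i + 1]
    return feats

print(token_features(["severe", "headache", "occur"], 1))
```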
BLSTM models
Bidirectional Long Short-Term Memory networks (BLSTM) are a specific type of recurrent neural network designed to learn long-term dependencies. In order to increase the amount of input information, the given sequence is read in both directions (forward and backward). For this tagging model we obtained word vectors from multiple word2vec models trained on large text corpora from the generic and target domains. The generic 200-dimensional word embeddings were trained on a combination of PubMed and PMC texts with texts extracted from a recent English Wikipedia dump [10], whereas the target 200-dimensional word embeddings were trained on the 2,208 unannotated drug labels. The BLSTM model was trained using the RMSprop [13] algorithm with a learning rate of 1 × 10^-5. For regularisation, dropout with a rate of 0.1 was applied on each LSTM layer with 170 units. We trained the BLSTM model for 50 epochs and used early stopping with a patience of 10 epochs. Since this model does not rely on hand-crafted features, we used the same model configuration for both adverse reactions and related entities. For all entity types, we trained a single BLSTM model on the whole elements (i.e. tables, lists) represented as text.

Ensemble models
We have created two different ensemble models:
- A voting BLSTM and CRF ensemble trains both CRF and BLSTM classifiers in parallel and selects the best candidate based on the highest average predicted probability of each class obtained from the two classifiers.
- A stacked CRF-BLSTM ensemble is our proposed modification of Wolpert's stacked generalisation [17]: it first trains the CRF classifier, using the previously described features, and then utilises its predicted probabilities for each class to build additional token-level embeddings for the BLSTM classifier. In this way, the obtained word vector has the dimension of the number of target classes used in the CRF, and its values correspond to the predicted probabilities.
For the voting and stacked ensembles we have utilised an ADR-specific feature extractor and trained a single ensemble model on all classes.
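A minimal sketch of the stacking idea, i.e. appending the CRF's per-class probabilities to each token's embedding before it is fed to the BLSTM, is shown below; it uses plain NumPy with made-up dimensions and random placeholders rather than the actual models.

```python
import numpy as np

n_tokens, emb_dim, n_classes = 6, 200, 4   # toy dimensions

# Pre-trained word embeddings for one sentence and the CRF's predicted
# per-token class probabilities for the same sentence (random placeholders here).
embeddings = np.random.rand(n_tokens, emb_dim)
crf_probs = np.random.rand(n_tokens, n_classes)
crf_probs /= crf_probs.sum(axis=1, keepdims=True)   # make each row a probability distribution

# Stacked input for the BLSTM: each token vector is extended with the CRF probabilities.
blstm_input = np.concatenate([embeddings, crf_probs], axis=1)
print(blstm_input.shape)   # (6, 204)
```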
Evaluation of the tagging models on the training data
The provided labelled data contained 101 documents. We evaluated the supervised machine learning models using holdout cross-validation; therefore the dataset was split into training (56 documents), validation (24 documents) and testing (21 documents) sets. The rule-based models were evaluated on the whole dataset. The evaluation results for all developed taggers are presented in Table 2.
As can be seen from Table 2, we calculated precision, recall and F1-score for labelling tokens in the document. Sequential token labels are later post-processed and merged into mentions.
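A minimal sketch of the post-processing step that merges consecutive token labels into mention spans is shown below; it assumes BIO-style labels, which is one common choice rather than necessarily the exact scheme used in the system.

```python
def merge_mentions(tokens, tags):
    """Merge BIO token tags into (entity_type, text) mentions."""
    mentions, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                mentions.append(current)
            current = (tag[2:], [token])            # start a new mention
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)                # continue the current mention
        else:
            if current:
                mentions.append(current)
            current = None                          # outside any mention
    if current:
        mentions.append(current)
    return [(etype, " ".join(words)) for etype, words in mentions]

print(merge_mentions(["severe", "skin", "rash", "occurred"],
                     ["B-Severity", "B-ADR", "I-ADR", "O"]))
# [('Severity', 'severe'), ('ADR', 'skin rash')]
```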
Both ensemble models usually outperformed the individual models, especially in cases where there were enough training and testing samples. The stacked and voting ensembles performed relatively similarly, although the stacked ensemble was slightly better in general. The F1-score for labelling adverse drug reactions ranges between 85% and 87%, with the maximum score for the ensemble and BLSTM taggers. The BLSTM tagger performed better on the severity and factor classes. Drug class gave the best results on the test set with the CRF tagger; however, these results were quite unstable: while the CRF performed on the test set with an F1-score of 38%, on the validation set the F1-score was only 22%. The rule-based approach gave the best results for the rare classes, such as negation and animal.
Runs and system evaluation
Using the evaluation results presented in the previous section, we have combined the best-performing taggers and created two systems, which correspond to the two runs submitted for the final shared task evaluation (on test data):
- Run #1: We applied the rule-based approaches for the Negation and Animal classes. For Adverse Drug Reactions we utilised the CRF model with the hand-crafted features. For all other entity types (i.e. Severity, Factor and Drug class) we used the BLSTM tagger; the three related entities used one BLSTM model.
- Run #2: The rule-based tagger was applied only for the Negation class, whereas all other classes were handled with the Stacked CRF+BLSTM ensemble model.
Results
The systems were trained on the whole annotated dataset provided (101 documents) and applied to the unannotated dataset for automatic tagging (2,208 documents). Then, a sample of the automatically tagged documents was used for the evaluation. The primary metric for this evaluation was the micro-averaged F1-score. We present the system evaluation results in Table 3.

Table 3: Performance of the submitted systems on the test data, considering and not considering the types of annotated entities. The primary metric used for the evaluation is marked in bold.
Discussions
The submitted systems had similar performance, with Run 1 having slightly better performance on the test data (by less than 0.5%). The achieved results are similar to the results obtained on the training data using 3-fold cross-validation (F1-score of 77.26 for the Run 1, and 76.61 for the Run 2).
The classes were not balanced. Some classes, such as adverse drug reactions, had a fair number of labelled entities in the training set, and therefore machine learning models could be trained efficiently on this class. However, other classes were relatively small compared to the ADR class. These other classes were also related to the ADR class and were only triggered if the ADR class was labelled in their vicinity. The context of the labels had significant importance in this task, as the same phrase may or may not be labelled depending on whether it is in the vicinity of an ADR and whether it describes an ADR more closely. For example, the word "serious" will be labelled as severity in the context of "serious headache"; however, it will not be labelled in other contexts, such as "serious consideration".
On the other hand, some classes, such as animal and negation, had only a small number of annotations in the training dataset. Therefore, it was not feasible to train a machine learning model and it was necessary to develop a rule-based approach. The rules for the negation class considered the context and whether an ADR is present within the scope of the negation. Mentions of animals unrelated to an ADR, on the other hand, were rare; therefore, it was safe to assume that all animal mentions are related to adverse drug reactions.
Conclusion
In this paper we presented a number of different methodologies for labelling adverse drug reactions and related factors, severity, drug class, negation and whether they were reported on animals. We presented two systems made out of the best performing taggers that were submitted to the ADR track shared task of the Text Analytics Conference (TAC2017). The systems performed with F1-scores of 76% and 75.58% on the testing data.
There is still room for improving the system and performing additional experiments. More informative text features could help improve the CRF-based taggers, while more representative word embeddings could be helpful for the BLSTM-based taggers. This can be achieved using additional vocabularies, semantic resources and knowledge bases.
Another potential way to improve the performance of the tagging is to investigate alternative ensemble methods, e.g. utilising an additional meta-classifier to combine the CRF and BLSTM results. In addition, the performance of the BLSTM model directly depends on the word embeddings that were used; therefore, alternative word representation models might be utilised in addition to word2vec (e.g. multi-level word representations or knowledge-infused word embeddings).
However, there is still the challenge of labelling classes that have a low number of examples. In these cases, it is difficult to create well-performing machine learning models because of the lack of examples. Our rule-based approaches can be further improved with additional samples and by looking at additional data, and machine learning performance can probably be improved by using additional annotated data and external datasets.
Fig. 1. Document processing and tagging pipeline.
Table 1: The number of annotated mentions (groups of tokens), number of tokens, and the average number of tokens per mention in the provided training data.

Entity class | #mentions | #tokens | Avg. tk/mention
Adverse drug reaction | 12,792 | 21,258 | 1.66
Severity | 863 | 1,306 | 1.51
Factor | 602 | 653 | 1.08
Drug class | 248 | 518 | 2.09
Negation | 95 | 109 | 1.47
Animal | 44 | 44 | 1.00
Table 2: Token-level evaluation of the taggers by entity class and method, on the provided 101 labelled documents using holdout cross-validation.

Entity class | Method | Precision | Recall | F1-score
ADR | CRF | 90 | 82 | 86
ADR | BLSTM | 86 | 84 | 85
ADR | Voting BLSTM+CRF | 91 | 84 | 87
ADR | Stacked CRF+BLSTM | 90 | 85 | 87
Severity | CRF | 67 | 51 | 58
Severity | BLSTM | 55 | 75 | 64
Severity | Voting BLSTM+CRF | 70 | 65 | 67
Severity | Stacked CRF+BLSTM | 58 | 71 | 64
Factor | CRF | 52 | 20 | 29
Factor | BLSTM | 73 | 46 | 56
Factor | Voting BLSTM+CRF | 87 | 36 | 51
Factor | Stacked CRF+BLSTM | 82 | 41 | 55
Drug class | CRF | 41 | 35 | 38
Drug class | BLSTM | 57 | 21 | 31
Drug class | Voting BLSTM+CRF | 62 | 12 | 20
Drug class | Stacked CRF+BLSTM | 57 | 24 | 34
Negation | CRF | 25 | 18 | 21
Negation | BLSTM | 22 | 12 | 15
Negation | Voting BLSTM+CRF | 50 | 06 | 11
Negation | Stacked CRF+BLSTM | 57 | 24 | 33
Negation | Rule-based | 66 | 66 | 66
Animal | CRF | 76 | 100 | 87
Animal | BLSTM | 100 | 46 | 63
Animal | Voting BLSTM+CRF | 100 | 38 | 56
Animal | Stacked CRF+BLSTM | 40 | 31 | 35
Animal | Rule-based | 86 | 100 | 93
5 https://dailymed.nlm.nih.gov/dailymed/index.cfm
6 https://bionlp.nlm.nih.gov/tac2017adversereactions/ (Text Analytics Conference, Adverse Drug Reactions Track, 2017)
7 https://github.com/zachguo/DepND
8 http://diego.asu.edu/downloads/publications/ADRMine/ADR_lexicon.tsv
9 https://www.drugbank.ca/
10 http://www.atccode.com/
1. Boyce, R., Gardner, G., Harkema, H.: Using natural language processing to extract drug-drug interaction information from package inserts. In: BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing, pp. 206-213
2. Dernoncourt, F., Lee, J.Y., Uzuner, O., Szolovits, P.: De-identification of patient notes with recurrent neural networks. Journal of the American Medical Informatics Association 24(3), 596-606 (2017)
3. Duke, J., Friedlin, J., Li, X.: Consistency in the safety labeling of bioequivalent medications. Pharmacoepidemiology and Drug Safety 22(3), 294-301 (2013)
4. Fung, K.W., Jao, C.S., Demner-Fushman, D.: Extracting drug indication information from structured product labels using natural language processing. Journal of the American Medical Informatics Association 20(3), 482-488 (2013)
5. Khare, R., Li, J., Lu, Z.: LabeledIn: Cataloging labeled indications for human drugs. Journal of Biomedical Informatics 52, 448-456 (2014)
6. Kuhn, M., Letunic, I., Jensen, L.J., Bork, P.: The SIDER database of drugs and side effects. Nucleic Acids Research, gkv1075 (2015)
7. Lazarou, J., Pomeranz, B.H., Corey, P.N.: Incidence of adverse drug reactions in hospitalized patients: a meta-analysis of prospective studies. JAMA 279(15), 1200-1205 (1998)
8. Lee, A.: Adverse Drug Reactions. Pharmaceutical Press (2006)
9. Mukerjee, M.: Trends in animal research. Scientific American 276(2), 86-93 (1997)
10. Pyysalo, S., Ginter, F., Moen, H., Salakoski, T., Ananiadou, S.: Distributional semantics resources for biomedical text processing (2013), http://bio.nlplab.org/
11. Sagae, K., Tsujii, J.: Dependency parsing and domain adaptation with LR models and parser ensembles. In: EMNLP-CoNLL 2007, pp. 1044-1050, Prague, Czech Republic (2007)
12. Soldaini, L., Goharian, N.: QuickUMLS: a fast, unsupervised approach for medical concept extraction. In: MedIR Workshop, SIGIR (2016)
13. Tieleman, T., Hinton, G.: Lecture 6.5 - RMSprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4(2), 26-31 (2012)
14. Tsuruoka, Y., Tateishi, Y., Kim, J.D., Ohta, T., McNaught, J., Ananiadou, S., Tsujii, J.: Developing a robust part-of-speech tagger for biomedical text. In: Panhellenic Conference on Informatics, pp. 382-392. Springer (2005)
15. US Food and Drug Administration: CFR - Code of Federal Regulations Title 21. Current good manufacturing practice for finished pharmaceuticals, Part 211 (2014)
16. Wei, Q., Chen, T., Xu, R., He, Y., Gui, L.: Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks. Database 2016 (2016)
17. Wolpert, D.H.: Stacked generalization. Neural Networks 5(2), 241-259 (1992)
18. Xu, R., Wang, Q.: Large-scale automatic extraction of side effects associated with targeted anticancer drugs from full-text oncological articles. Journal of Biomedical Informatics 55, 64-72 (2015)
| [
"https://github.com/zachguo/DepND"
] |
[
"An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical Information An algorithm for Aligning Sentences in Bilingual Corpora Using Lexical information",
"An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical Information An algorithm for Aligning Sentences in Bilingual Corpora Using Lexical information"
] | [
"Akshar Bharati \nInternational Institute of Information Technology\nHyderabad\n",
"V Sriram \nInternational Institute of Information Technology\nHyderabad\n",
"A Vamshi Krishna \nInternational Institute of Information Technology\nHyderabad\n",
"Rajeev Sangal \nInternational Institute of Information Technology\nHyderabad\n",
"Sushma Bendre bendre@iiit.net \nInternational Institute of Information Technology\nHyderabad\n"
] | [
"International Institute of Information Technology\nHyderabad",
"International Institute of Information Technology\nHyderabad",
"International Institute of Information Technology\nHyderabad",
"International Institute of Information Technology\nHyderabad",
"International Institute of Information Technology\nHyderabad"
] | [] | In this paper we describe an algorithm for aligning sentences with their translations in a bilingual corpus using lexical information of the languages. Existing efficient algorithms ignore word identities and consider only the sentence lengths(Brown, 1991;Gale and Church, 1993). For a sentence in the source language text, the proposed algorithm picks the most likely translation from the target language text using lexical information and certain heuristics. It does not do statistical analysis using sentence lengths. The algorithm is language independent. It also aids in detecting addition and deletion of text in translations. The algorithm gives comparable results with the existing algorithms in most of the cases while it does better in cases where statistical algorithms do not give good results. | null | [
"https://arxiv.org/pdf/cs/0302014v1.pdf"
] | 15,566,050 | cs/0302014 | 97ef63515351895f056b0569dd58d25b3d52774d |
An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical Information
Akshar Bharati
International Institute of Information Technology
Hyderabad
V Sriram
International Institute of Information Technology
Hyderabad
A Vamshi Krishna
International Institute of Information Technology
Hyderabad
Rajeev Sangal
International Institute of Information Technology
Hyderabad
Sushma Bendre bendre@iiit.net
International Institute of Information Technology
Hyderabad
An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical Information
In this paper we describe an algorithm for aligning sentences with their translations in a bilingual corpus using lexical information of the languages. Existing efficient algorithms ignore word identities and consider only the sentence lengths(Brown, 1991;Gale and Church, 1993). For a sentence in the source language text, the proposed algorithm picks the most likely translation from the target language text using lexical information and certain heuristics. It does not do statistical analysis using sentence lengths. The algorithm is language independent. It also aids in detecting addition and deletion of text in translations. The algorithm gives comparable results with the existing algorithms in most of the cases while it does better in cases where statistical algorithms do not give good results.
Introduction
Aligned bilingual corpora have proved useful in many ways, including machine translation, sense disambiguation and bilingual lexicography. The task of alignment has proved to be difficult in many ways. For some languages, it is difficult to use statistical analysis of sentence lengths to do the alignment. Further, there are substantial additions and deletions that can occur on either side, particularly when the languages are far apart. Many sentences align many-to-many, and this makes the task more difficult.
There are a few existing algorithms which do good alignment. One such algorithm is the Gale and Church algorithm (1993). The Gale and Church algorithm depends on the length of the sentence in terms of characters, and Brown's algorithm (1991) depends on the length of the sentence in terms of words. Dynamic programming is then used to search for the best alignment in both algorithms. These algorithms cannot be used effectively to align a very large corpus taken as a single unit. They therefore depend heavily on paragraph delimiters, which are called 'hard delimiters'. These paragraph delimiters also help the Gale and Church algorithm to correct itself if it is going wrong. Such paragraph markers may not always be present in the corpus, and the source text paragraphs and the target text paragraphs may not align with each other. For example, the parallel corpus that we used had no paragraph delimiters. To use the Gale and Church algorithm on parallel corpora with no paragraph delimiters, the delimiters have to be introduced manually.
While both the Gale and Church algorithm and Brown's algorithm have achieved remarkably good results for language pairs like English-French and English-German, with error rates of 4% on average, there is still considerable scope for improvement. These algorithms are not robust with respect to non-literal translations and deletions. Moreover, with an algorithm that relies only on sentence length, it is quite difficult to recover automatically from large deletions.
The Gale and Church algorithm did not work well on the parallel corpus that we have, and the need to use lexical information to do the alignment was felt. Alignment algorithms that use lexical information also offer the potential for high accuracy on any corpus. Chen (1993) did a considerable amount of work on an English-French corpus using lexical information.
The algorithm that is proposed in this paper does the sentence alignment at a high level of accuracy using lexical information of the corpus and available lexical resources of the source and target languages. One such resource is the bilingual lexicon; the proposed algorithm uses a medium-coverage dictionary (a 25,000-word lexicon) to do the alignment. The other resources used include chunkers for both languages. The algorithm first breaks the sentences of both languages into small units called chunks. To find an alignment for a sentence in the source language, it is matched with a set of possible sentences in the target language and a score is assigned for each comparison. The score of match of two sentences is calculated by finding the number of chunks that match between the two sentences. The algorithm then carries out the alignment of sentences using these scores. The precision of the alignment is 94.3%.
Background
Parallel Bilingual Corpus:
In this section, we describe the data that we used to test the algorithm. The data comes from a weekly news magazine, "India-Today". The magazine is released in two languages. The source language is English and the target language into which it is later translated is Hindi. English is a fixed word order language while Hindi is a free word order language. Free word order languages are those where the order of words can be changed without losing the meaning. Hence, the sentence lengths of the two languages are not proportional, which makes it difficult to use statistical analysis to do the alignment. Also, there are substantial additions and deletions of text in either of the issues of the magazine.
Framework:
A text in a source language and the corresponding text in a target language are given to the alignment system. First, all the source sentences and the target sentences are chunked into smaller units based on the language specific chunkers. Our aim is to identify an appropriate translation for a particular sentence in the source language text among the sentences in the target language text. To do this, the source sentence is first compared with a set of probable sentences that could be the translation of the source sentence. A score of comparison is assigned for every such matching.
The score of comparison between a source sentence and a target sentence is determined by comparing the chunks of both the sentences using a English-Hindi lexicon. To assign the scores, we identify the number of chunks of the source that are actual translations in the target language. Based on number of matching chunks out of available chunks, various scoring functions can be used.
Figure 1 gives an overview of the sentence alignment system.
Chunking
In this section, we explain Chunking and how it is useful to do the alignment. Chunking involves finding the groups of related words in a sentence. The chunks are used to refer to a single concept. A sentence can thus be looked at as a sequence of chunks, each chunk adding information to the sentence. Hence, the chunks can be seen as the building blocks of a sentence in conceptual terms. They are often used to do the analysis of the sentences.
Also, the lexicons may not be exhaustive for all languages. By chunking sentences, the headwords can be identified and used in matching, whereas the other words can be given a relatively lower weight; therefore, even if a dictionary provides few word matches, it is not a considerable drawback. If it were just a word-to-word match where every word has equal weight, then any word that is not supported by the lexicon of that language would mislead the alignment.
The chunks can be categorized as noun chunks and verb chunks.
Noun chunks:
A non-recursive noun phrase is a noun chunk. It typically consists of a determiner and optional adjectives followed by a noun. Prepositions preceding the noun phrase are also grouped with the noun phrase. Noun chunks are usually the same as noun phrases.
For example, 1. 'The red party': here, "The" = determiner, "red" = adjective and "party" = noun. 2. 'of the cast iron pump': here, "of" = preposition, "the" = determiner, "cast" and "iron" = adjectives and "pump" = noun.
Verb chunks:
A verb chunk consists of a group of a main verb, supporting or auxiliary verbs and adverbs. The auxiliary verbs in a sentence indicate the tense, aspect and modality of the sentence. The main verb carries the lexical information in a verb group. Verb chunks are usually the same as verb phrases.
For example: 1. 'is playing':: Here, "playing" = main verb, "is" = supporting verb. 2. 'would have been going fast':: Here, "would have been" = auxiliary verbs, "going" = main verb and "fast" = adverb.
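To make the notion of chunking concrete, here is a minimal, illustrative sketch (not the chunkers used in this work) that groups part-of-speech-tagged words into noun and verb chunks; the simplified tagset and the example tagging are assumptions made purely for illustration.

```python
# Minimal noun/verb chunk grouper over (word, POS) pairs.
# The tagset and grouping rules are simplified assumptions, not the paper's chunker.

NOUN_CHUNK_TAGS = {"DET", "ADJ", "NOUN", "ADP"}   # determiners, adjectives, nouns, pre/postpositions
VERB_CHUNK_TAGS = {"AUX", "VERB", "ADV"}          # auxiliaries, main verbs, adverbs

def chunk(tagged_sentence):
    """Group a POS-tagged sentence into ('N', words) and ('V', words) chunks."""
    chunks, current, kind = [], [], None
    for word, tag in tagged_sentence:
        k = "N" if tag in NOUN_CHUNK_TAGS else "V" if tag in VERB_CHUNK_TAGS else None
        if k != kind and current:
            chunks.append((kind, current))
            current = []
        kind = k
        if k is not None:
            current.append(word)
    if current:
        chunks.append((kind, current))
    return chunks

if __name__ == "__main__":
    sent = [("The", "DET"), ("gigantic", "ADJ"), ("fish", "NOUN"),
            ("has", "AUX"), ("been", "AUX"), ("sought", "VERB"),
            ("in", "ADP"), ("Gujarat", "NOUN")]
    print(chunk(sent))
    # [('N', ['The', 'gigantic', 'fish']), ('V', ['has', 'been', 'sought']), ('N', ['in', 'Gujarat'])]
```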
Sentences can be chunked as shown below.
Chunk Matching
Two chunks are matched by first matching their headwords and then matching the support words of a chunk. The chunk matching thus is done by looking inside a chunk, that is, the words constituting the chunks. In a Noun chunk, all the words except the prepositions and postpositions are used to do the chunk matching whereas in a Verb chunk, only the headword is used to do the chunk matching.
Two words of a chunk are matched in any of the following ways: 1. Bilingual dictionary lookup: to match two words, we look up the meanings of the source-language word in the dictionary and check whether the target word is one among them.
Alignment Algorithm
The alignment is done after the scores of comparisons are assigned which are obtained by matching the chunks. Different scoring functions can be used to calculate the score of match. The following gave better results than the rest as it gives a more stable match between the two.
Score_of_match(S, T) = (Number of matching chunks) / Maximum(number of chunks in S, number of chunks in T)
where S is the source sentence and T is the target sentence.
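A minimal sketch of this scoring function is given below. The chunk-matching test is stubbed out with a toy bilingual lexicon and a simple any-word-translates rule, which are assumptions for illustration rather than the resources actually used.

```python
# Illustrative sketch of Score_of_match(S, T): matching chunks divided by the larger
# of the two chunk counts. The tiny lexicon and the matching rule are toy assumptions.

TOY_LEXICON = {"fish": {"maCila"}, "oil": {"tel"}, "gujarat": {"gujarat"}}

def chunks_match(src_chunk, tgt_chunk, lexicon=TOY_LEXICON):
    """Two chunks match if any source word translates to any target word."""
    return any(t in lexicon.get(s.lower(), set())
               for s in src_chunk for t in tgt_chunk)

def score_of_match(src_chunks, tgt_chunks):
    if not src_chunks or not tgt_chunks:
        return 0.0
    used, matches = set(), 0
    for sc in src_chunks:
        for j, tc in enumerate(tgt_chunks):
            if j not in used and chunks_match(sc, tc):
                used.add(j)
                matches += 1
                break
    return matches / max(len(src_chunks), len(tgt_chunks))

src = [["The", "gigantic", "fish"], ["has", "been", "sought"], ["in", "Gujarat"]]
tgt = [["maCila"], ["gujarat", "meM"]]
print(score_of_match(src, tgt))   # 2 matching chunks out of max(3, 2) -> about 0.667
```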
The alignment algorithm takes the scores of comparison in decreasing order and considers each pair for alignment. Hence, the scores of comparison are first sorted. Only those pairs of sentences whose scores of comparison are above a particular threshold are considered. The threshold can be any value greater than 0. The threshold affects the precision and recall of the alignment: as the threshold increases, the precision increases while the recall decreases. The process of checking whether a pair of sentences forms an alignment or not is governed by a few heuristics.
Note that the above algorithm gives a one-one mapping of sentences.
The heuristics that are used: 1. The aligner does not match two sentences to one sentence or vice versa; i.e., a cannot align with b if a already aligns with c. This heuristic is applied because the number of matching chunks is usually very small and would give rise to errors in the absence of this heuristic.
2. The aligned pairs of sentences follow the rule of linearity; i.e., if a aligns with b and c aligns with d, then sign(a − c) = sign(b − d). This also means that the aligner does not allow any cross-linking.
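The greedy pass described above, together with the two heuristics, can be sketched as follows; the default threshold value is an assumption for illustration.

```python
# Greedy one-one, non-crossing alignment from precomputed comparison scores.
# `scores` maps (source_index, target_index) -> score_of_match.

def align(scores, threshold=0.0):
    aligned = []                       # accepted (i, j) pairs
    used_src, used_tgt = set(), set()
    for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if s <= threshold:
            break
        if i in used_src or j in used_tgt:               # heuristic 1: one-one mapping
            continue
        if any((i - i2) * (j - j2) < 0 for i2, j2 in aligned):
            continue                                     # heuristic 2: no cross-linking
        aligned.append((i, j))
        used_src.add(i)
        used_tgt.add(j)
    return sorted(aligned)

if __name__ == "__main__":
    # The example discussed later in this paper: 4 source sentences, 2 target sentences.
    scores = {(1, 1): 5, (2, 1): 15, (2, 2): 20, (3, 1): 30, (3, 2): 7, (4, 2): 10}
    print(align(scores))   # [(3, 1), (4, 2)]
```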
Evaluation and comparison with the existing algorithms
The evaluation was done on the "India-Today" corpus. About 140 texts from different issues of the magazine were taken. They included texts from diverse areas like politics, sports, business etc. The number of sentences extracted by the sentence aligner from the 140 texts is 3021, each text having an average of about 21 sentences. The sentences extracted by the system were evaluated and it was found that out of the 3021 sentences extracted, 2849 sentences were aligned correctly. The precision of the system is 94.3%. The Gale and Church algorithm was run on the same set of 140 texts and was evaluated against the 2849 sentences that were correctly extracted by the proposed algorithm. It was found that the Gale and Church algorithm could identify 1767 alignments correctly out of the 2849 correct alignments. As the Gale and Church algorithm is designed to accommodate 1-2 and 2-1 alignments, any partial alignment was also considered a correct alignment. The resulting precision is 62%.
The performance of the proposed algorithm can be visualized in Figure 2 by plotting the error percentages on the horizontal axis and the number of texts with the corresponding error percentage on the vertical axis. Figure 2 gives the truncated version of the graph up to an error percentage of 10.
Figure 2. Error percentage versus number of texts
It can be seen from Figure 2 that out of the 140 texts used for evaluation, 55 texts gave 100% precision.
We now compare the proposed algorithm with the Gale and Church algorithm. In Figure 3, we plot the text number on the horizontal axis and the precision on that text on the vertical axis. The texts were taken in increasing order of performance of the Gale and Church algorithm. The Gale and Church algorithm gave low precision when run on texts that have substantial deletions. Out of 140 texts, the Gale and Church algorithm gave a precision of 100% on 22 texts as opposed to 55 texts for the proposed algorithm.
Figure 3. Comparison of the precision of the Gale and Church algorithm and the proposed algorithm.
From Figure 3, it can be seen that the Gale and Church algorithm gave 100% precision for the texts numbered from 119 to 140. Among these 22 texts, the proposed algorithm gave 100% precision for 10 texts while it gave a lower accuracy for 12 texts because of the inadequacy of the lexicon. Also, it can be seen that the Gale and Church algorithm gave a 0% precision for texts numbered from 0 to 4. On investigation, it was found that the reason for the incorrect alignment of these texts by the Gale and Church algorithm was large deletions at the beginning of these texts.
A text, which contained deletions, was aligned using the proposed algorithm and the same text was aligned using Gale and Church algorithm and the results are shown in Figure 4. For the text that was considered, the proposed alignment algorithm gave a precision of 100% while it gave a precision of only 57% when aligned using Gale and Church algorithm. The low precision of the alignment done by Gale and Church algorithm clearly depicts that it fails to align the texts that have deletions. From Figure 4, we can see that the algorithm could detect a deletion of text from sentences 16 to 19, whereas the Gale and Church algorithm has failed to mark such a deletion.
The drawback of the proposed algorithm when compared to the existing algorithms is that a sentence in a source language text may sometimes not match with any of the sentences in the target language text due to the low coverage of the dictionary. This affects the recall in certain cases.
A text for which the proposed algorithm gave a lower accuracy than the Gale and Church algorithm is taken, and the results of the alignment are plotted in Figure 5. The Gale and Church algorithm gave a precision of 97% while the proposed algorithm gave a precision of 87%.
Conclusion and Future Work
It is to be noted that the algorithm is language independent; given chunkers for any pair of source and target languages along with a bilingual lexicon, it can be guided to give a reasonably correct alignment of the sentences. We have done the sentence alignment as part of our research on Example-Based Machine Translation. Evaluation of many-many alignment on the same corpus will be done in the future. The next stage in an Example-Based Machine Translation system is chunk alignment, where we would like to improve the chunk matching using heuristics and linguistic input. The aligned chunks can also be used to build a phrasal dictionary. Also, the aligned chunks can be used to give feedback to the lexicon used.
Figure 1. Summarizing the framework of the algorithm. As depicted in Figure 1, there are three stages in the proposed alignment algorithm, namely Chunking, Scoring and Alignment.
Figure 4. Comparison of the proposed algorithm (Figure 4.a) with the Gale and Church algorithm (Figure 4.b) for a text where the proposed algorithm does better.
Figure 5. Comparison of the proposed algorithm (Figure 5.a) with the Gale and Church algorithm (Figure 5.b) for a text where the proposed algorithm does worse.
English: [The gigantic migratory fish] ((has been sought out)) [in Gujarat] [since ancient times] [for its liver oil].
Hindi: [[sa ivaSaalakaya PavaasaI maCila ko] [Ÿgar ko %ao la ko ilae] [gau jara%a mao M] [PaàcaIna kala sao hI] (([sakI kafI maaM ga rhI hO)).
Noun chunks are enclosed in [] while the verb chunks are enclosed in (()).
2. Target-target dictionary lookup: non-availability of a match between a source word and a target word in the bilingual dictionary results in a further lookup in a target-target dictionary, if available.
3. Numeric matching: a number in a source sentence that shows a correspondence in the target sentence results in a reliable match.
4. Phonetic matching: if the words are proper nouns, the words are matched using a phonetic matcher.
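A compact sketch of this word-matching cascade is given below; the toy dictionaries and the crude phonetic key are illustrative stand-ins, not the actual lexicon or phonetic matcher used in the paper.

```python
# Illustrative word-matching cascade: bilingual lookup, target-target synonym lookup,
# numeric match, and a rough phonetic match for proper nouns. All resources are toys.
import re

BILINGUAL = {"oil": {"tel"}, "fish": {"machhli"}}
TARGET_SYNONYMS = {"tel": {"tail"}}          # target-target dictionary (variants/synonyms)

def phonetic_key(word):
    """Very rough phonetic key: uppercase, drop vowels after the first letter."""
    w = word.upper()
    return w[:1] + re.sub(r"[AEIOU]", "", w[1:])

def words_match(src, tgt, is_proper_noun=False):
    translations = BILINGUAL.get(src.lower(), set())
    if tgt.lower() in translations:                          # 1. bilingual dictionary
        return True
    if any(tgt.lower() in TARGET_SYNONYMS.get(t, set()) for t in translations):
        return True                                          # 2. target-target dictionary
    if src.isdigit() and tgt.isdigit() and src == tgt:       # 3. numeric matching
        return True
    if is_proper_noun and phonetic_key(src) == phonetic_key(tgt):
        return True                                          # 4. phonetic matching
    return False

print(words_match("oil", "tel"))                 # True  (bilingual lookup)
print(words_match("1991", "1991"))               # True  (numeric match)
print(words_match("Gujarat", "Gujarat", True))   # True  (phonetic match)
```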
Example: take a source text having four sentences (s1, s2, s3, s4) and a target text having two sentences (t1, t2). Let the scores of comparison be:
1. Score (s1, t1) = 5
2. Score (s2, t1) = 15
3. Score (s2, t2) = 20
4. Score (s3, t1) = 30
5. Score (s3, t2) = 7
6. Score (s4, t2) = 10
The scores are then considered in the sorted order.
1. s3 - t1 => The pair is aligned.
2. s2 - t2 => This alignment gives rise to cross-mapping, hence it is not aligned (violation of heuristic 2).
3. s2 - t1 => The alignment violates heuristic 1, which says that there should be only a one-one mapping. Hence, this alignment is also rejected.
4. s4 - t2 => This pair is aligned.
This example shows the working of the algorithm. The aligned pairs that are formed are (s3, t1) and (s4, t2).
Many-many alignment: the alignment produced above can be extended to include one-many mapping. The sentences that did not get aligned may be part of a one-many mapping. To be part of a one-many mapping, the sentences should be adjacent to the sentences that have already been matched. This alignment can be carried out using the following two procedures.
1. Considering the number of chunks: the number of chunks in most language pairs is proportional. Hence, this property can be used to attach the unaligned sentences to the already existing alignment. For example, if the number of chunks in a source sentence is 15 and the number of chunks in the aligned target sentence is 5, the target sentence is not a complete translation of the source sentence and some other sentence in the target language would complete the translation. Hence, the sentences adjacent to the one considered are verified, and the sentence that adds up more closely with the target sentence to equal the number of chunks of the source sentence is also considered to align with the source sentence.
2. Re-using the scores of match: the alignment algorithm is re-run on the sentences without disturbing the existing alignment, to facilitate many-many alignments. In this pass, we ignore the heuristic that an already aligned sentence should not be aligned with another sentence, and then align the non-aligned sentences using the scores of match.
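The first of these procedures (attaching adjacent unaligned target sentences until the chunk counts roughly balance) can be sketched as follows; the tolerance parameter and the example counts are assumptions for illustration.

```python
# Sketch of the chunk-count based one-many extension: if a source sentence has many
# more chunks than its aligned target sentence, absorb adjacent unaligned target
# sentences until the chunk counts roughly balance. `tolerance` is illustrative.

def extend_one_many(aligned, src_chunk_counts, tgt_chunk_counts, tolerance=2):
    """aligned: list of (i, j) one-one pairs; returns {i: [j, j+1, ...]}."""
    used_tgt = {j for _, j in aligned}
    result = {i: [j] for i, j in aligned}
    for i, j in aligned:
        need = src_chunk_counts[i] - tgt_chunk_counts[j]
        for cand in (j - 1, j + 1):                      # adjacent target sentences only
            if need <= tolerance:
                break
            if cand in tgt_chunk_counts and cand not in used_tgt:
                result[i].append(cand)
                used_tgt.add(cand)
                need -= tgt_chunk_counts[cand]
    return {i: sorted(js) for i, js in result.items()}

# Example: source sentence 2 has 15 chunks but its target has only 5,
# so the adjacent, unaligned target sentence 3 (6 chunks) is attached to it.
aligned = [(1, 1), (2, 2)]
print(extend_one_many(aligned, {1: 4, 2: 15}, {1: 4, 2: 5, 3: 6}))
# {1: [1], 2: [2, 3]}
```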
Brown, P., Lai, J. and Mercer, R. (1991). Aligning Sentences in Parallel Corpora. In: Annual Meeting of the Association for Computational Linguistics.
Gale, W. and Church, K. (1993). A Program for Aligning Sentences in Bilingual Corpora. Computational Linguistics; also presented at ACL-91.
Chen, S. (1993). Aligning Sentences in Bilingual Corpora Using Lexical Information. ACL-93.
Church, K., Gale, W. and Dagan, I. (1993). Aligning Parallel Texts: Do Methods Developed for English-French Generalize to Asian Languages? Rocling.
Melamed, I. Dan (1996). Automatic Detection of Omissions in Translations. IRCS.
| [] |
[
"ASSESSING THE TOLERANCE OF NEURAL MACHINE TRANSLATION SYSTEMS AGAINST SPEECH RECOGNITION ERRORS",
"ASSESSING THE TOLERANCE OF NEURAL MACHINE TRANSLATION SYSTEMS AGAINST SPEECH RECOGNITION ERRORS"
] | [
"Nicholas Ruiz nruiz@interactions.com \nFondazione Bruno Kessler\nItaly\n\nInteractions, LLC\nUSA\n",
"Mattia Antonino ",
"Di Gangi \nFondazione Bruno Kessler\nItaly\n",
"Nicola Bertoldi \nFondazione Bruno Kessler\nItaly\n",
"Marcello Federico federico@fbk.eu \nFondazione Bruno Kessler\nItaly\n"
] | [
"Fondazione Bruno Kessler\nItaly",
"Interactions, LLC\nUSA",
"Fondazione Bruno Kessler\nItaly",
"Fondazione Bruno Kessler\nItaly",
"Fondazione Bruno Kessler\nItaly"
] | [] | Machine translation systems are conventionally trained on textual resources that do not model phenomena that occur in spoken language. While the evaluation of neural machine translation systems on textual inputs is actively researched in the literature, little has been discovered about the complexities of translating spoken language data with neural models. We introduce and motivate interesting problems one faces when considering the translation of automatic speech recognition (ASR) outputs on neural machine translation (NMT) systems. We test the robustness of sentence encoding approaches for NMT encoderdecoder modeling, focusing on word-based over byte-pair encoding. We compare the translation of utterances containing ASR errors in state-of-the-art NMT encoder-decoder systems against a strong phrase-based machine translation baseline in order to better understand which phenomena present in ASR outputs are better represented under the NMT framework than approaches that represent translation as a linear model. | 10.21437/interspeech.2017-1690 | [
"https://arxiv.org/pdf/1904.10997v1.pdf"
] | 33,503,237 | 1904.10997 | 06b6ae64986e8a9511a424928dd7a71560a31f59 |
ASSESSING THE TOLERANCE OF NEURAL MACHINE TRANSLATION SYSTEMS AGAINST SPEECH RECOGNITION ERRORS
24 Apr 2019
Nicholas Ruiz nruiz@interactions.com
Fondazione Bruno Kessler
Italy
Interactions, LLC
USA
Mattia Antonino
Di Gangi
Fondazione Bruno Kessler
Italy
Nicola Bertoldi
Fondazione Bruno Kessler
Italy
Marcello Federico federico@fbk.eu
Fondazione Bruno Kessler
Italy
ASSESSING THE TOLERANCE OF NEURAL MACHINE TRANSLATION SYSTEMS AGAINST SPEECH RECOGNITION ERRORS
24 Apr 2019
Index Terms: speech translation, machine translation, evaluation, neural machine translation
Machine translation systems are conventionally trained on textual resources that do not model phenomena that occur in spoken language. While the evaluation of neural machine translation systems on textual inputs is actively researched in the literature, little has been discovered about the complexities of translating spoken language data with neural models. We introduce and motivate interesting problems one faces when considering the translation of automatic speech recognition (ASR) outputs on neural machine translation (NMT) systems. We test the robustness of sentence encoding approaches for NMT encoderdecoder modeling, focusing on word-based over byte-pair encoding. We compare the translation of utterances containing ASR errors in state-of-the-art NMT encoder-decoder systems against a strong phrase-based machine translation baseline in order to better understand which phenomena present in ASR outputs are better represented under the NMT framework than approaches that represent translation as a linear model.
Introduction
A substantial amount of progress has been made in Neural Machine Translation (NMT) for text documents. Research has shown that the encoder-decoder model with an attention mechanism generates high quality translations that exploit long range dependencies in an input sentence [1]. While NMT has proven to yield significant improvements for text translation over loglinear approaches to MT such as phrase-based machine translation (PBMT), it has yet to be shown the extent to which gains purported in the literature generalize to the scenario of spoken language translation (SLT), where the input sequence may be corrupted by noise in the audio signal and uncertainties during automatic speech recognition (ASR) decoding. Are NMT models implicitly better at modeling and mitigating ASR errors than the former state-of-the-art approaches to machine translation? As a preliminary work, we analyze the impact of ASR errors on neural machine translation quality by studying the properties of the translations provided by an encoder-decoder NMT system with an attention mechanism, against a strong baseline PBMT system that rivals the translation quality of Google Translate™ on TED talks.
We address the following questions regarding NMT:
1. How do NMT systems react when ASR transcripts are provided as input?
2. Do ASR error types in word error alignments impact SLT quality the same for NMT as PBMT? Or is NMT implicitly more tolerant against ASR errors?
3. Which types of sentences does NMT handle better than PBMT, and vice-versa?
To address these questions, we explore the impact of feeding ASR hypotheses, which may contain noise, disfluencies, and different representations on the surface text, to a NMT system that has been trained on TED talk transcripts that do not reflect the noisy conditions of ASR. Our experimental framework is similar to that of [2,3], with the addition of a ranking experiment to evaluate the quality of NMT against our PBMT baseline. These experiments are intended as an initial analysis with the purpose to suggesting directions to focus on in the future.
Neural versus Statistical MT
Before beginning our analysis, we summarize some of the biggest differences between NMT and other forms of statistical machine translation, such as PBMT.
[4] compare neural machine translation against three top-performing statistical machine translation systems in the TED talk machine translation track from IWSLT 2015. 1 The evaluation set consists of 600 sentences and 10,000 words, post-edited by five professional translators. In addition to reporting a 26% relative improvement in multi-reference TER (mTER), [5]'s encoder-decoder attention-based NMT system trained on full words outperformed state-of-the-art statistical machine translation (SMT) systems on English-German, a language pair known to have issues with morphology and whose syntax differs significantly from English in subordinate clauses. [4]'s analysis yields the following observations:
• Precision versus Sentence length: Although NMT outperformed every comparable log-linear MT system, they confirmed [6]'s findings that translation quality deteriorates rapidly as the sentence length approaches 35 words.
• Morphology: NMT translations have better case, gender and number agreement than PBMT systems.
• Lexical choice: NMT made 17% fewer lexical errors than any PBMT system.
• Word order: NMT yielded fewer shift errors in TER alignments than any SMT system. NMT yielded significantly higher Kendall Reordering Score (KRS) [7] values than any PBMT system. NMT generated 70% fewer verb order errors than the next-best hybrid phrase and syntax-based system.
Several SMT modeling challenges are exacerbated in NMT. While log-linear SMT translation models can handle large word vocabularies, NMT systems require careful modeling to balance vocabulary coverage and network size, since each token introduced increases the size of the hidden layers. Because of this constraint, [8] observe that only 69% of German nouns are covered with a 30,000-word vocabulary in an English-German WMT 2014 system. 2 Although noun compound splitting works well for German→English, English→German model performance does not improve significantly. In particular, named entities (e.g. persons, organizations, and locations) are underrepresented.
On the other hand, NMT has the ability to model subword units such as characters [9] or coarser grained segmentations of low frequency words [10] without substantial changes to the system architecture, unlike other SMT approaches. [11] have additionally demonstrated NMT's ability to translate between multiple language pairs with a neural translation model trained with a single attention mechanism.
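To make the idea of subword segmentation concrete, the following is a minimal sketch of byte-pair-encoding-style merge learning in the spirit of [10]; the toy vocabulary and the number of merges are assumptions, and real toolkits add many refinements (frequency thresholds, vocabulary handling, applying the learned merges at test time).

```python
# Minimal sketch of BPE-style merge learning over a word-frequency dictionary.
# The toy vocabulary and the merge count are illustrative assumptions only.
import re
from collections import Counter

def get_pair_counts(vocab):
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    # Replace the symbol pair only where both symbols are whole tokens.
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Words are space-separated symbol sequences with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(5):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print("merged:", best)
print(vocab)
```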
Although NMT models translate with higher precision, models are slow to train even with the most powerful GPUs, often taking weeks for the strongest systems to complete training. On the other hand, large-order PBMT systems trained in the ModernMT framework 3 may be trained within a few hours and can be adapted in near real-time with translation memories containing post-editions by professional translators.
Research Methodology
Similar to our experimental framework in [2,3], we collect English ASR hypotheses from the eight submissions on the tst2012 test set in the IWSLT 2013 TED talk ASR track [12]. Coupled with reference translations from the MT track, we construct a dataset consisting of the eight English ASR hypotheses for 1,124 utterances, a single unpunctuated reference transcript from the ASR track, and the reference translations from the English-French MT track. The English ASR hypotheses and reference transcript are normalized and punctuated according to the same approach as described in [3]. We use both BLEU [13] and Translation Edit Rate (TER) [14] as global evaluation metrics. TER and ∆TER over gold standard ASR outputs are used to observe sentence-level trends. We compute automatic translation scores, sentence-level system ranking, and take a closer look at the types of errors observed in the data. Below, we briefly describe the MT systems used in this experiment.
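As a simplified illustration of the sentence-level scoring used in this kind of analysis, the sketch below computes a plain Levenshtein word error rate and the delta over the error-free (gold) condition; actual TER additionally allows block shifts, which this sketch omits, and the example strings are invented for illustration.

```python
# Simplified sketch of sentence-level scoring: plain Levenshtein word error rate
# (TER also allows block shifts, omitted here) and the delta over the gold condition.

def wer(hyp, ref):
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            sub = d[i - 1][j - 1] + (h[i - 1] != r[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(h)][len(r)] / max(len(r), 1)

def delta_error(mt_on_asr, mt_on_gold, reference):
    """Degradation attributable to ASR errors: score(ASR input) - score(gold input)."""
    return wer(mt_on_asr, reference) - wer(mt_on_gold, reference)

ref = "la grande majorité des erreurs"
print(delta_error("le grande majorité des erreur", "la grande majorité des erreurs", ref))
# 0.4  (two extra word errors on a five-word reference)
```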
Neural MT system
Our NMT system is based on FBK's primary MT submission to the IWSLT 2016 evaluation for English-French TED talk translation [15]. The system is based on the sequence-to-sequence encoder-decoder architecture proposed in [1] and further developed by [5,10]. The system is trained on full-word text units to allow a direct comparison with our PBMT counterpart. We refer to this system as NEURAL for the remainder of our experiments.
Phrase-based MT system
Our phrase-based MT system (which we refer to as MMT) is built upon the ModernMT framework: an extension of the phrase-based MT framework introduced in [16] that enables context-aware translation for fast and incremental training. Context-aware translation is achieved by the partitioning of the training data into homogeneous domains by a context analyzer (CA), which permits the rapid construction and interpolation of domain-specific translation, language, and reordering models based on the context received during a decoding run. It also permits the underlying models to be modified online. The decoder also exploits other standard features (phrase, word, distortion, and unknown word penalties) and performs cube-pruning search. A detailed description of the ModernMT project can be found in [17].
SLT Evaluation
We first report the translation results on the evaluation task in Table 1. NMT outperforms our best PBMT system by 4.5 BLEU in the absence of ASR errors (gold) and by approximately 3 BLEU across all ASR hypothesis inputs. Overall, the introduction of ASR errors results in decreases in BLEU by 5.5(±0.8) and 5.4(±0.8) and TER increases of 6.0(±0.9) and 6.2(±0.9) for MMT and NEURAL, respectively. Table 2 provides the average sentence-level TER and ∆TER scores, which report the degradation of SLT quality by the presence of ASR errors. Although the average TER scores from the MMT outputs are higher, the ∆TER scores are lower than their NEURAL counterparts, implying that the MMT SLT outputs are closer to their gold standard MT outputs. This may suggest that NMT is more sensitive to local changes to an input caused by minor ASR errors.
MT system ranking
Are there ASR error conditions in which PBMT remains a better solution than NMT, and if so, what are the properties of these utterances that make them difficult for NMT? We take a closer look at the sentence-level translation scores by ranking the performance of each MT system on the utterances where ASR errors exist, in order to understand how each MT system handles noisy input. For each utterance, we rank the systems based on the sentence-level TER scores computed on their translation outputs over each ASR hypothesis. We also mark ties, in which both systems yield the same TER score. Results containing the counts and percentage of wins by MT system are provided in Table 3. The NEURAL and MMT scores are tied on over 20% of the utterances. For the better performing ASR systems (e.g. NICT, KIT), we observe a slightly higher proportion of utterances with better NMT translations and a reduced number of ties. On the right-hand side of Table 3 we report the average TER scores within each ranking partition of the data. For example, for the utterances that are translated better by MMT, we observe that the average TER scores for NEURAL differ from those of MMT by an absolute average of 10% TER. The converse is also true, suggesting that there is a subset of utterances that MMT translates better than NEURAL.
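The win/tie tallying described here can be sketched as follows; the per-sentence TER scorer is assumed to be supplied by the caller, and the dummy scorer in the example is purely illustrative.

```python
# Sketch of the sentence-level ranking: for each utterance, compare the TER of the
# NEURAL and MMT translations of the same ASR hypothesis and tally wins and ties.
from collections import Counter

def rank_systems(utterances, ter):
    """utterances: iterable of (neural_hyp, mmt_hyp, reference) triples.
    ter: a callable returning a sentence-level TER score (assumed available)."""
    tally = Counter()
    for neural_hyp, mmt_hyp, reference in utterances:
        t_neural, t_mmt = ter(neural_hyp, reference), ter(mmt_hyp, reference)
        if t_neural < t_mmt:
            tally["NEURAL"] += 1
        elif t_mmt < t_neural:
            tally["MMT"] += 1
        else:
            tally["Tie"] += 1
    return tally

# Example with a dummy scorer that just counts word mismatches by position.
dummy_ter = lambda h, r: sum(a != b for a, b in zip(h.split(), r.split()))
data = [("un exemple simple", "un exemple facile", "un exemple simple"),
        ("deux mots faux", "deux mots faux", "deux mots faux")]
print(rank_systems(data, dummy_ter))   # Counter({'NEURAL': 1, 'Tie': 1})
```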
We look into the translation errors caused by ASR errors by plotting the changes in MT system ranking as we shift from the perfect ASR scenario to the actual ASR results from the evaluation (Table 4). Across all ASR outputs, 70.2% of the MT evaluation ranking decisions remain the same when ASR errors create noisy input. The NEURAL model retains a higher rank 7.5% more often than MMT as ASR errors are introduced. Ranking ties remain 55.5% of the time. Of the remaining, the NEURAL model outperforms MMT 5.5% more often in the presence of ASR errors (25.0% versus 19.5%). These results confirm that at the corpus level NMT produces higher scoring translations in the presence of ASR errors.
Translation examples
Although NMT may outperform phrase-based SMT, our experiment shows that MMT still outperforms NEURAL 30.1% of the time. In order to understand this behavior, we provide three examples of key differences in how NEURAL and MMT mitigate FBK's ASR errors (Fig. 1). In utterance U4, NEURAL is missing the translation of two content words from its vocabulary. In the absence of errors, NEURAL-gold passes the source word "embody" through to its output without translating it. During ASR, "embody" is misrecognized as "body", which is also passed through without a translation. We find it strange that "body" was not translated as "corps", given that other utterances containing "body" receive that translation. After investigating further, we came across other cases of gold transcripts where "body" was not translated at all. Utterances U212, U214, and U242 have the phrase "body architect", but only U212 has a translation for the word "body":
I call myself a body architect. je m'appelle un corps architecte.
As a body architect, I'm fascinated with the human body en tant qu'architecte, je me suis retrouvé avec le corps humain
As a body architect, I've created en tant qu'architecte , j'ai créé
It is likely that NMT may not be able to translate contextual patterns it hasn't observed before. MMT, on the other hand, provides a valid translation for both words, although the meaning of the sentence is lost due to the translation of ASR errors. A PBMT system will translate phrases consistently, as long as there is not another overlapping phrase pair in the translation model that leads to a path in the search graph with a higher score. Utterance U85 in the TED talk test set shows longer range effects of ASR errors on translation in NMT. FBK's ASR recognized the utterance as "But when I step back, I felt myself at the cold, hard center of a perfect storm." In the translation of ASR-gold, both MT systems translate the expression "stepped back" in the sense of "returned". MMT-gold reorders "centre" incorrectly. ASR-hyp has a single error where the past tense suffix "-ed" on "step" was lost. NEURAL-ASR provides an adequate translation as "je recule", but in the process, the attention mechanism seems to have taken the incorrect source word and translation as context that corrupts the remainder of the translation. While MMT-ASR makes a translation error at the beginning of the sentence, the remainder of the translated sentence remains the same as its gold translation. This suggests that ASR errors may have longer range effects on NMT systems, observable even in sentences that lack long distance dependencies.
Utterance U296 demonstrates an example where misrecognitions of short function words can cause the duplication of content words in NMT. While MMT handles the misrecognition "and"⇒"an" by backing off and translating it independently from other phrases in the sentence, NEURAL attaches "photo" both to the article "an" and additionally outputs "photo" at its usual position. Since innocuous closed-class word errors occur often in ASR, this could pose a significant problem for NMT.
Mixed-effects analysis and error distribution
In order to quantify the effects of ASR errors on each system, we build linear mixed-effects models [18] in a similar manner to our mixed-effects analysis in [2,3]. We construct two sets of mixed-effects models, using the word error rate scores of the 8 ASR hypotheses as independent variables and the resulting increase in translation errors ∆TER as the response variable. The models contain random effect intercepts that account for the variance associated with the ASR system (SysID), the intrinsic difficulty of translating a given utterance (UttID), and a random effects slope accounting for the variability of word error rate scores (WER) across systems. Instead of treating each different MT system as a random effect in a joint mixed-effects model, we construct a mixed-effects model for each MT system with the purpose of comparing the degree to which each ASR error type explains the increase in translation difficulty. The models are built using R [19] and the lme4 library [20]. The fixed-effects coefficients and the variance of the random effects for each model are shown in Table 5.
Our first models (WER-only) focus on the effects of the global WER score on translation quality (∆TER). Our fitted models claim that each point of WER yields approximately the same change in ∆TER for NEURAL (0.61 ± 0.020) and MMT (0.56 ± 0.019).
Our second models (WERbasic) break WER into its Substitution, Deletion, and Insertion error alignments, each being normalized by the length of the reference transcript. According to the fixed effects of the model, insertion errors have a greater impact on translation quality in NMT than deletions. More importantly, substitution errors have a significantly stronger impact on translation quality in NMT, which reflects the behavior we observe in the translation examples from Fig. 1. MMT appears to be affected by insertion and deletion error types equally. We compare the average ASR error type frequencies in the FBK ASR utterances where NEURAL or MMT yield a better TER score. We introduce the "phonetic substitution span" error type from [3] to cover multi-word substitution errors (e.g. "anatomy" ⇒ "and that to me"). Focusing on utterances between 10 and 20 words, we observe in Table 6 that the cases where NEURAL scores highest consist of utterances with fewer deletion errors (0.22 versus 0.32). Although further investigation is needed to understand the interplay between substitution and deletion ASR errors in NMT, it is interesting to note that MMT seems to be more adept at handling error-prone ASR outputs, given the higher average WER (19.4% vs 17.7%).
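The paper fits these models with lme4 in R; as a rough, simplified stand-in, the sketch below fits fixed-effects-only least-squares coefficients for the substitution/deletion/insertion rates against ∆TER with numpy on synthetic data, ignoring the random-effect structure (SysID, UttID) that the full analysis accounts for.

```python
# Simplified stand-in for the WERbasic analysis: ordinary least squares of ∆TER on
# substitution/deletion/insertion rates. This ignores the random-effect structure
# (SysID, UttID) modeled with lme4 in the paper, so it is only a rough approximation.
import numpy as np

def fit_error_effects(sub, dele, ins, delta_ter):
    """Each argument is a 1-D array with one entry per (utterance, ASR system) pair."""
    X = np.column_stack([np.ones_like(sub), sub, dele, ins])
    coef, *_ = np.linalg.lstsq(X, delta_ter, rcond=None)
    return dict(zip(["intercept", "Sub", "Del", "Ins"], coef))

rng = np.random.default_rng(0)
n = 500
sub, dele, ins = rng.uniform(0, 0.2, (3, n))
# Synthetic data where substitutions hurt most, mimicking the shape of the NEURAL fixed effects.
delta_ter = 0.68 * sub + 0.43 * dele + 0.56 * ins + rng.normal(0, 0.02, n)
print(fit_error_effects(sub, dele, ins, delta_ter))
```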
Conclusion
We have introduced a preliminary analysis of the impact of ASR errors on SLT for models trained by neural machine translation systems. In particular, we identify the following as areas to focus on in new research in evaluating NMT for spoken language translation scenarios: (1) contextual patterns not observed during training -SMT systems usually can back off to shorter sized entries in their translation table; NMT behavior can be erratic.
(2) localized and minor ASR errors can cause long distance errors in translation. (3) NMT duplicates content words when minor ASR errors cause the modification of function words. Most of the observable errors above are caused by minor substitution errors caused by noisy ASR. We will expand this analysis further by evaluating NMT architectures that model coverage as well as the representation of inputs with subword units.
ASR system   WER ↓   MMT BLEU ↑   MMT TER ↓   NEURAL BLEU ↑   NEURAL TER ↓
gold          0.0     43.4         39.5        47.9            35.4
fbk          16.5     35.6         48.4        38.5            45.6
kit          10.1     38.1         45.0        41.8            41.7
mitll        11.4     37.7         45.8        41.4            42.4
naist        10.6     38.1         45.0        41.8            41.5
nict          9.2     38.7         44.7        42.5            41.1
prke         16.6     34.9         48.7        38.1            45.8
rwth         11.7     37.2         46.1        41.4            42.3
uedin        12.3     37.3         46.1        40.8            42.9

Table 1: A comparison of Neural MT versus Phrase-based MT on the SLT evaluation of TED talks (tst2012) from the IWSLT 2013 evaluation campaign [12]. Evaluation results are compared to a gold standard that assumes that no ASR errors occur.
SysID    MMT TER ↓   MMT ∆TER ↓   NEURAL TER   NEURAL ∆TER   DIFFERENCE TER   DIFFERENCE ∆TER
gold       39.6        0.0          35.6         0.0           -4.0             0.0
fbk        49.3        9.7          46.6         11.0          -2.7             1.3
kit        45.9        6.3          42.7         7.1           -3.2             0.8
mitll      46.8        7.2          43.7         8.1           -3.1             0.9
naist      45.6        6.1          42.1         6.5           -3.5             0.5
nict       45.1        5.5          41.9         6.3           -3.1             0.9
prke       49.4        9.8          46.5         10.9          -2.9             1.1
rwth       47.0        7.4          43.2         7.6           -3.8             0.2
uedin      46.7        7.1          43.8         8.2           -2.9             1.1

Table 2: Average utterance-level translation TER and ∆TER scores for the MMT and Neural MT systems. The average Neural MT TER scores are 3% better than the PBMT counterpart.
Table 3: Ranked evaluation of the SLT utterances containing ASR errors in tst2012. (Left) Counts of the winner decisions and the percentage of all of the decisions that were influenced by ASR errors. (Right) Mean TER scores across each sentence in the ranked set. The remainder of winner decisions are made on error-free ASR transcripts.
Table 4: Changes in sentence-level TER rankings as ASR errors are introduced.
WER-only (null model)
                      NEURAL                     MMT
Fixed effects         β          Std. Error      β          Std. Error
(Intercept)           4.35e−3    2.68e−3         −2.08e−5   1.92e−3
WER                   6.09e−1    1.98e−2 •       5.58e−1    1.85e−2 •
Random effects        Variance   Std. Dev.       Variance   Std. Dev.
UttID (Intercept)     0.01       0.08            0.00       0.05
WER                   0.23       0.48            0.22       0.47
SysID (Intercept)     0          0               0          0
Residual              0.01       0.07            0.00       0.06

WERbasic (Levenshtein alignment errors)
                      NEURAL                     MMT
Fixed effects         β          Std. Error      β          Std. Error
(Intercept)           4.87e−3    2.69e−3         −5.76e−5   1.93e−3
Sub                   6.80e−1    2.10e−2 •       5.35e−1    1.96e−2 •
Del                   4.28e−1    2.41e−2 •       5.94e−1    2.20e−2 •
Ins                   5.59e−1    3.01e−2 •       5.98e−1    2.68e−2 •

Table 5: Mixed-effects summary, comparing Neural MT (NEURAL) to Phrase-based MT (MMT) for SLT. Top: WER score as a single predictor of translation ∆TER. Bottom: Decomposing WER into the basic alignment error operations. Statistical significance at p < 10−4 is marked with •.
                 NEURAL            MMT               Tie
Length gold      14.75 ± 0.27      14.78 ± 0.37      14.92 ± 0.44
WER              17.74 ± 1.69      19.39 ± 2.54      17.28 ± 2.77
Sub              1.19 ± 0.14       1.15 ± 0.17       0.20 ± 1.12
Del              0.22 ± 0.05       0.32 ± 0.07       0.33 ± 0.12
Ins              0.22 ± 0.05       0.27 ± 0.08       0.35 ± 0.12
Sub-span         1.00 ± 0.15       0.92 ± 0.19       0.75 ± 0.21
TER              42.00 vs 57.66    47.71 vs 64.02    49.69

Table 6: Average ASR error counts for utterances translated best with NEURAL, MMT, or a tie. Translation TER is compared between the best MT system and the inferior MT system. Computed on utterances with reference (gold) length between 10-20 words.
1 The International Workshop of Spoken Language Translation.
2 WMT 2014 training data consists primarily of news texts, European parliament proceedings, and web crawled data. http://www.statmt.org/wmt14/translation-task.html
3 http://www.modernmt.eu
[1] Bahdanau, D., Cho, K., Bengio, Y.: Neural Machine Translation by Jointly Learning to Align and Translate. In: 5th International Conference on Learning Representations (ICLR), San Diego, USA (2015)
[2] Ruiz, N., Federico, M.: Assessing the Impact of Speech Recognition Errors on Machine Translation Quality. In: Association for Machine Translation in the Americas (AMTA), Vancouver, Canada, pp. 261-274 (2014)
[3] Ruiz, N., Federico, M.: Phonetically-Oriented Word Error Alignment for Speech Recognition Error Analysis in Speech Translation. In: IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, Arizona (2015)
[4] Bentivogli, L., Bisazza, A., Cettolo, M., Federico, M.: Neural versus phrase-based machine translation quality: a case study. In: Proceedings of EMNLP 2016, Austin, Texas, USA, pp. 257-267 (2016). http://aclweb.org/anthology/D/D16/D16-1025.pdf
[5] Luong, M.-T., Manning, C.D.: Stanford neural machine translation systems for spoken language domain. In: International Workshop on Spoken Language Translation, Da Nang, Vietnam (2015)
[6] Cho, K., van Merrienboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. In: Proceedings of the Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103-111 (2014). http://aclweb.org/anthology/W/W14/W14-4012.pdf
[7] Birch, A., Blunsom, P., Osborne, M.: A quantitative analysis of reordering phenomena. In: Proceedings of the Fourth Workshop on Statistical Machine Translation, pp. 197-205 (2009)
[8] Hirschmann, F., Nam, J., Fürnkranz, J.: What Makes Word-level Neural Machine Translation Hard: A Case Study on English-German Translation. In: Proceedings of the International Conference on Computational Linguistics, Osaka, Japan (2016)
[9] Chung, J., Cho, K., Bengio, Y.: A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147 (2016)
[10] Sennrich, R., Haddow, B., Birch, A.: Neural machine translation of rare words with subword units. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany, Volume 1: Long Papers (2016). http://aclweb.org/anthology/P/P16/P16-1162.pdf
[11] Firat, O., Cho, K., Sankaran, B., Vural, F.T.Y., Bengio, Y.: Multi-way, multilingual neural machine translation. Computer Speech & Language (2016)
[12] Cettolo, M., Niehues, J., Stüker, S., Bentivogli, L., Federico, M.: Report on the 10th IWSLT Evaluation Campaign. In: Proceedings of the International Workshop on Spoken Language Translation (2013)
[13] Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, pp. 311-318 (2002). http://aclweb.org/anthology-new/P/P02/P02-1040.pdf
[14] Snover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J.: A study of translation edit rate with targeted human annotation. In: 5th Conference of the Association for Machine Translation in the Americas (AMTA), Boston, Massachusetts (2006)
[15] Farajian, M.A., Chatterjee, R., Conforti, C., Jalalvand, S., Balaraman, V., Di Gangi, M.A., Ataman, D., Turchi, M., Negri, M., Federico, M.: FBK's Neural Machine Translation Systems for IWSLT 2016. In: Proceedings of the 9th International Workshop on Spoken Language Translation (IWSLT), Seattle, WA, USA (2016)
[16] Koehn, P., Schroeder, J.: Experiments in Domain Adaptation for Statistical Machine Translation. In: Proceedings of the Second Workshop on Statistical Machine Translation, Prague, Czech Republic, pp. 224-227 (2007). http://www.aclweb.org/anthology/W/W07/W07-0233
[17] Bertoldi, N., Caroselli, D., Madl, D., Cettolo, M., Federico, M.: ModernMT - Second Report on Database and MT Infrastructure. European Union Horizon 2020 research and innovation programme, Tech. Rep. D.32 (2016)
[18] Searle, S.R.: Prediction, mixed models, and variance components. Biometrics Unit, Cornell University, Tech. Rep. BU-468-M (1973). http://hdl.handle.net/1813/32559
[19] R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2013)
[20] Bates, D., Maechler, M., Bolker, B., Walker, S.: lme4: Linear mixed-effects models using Eigen and S4. R package version 1.1-6 (2014). http://CRAN.R-project.org/package=lme4
| [] |
[
"Cluster Based Symbolic Representation for Skewed Text Categorization",
"Cluster Based Symbolic Representation for Skewed Text Categorization"
] | [
"Lavanya Narayana \nDepartment of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia\n",
"Raju \nDepartment of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia\n",
"D S GuruMahamad Suhil mahamad45@yahoo.co.in \nDepartment of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia\n",
"Harsha S Gowda \nDepartment of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia\n"
] | [
"Department of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia",
"Department of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia",
"Department of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia",
"Department of Studies in Computer Science\nUniversity of Mysore\nMysoreIndia"
] | [] | In this work, a problem associated with imbalanced text corpora is addressed. A method of converting an imbalanced text corpus into a balanced one is presented. The presented method employs a clustering algorithm for conversion. Initially, to avoid the curse of dimensionality, an effective representation scheme based on a term class relevancy measure is adapted, which drastically reduces the dimension to the number of classes in the corpus. Subsequently, the samples of larger sized classes are grouped into a number of subclasses of smaller sizes to make the entire corpus balanced. Each subclass is then given a single symbolic vector representation by the use of interval valued features. This symbolic representation, in addition to being compact, helps in reducing the space requirement and also the classification time. The proposed model has been empirically demonstrated for its superiority on benchmarking datasets viz., Reuters 21578 and TDT2. Further, it has been compared against several other existing contemporary models, including a model based on support vector machines. The comparative analysis indicates that the proposed model outperforms the other existing models. | 10.1007/978-981-10-4859-3_19 | [
"https://arxiv.org/pdf/1706.07912v1.pdf"
] | 25,748,038 | 1706.07912 | 69e402de2626d4e56be63997511569a9e5db89a1 |
Cluster Based Symbolic Representation for Skewed Text Categorization
Lavanya Narayana
Department of Studies in Computer Science
University of Mysore
MysoreIndia
Raju
Department of Studies in Computer Science
University of Mysore
MysoreIndia
D S GuruMahamad Suhil mahamad45@yahoo.co.in
Department of Studies in Computer Science
University of Mysore
MysoreIndia
Harsha S Gowda
Department of Studies in Computer Science
University of Mysore
MysoreIndia
Cluster Based Symbolic Representation for Skewed Text Categorization
Feature Selection, Skewed Text Data, Clustering, Symbolic Data Representation, Text Classification
In this work, a problem associated with imbalanced text corpora is addressed. A method of converting an imbalanced text corpus into a balanced one is presented. The presented method employs a clustering algorithm for conversion. Initially, to avoid the curse of dimensionality, an effective representation scheme based on a term class relevancy measure is adapted, which drastically reduces the dimension to the number of classes in the corpus. Subsequently, the samples of larger sized classes are grouped into a number of subclasses of smaller sizes to make the entire corpus balanced. Each subclass is then given a single symbolic vector representation by the use of interval valued features. This symbolic representation, in addition to being compact, helps in reducing the space requirement and also the classification time. The proposed model has been empirically demonstrated for its superiority on benchmarking datasets viz., Reuters 21578 and TDT2. Further, it has been compared against several other existing contemporary models, including a model based on support vector machines. The comparative analysis indicates that the proposed model outperforms the other existing models.
Introduction
With the advancement of digital technology, the amount of text content available on the web has become unimaginably big. Automatic text categorization systems are being developed since last three decades in order to effectively manage such a huge quantity of text documents. Text categorization (TC) is the process of classifying a huge collection of text documents into predefined categories. It carries higher importance due to its huge impact on subsequent activities of text mining and also due to many applications involving text categorization such as spam filtering in emails, classification of medical documents, sentiment analysis etc., (Harish et al., 2010, Aggarwal andZhai, 2012).
Curse of dimensionality, preservation of semantics, and effective representation are the issues which make the problem of text classification a challenging one. The complexity of the problem gets doubled if the text corpora are skewed or imbalanced, as a class with a large number of samples normally dominates the classes with a small number of samples. In this paper, we address this problem of skewness by transforming imbalanced corpora into balanced corpora through application of a clustering algorithm class wise. Subsequent to partitioning a large class into a number of smaller sized subclasses, we also recommend a compact representation of the text documents by the use of interval valued features. The compact representation not only reduces storage requirements but also enables an efficient way of classification through a simple symbolic classifier (Guru and Nagendraswamy, 2006). Further, to overcome the curse of dimensionality, we adapt a novel representation scheme proposed in (Isa et al., 2008; Guru and Suhil, 2015) which reduces the dimension of the vector space to the number of classes present in the collection.
Overall organization of the paper is as follows. In section 2 we present brief summary of existing works. The proposed model is presented in section 3. The experimental results are discussed in section 4 followed by conclusions in section 5.
Related Works
From the literature we can understand that, the effort to design systems for automatic text categorization has the history of more than two decades (Hotho et al., 2005;Aggarwal and Zhai, 2012). Machine learning based TC systems carry the following general structure. All the training documents are preprocessed using stemming, pruning, stopwords removal to retain only content terms. Then a matrix representation to the entire training data is given using vector space model which uses the bag of words (terms) (Li and Jain, 1998;Rigutini, 2004). The dimension of such a matrix will be very high even for a dataset of reasonable size which makes the learning algorithms less effective. Hence, dimensionality reduction has been widely explored on text data as a mandatory step in the design of TC to increase the classification performance in addition to reduction in the dimension (Guyon and Elisseeff, 2003).
Most of the works in literature of TC have used either feature selection through ranking or feature extraction through transformation as the means of dimensionality reduction. A number of works can be traced in recent years addressing the problem of text classification through feature selection. Feature selection algorithms such as chisquare, information gain, and mutual information (Yang and Pedersen., 1997) though seem to be powerful techniques for text data, a number of novel feature selection algorithms based on genetic algorithm (Bharti and Singh., 2016;Ghareb et al., 2016), ant colony optimization (Dadaneh et al., 2016;Moradi and Gholampour., 2016;Uysal., 2016;Meena et al., 2012), Bayesian principle Zhang et al., 2016;Feng et al., 2012;Fenga et al., 2015;Sarkar et al., 2014), clustering of features (Bharti and Singh., 2015), global information gain (Shang et al., 2013), adaptive keyword (Tasci and Gungor., 2013), global ranking (Pinheiro et al., 2012;Pinheiro et al., 2015) are proposed.
On the other hand, a classifier is trained and evaluated with a small set of features obtained after dimensionality reduction (Sebastiani., 2002). Thus, it is a very long and time consuming process. However, many applications do not provide such a huge processing capability but still expect the process of classification to be very quick and effective. For such type of applications, it is essential to design a simple yet effective TC system which can predict the probable class of a test document quickly.
It shall be observed from the literature survey, that the existing works are being generic in nature, perform well on balanced text corpora, and are not that effective for skewed / imbalanced text corpora. Hence here in this work, our objective is to convert an imbalanced text corpus into a balanced one through clustering of text documents classwise and then giving it a compact representation.
Proposed model
The proposed model has two major stages: learning and classification.
Learning
The learning stage has three steps: (i) representation of the text documents in a lower dimensional space (Isa et al., 2008; Guru and Suhil, 2015), (ii) clustering, where the documents of large sized classes are clustered into subgroups to overcome the problem of skewness, and (iii) compact representation of documents, where a cluster of documents is given a single vector of interval valued features.
Representation of the Text Documents in Lower Dimensional Space
In this section, we present the representation scheme adapted for the text documents. We have intentionally kept this description as a separate section so that the reader clearly understands that we do not represent the documents using the conventional vector space model (VSM) with the bag of words (BoW) constructed for the entire collection. This is due to the fact that VSM leads to a very high dimensional sparse matrix which is not effective if directly used in computations, and hence dimensionality reduction has to be applied through either feature selection or transformation (Sebastiani, 2002). To alleviate this problem, Isa et al. (2008) have proposed an effective text representation scheme which can reduce the dimension of the documents to the number of classes at the time of representation itself. In addition to this, Guru and Suhil (2015) have proposed a novel term-weighting scheme called 'Term_class Relevance Measure (TCR)' to be used with the representation scheme of Isa et al. (2008) for achieving better performance. Hence, we adapt the representation from Isa et al. (2008) with the term weighting scheme of Guru and Suhil (2015). A brief overview of the representation and weighting scheme is presented in Fig 1. Consider a huge collection of 'N' text documents belonging to 'K' different semantically meaningful categories C1, C2, …, CK. Each document is first preprocessed using stemming and stop word removal to end up with a collection of content terms. In the representation provided by Isa et al. (2008), initially, a matrix F of size M × K is created for every document in the collection, where M is the number of terms available in the document, as shown in Fig. 1. Then, every entry F(i, j) of the matrix denotes the weight of term t_i with respect to class C_j, which is computed using the TCR measure (Guru and Suhil, 2015). TCR is defined as the ability of a term t_i in classifying a document D as a member of a class C_j, as given in (1):

TCR(t_i, C_j) = c × Class_TermWeight(t_i, C_j) × Class_TermDensity(t_i, C_j)    (1)
where c is a proportionality constant defined as the weight of the class C_j, as given in (2). Class_TermWeight and Class_TermDensity are respectively the weight and the density of t_i with respect to the class C_j, computed using equations (3) and (4) respectively:

ClassWeight(C_j) = (# documents in C_j) / (# documents in the training set)    (2)

Class_TermWeight(t_i, C_j) = (# documents in C_j containing t_i) / (# documents in the training set containing t_i)    (3)

Class_TermDensity(t_i, C_j) = (# occurrences of t_i in C_j) / (# occurrences of t_i in the training collection)    (4)
Then, a feature vector f of dimension K is created as a representative for the document, where f(j) is the average of the relevancies of all its terms to class C_j, computed from F. The main advantage of this representation is that a document with any number of terms is represented with a feature vector of dimension equal to the number of classes in the population, which is negligibly small in contrast to the feature vector that is created in any conventional VSM. Therefore, a great amount of dimensionality reduction is achieved at the time of representation itself, without the application of any dimensionality reduction technique.
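The following is a minimal sketch of this representation step. It assumes plain Python data structures for the corpus statistics; the function and variable names (build_tcr_table, docs_per_class, etc.) are illustrative and not taken from the paper.

```python
from collections import defaultdict

def build_tcr_table(docs, labels, num_classes):
    """Precompute the statistics needed by equations (1)-(4).

    docs: list of token lists; labels: list of class indices (0..K-1).
    Returns a function that maps a token list to a K-dimensional vector.
    """
    n_docs = len(docs)
    docs_per_class = defaultdict(int)        # number of documents in C_j
    doc_freq = defaultdict(int)              # number of documents containing t_i
    doc_freq_class = defaultdict(int)        # number of documents in C_j containing t_i
    term_count = defaultdict(int)            # occurrences of t_i in the whole collection
    term_count_class = defaultdict(int)      # occurrences of t_i in C_j

    for tokens, c in zip(docs, labels):
        docs_per_class[c] += 1
        for t in set(tokens):
            doc_freq[t] += 1
            doc_freq_class[(t, c)] += 1
        for t in tokens:
            term_count[t] += 1
            term_count_class[(t, c)] += 1

    def tcr(t, c):
        """Term_class Relevance of term t for class c, equation (1)."""
        class_weight = docs_per_class[c] / n_docs                                # eq. (2), the constant c
        tw = doc_freq_class[(t, c)] / doc_freq[t] if doc_freq[t] else 0.0        # eq. (3)
        td = term_count_class[(t, c)] / term_count[t] if term_count[t] else 0.0  # eq. (4)
        return class_weight * tw * td

    def represent(tokens):
        """K-dimensional document vector: mean TCR of its terms for each class."""
        return [sum(tcr(t, c) for t in tokens) / max(len(tokens), 1)
                for c in range(num_classes)]

    return represent
```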
Clustering
Availability of balanced text corpora plays a crucial role in the success of any text classification system. This is due to the fact that during the process of training a classifier, the classes with a more number of samples will dominate generally the other classes with a less number of training samples. One solution to handle the class imbalance problem is to convert the corpus into a balanced one. It is true in most of the cases of text classification problems that the variation within a class will increase with the increase in the size of the class. Hence, we perform clustering of documents within each class to convert the class into a collection of dense groups, subclasses. In the process, we also ensure that the sizes of clusters are more or less same.
Given a collection of N labeled documents belonging to K classes, say C1, C2, …, CK, where the i-th class Ci contains Ni documents, each represented by K features as described in Section 3.1.1, the documents of every class are partitioned into dense sub-clusters (the notation and clustering procedure are given below). For imbalanced datasets, the number of clusters obtained will vary from one class to another based on the size and variation within the class. Each cluster can then be treated as an independent class, and hence the original K-class classification problem on an imbalanced corpus becomes a Q-class classification problem on a balanced corpus.
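The class-wise clustering step can be sketched with SciPy's hierarchical clustering utilities, where the cutoff value ic is the empirically chosen inconsistency threshold for a class; the function and variable names here are illustrative, not from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def split_class_into_clusters(class_vectors, ic):
    """Group the documents of one class into dense sub-clusters.

    class_vectors: (Ni, K) array of reduced document vectors for class Ci.
    ic: inconsistency-coefficient threshold chosen empirically for this class.
    Returns an array of cluster ids (1..Qi), one per document.
    """
    Z = linkage(class_vectors, method="average")
    # 'inconsistent' cuts the dendrogram wherever a link's inconsistency exceeds ic.
    return fcluster(Z, t=ic, criterion="inconsistent")

# Example: apply per class so an imbalanced K-class corpus becomes a Q-class one.
# cluster_ids = {c: split_class_into_clusters(X[y == c], ic=1.15) for c in range(K)}
```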
Compact Representation
Recently, the concept of symbolic data analysis has gained much attention from the research community, since it has proven its effectiveness and simplicity in designing solutions for many pattern recognition problems (Nagendraswamy and Guru, 2007; Punitha and Guru, 2008; Guru and Prakash, 2009). A few attempts at text classification using the concepts of symbolic representation and classification can also be traced (Revanasiddappa et al., 2014). In this section, we propose to use interval valued symbolic data to effectively capture the variations within a cluster of text documents. Another advantage of such a representation is its simplicity in classifying an unknown document.
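A minimal sketch of building one interval valued representative per cluster is shown below, assuming the reduced K-dimensional document vectors; the helper name is illustrative.

```python
import numpy as np

def interval_representative(cluster_vectors):
    """Interval-valued symbolic representative of one cluster.

    cluster_vectors: (n, K) array of document vectors belonging to the cluster.
    Returns a (K, 2) array of intervals [mu_s - sigma_s, mu_s + sigma_s],
    one interval per feature, which together form R^{ij}.
    """
    mu = cluster_vectors.mean(axis=0)
    sigma = cluster_vectors.std(axis=0)
    return np.stack([mu - sigma, mu + sigma], axis=1)
```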
Classification
Given an unlabeled text document Dq, its class label is predicted by comparing it with all the representative vectors present in the knowledgebase. Initially, Dq is converted and represented as a feature vector {f_1^q, f_2^q, …, f_K^q} of dimension K as explained in Section 3.1.1. Then the similarity between the crisp vector Dq and an interval based representative vector R is computed using the similarity measure proposed by Guru and Nagendraswamy (2006).
Similarly, the similarity of Dq with all the Q representative vectors present in the knowledgebase is computed. The class of the cluster which gets the highest similarity with Dq is decided as the class of Dq, as shown in equation (5) below:

ClassLabel(D_q) = Class(Argmax_{i,j}(SIM(D_q, R^{ij})))    (5)

where R^{ij} is the representative of the j-th cluster of the i-th class.
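A small sketch of this decision rule follows, using the interval representatives built earlier; it assumes the similarity measure of Guru and Nagendraswamy (2006) in the form reconstructed later in this paper, and the function names are illustrative.

```python
import numpy as np

def interval_similarity(query, representative):
    """SIM(Dq, R): average per-feature similarity of a crisp vector to intervals.

    query: (K,) feature vector of Dq; representative: (K, 2) array of intervals.
    """
    lo, hi = representative[:, 0], representative[:, 1]
    inside = (query >= lo) & (query <= hi)
    outside = np.maximum(1.0 / (1.0 + np.abs(lo - query)),
                         1.0 / (1.0 + np.abs(hi - query)))
    return np.where(inside, 1.0, outside).mean()

def classify(query, representatives, cluster_classes):
    """Assign Dq the class of the most similar cluster representative (eq. 5)."""
    sims = [interval_similarity(query, r) for r in representatives]
    return cluster_classes[int(np.argmax(sims))]
```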
We have conducted a series of experiments to validate the applicability and efficiency of the proposed model. We have also implemented Support Vector Machine (SVM) based classification to demonstrate the superiority of the proposed model. The performance of the proposed model has been evaluated using Precision, Recall and F-measure in terms of both micro and macro averaging. The following sections describe the skewed datasets considered for experimentation and the results obtained.
Text Corpora
To assess the performance of the proposed model, we have conducted experiments on two commonly used benchmarking skewed text corpora, viz., Reuters-21578 and TDT2. The Reuters corpus is a collection of 21578 news articles taken from the Reuters newswire (available at http://www.daviddlewis.com/resources/testcollections/reuters21578/). The total number of topics is 135, and a document may belong to multiple classes. In our experiments we use documents from the top 10 categories. There are in total 7285 documents distributed into different classes with a large degree of skew, as shown in Fig. 2. The TDT2 corpus (NIST Topic Detection and Tracking corpus) consists of data collected during the first half of 1998 and taken from 6 sources, including 2 newswires (APW, NYT), 2 radio programs (VOA, PRI) and 2 television programs (CNN, ABC). It consists of 11201 on-topic documents which are classified into 96 semantic categories. In our experiments we have chosen the top 20 categories based on the number of documents, to arrive at a subset of 8741 documents distributed with a high degree of skew across classes, as shown in Fig. 2. In our experiments, we vary the training set from 10 to 80 percent to verify the performance of the classifier.
Results and Discussion
In this section, we present the details of the results obtained by the proposed method and compare them with those of the conventional SVM based classification on the Reuters-21578 and TDT2 text corpora. The experiments have been conducted by varying the percentage of training from 10 to 80 percent, with 5 random trials each. For every set of training samples, performance is studied in terms of precision, recall and F-measures using both micro and macro averaging. Fig. 3 and Fig. 4 show the performance of the proposed method in comparison with the SVM classifier on the Reuters-21578 corpus using macro and micro averaged F-measures respectively. It can be observed from Fig. 3 and Fig. 4 that the proposed method performed better on each training collection than the SVM based classification. A similar study is made on the TDT2 corpus and the results are shown in Fig. 5 and Fig. 6 for macro averaged and micro averaged F-measures respectively. A similar observation can be made for the TDT2 corpus: the performance of the proposed method is better than that of the SVM based method.
Further, we have also studied the classwise performance of the proposed method along with the SVM classifier based method in terms of precision and recall. This helps in demonstrating the performance of the proposed method with respect to larger and smaller classes and in comparing it with that of the SVM classifier. Fig. 7 and Fig. 9 show the performance in terms of precision for the SVM classifier and the proposed method respectively, whereas Fig. 8 and Fig. 10 show the respective performances in terms of recall values on the Reuters-21578 corpus. Similarly, Fig. 11 and Fig. 13 show the performance in terms of precision for the SVM classifier and the proposed method respectively, whereas Fig. 12 and Fig. 14 show the respective performances in terms of recall values on the TDT2 corpus. Overall observations made from all the figures showing classwise performance with an increasing number of features are as follows. For SVM based classification, the value of precision drops suddenly for small classes and reaches its maximum for large classes, whereas the value of recall increases for small classes and drops suddenly for large classes. In contrast, the proposed method initially shows an increase in performance up to 500 features, followed by steady performance thereafter in terms of both precision and recall, and hence the overall performance in terms of F-measure is improved significantly. Table 1 shows the best performance obtained by the proposed method for both the Reuters-21578 and TDT2 corpora in terms of macro and micro F-measures. We also show the maximum number of clusters formed for each corpus. It can be observed that the number of clusters formed is much smaller than the total number of training samples considered for training. In Table 2, we also compare the results of the proposed method with those of the state of the art methods. It can be seen from Table 2 that the proposed method outperforms most of the contemporary methods, in addition to being very efficient since it works with only K features (where K is the number of classes present in the corpus), whereas the other methods need at least a few hundred features to achieve better performance.
Conclusions
In this work, a method of converting an imbalanced text corpus into a balanced one is presented by exploiting the notion of data clustering. The problem due to the skewness of a corpus is addressed. For the purpose of overcoming the curse of dimensionality, we have adopted our earlier representation model, which accomplishes the reduction during the representation of the documents itself. A method of compact representation of text data by the use of interval-valued data is presented in a feature space of dimension equal to the number of classes. It has been experimentally argued that the proposed model is effective in addition to being simple. A comparative analysis indicates that the proposed model outperforms several other existing contemporary models. The finding of this work is that splitting a larger class of text documents into several smaller subclasses during training enhances the performance of a classifier.
Fig 1. Representation scheme for a document d
Fig 2. Distribution of samples in Reuters-21578 and TDT2 corpora
Fig 3. Comparison of performances of the proposed model and SVM based model for Reuters-21578 dataset using Macro averaged F-measure
Fig 4. Comparison of performances of the proposed model and SVM based model for Reuters-21578 dataset using Micro averaged F-measure
Fig 5. Comparison of performances of the proposed model and SVM based model for TDT2 dataset using Macro averaged F-measure
Fig 6. Comparison of performances of the proposed model and SVM based model for TDT2 dataset using Micro averaged F-measure
Fig 7. Classwise Precision obtained by SVM classifier on Reuters-21578 corpus under varying percentage of training samples
Fig 8. Classwise Recall obtained by SVM classifier on Reuters-21578 corpus under varying percentage of training samples
Fig 9. Classwise Precision obtained by the proposed model on Reuters-21578 corpus under varying percentage of training samples
Fig 10. Classwise Recall obtained by the proposed model on Reuters-21578 corpus under varying percentage of training samples
Fig 11. Classwise Precision obtained by SVM classifier on TDT2 corpus under varying percentage of training samples
Fig 12. Classwise Recall obtained by SVM classifier on TDT2 corpus under varying percentage of training samples
Fig 13. Classwise Precision obtained by the proposed model on TDT2 corpus under varying percentage of training samples
Fig 14. Classwise Recall obtained by the proposed model on TDT2 corpus under varying percentage of training samples
Let Di = {Di1, Di2, …, DiNi} be the set of documents of the class Ci. A class Ci with Ni documents is grouped into Qi dense clusters using hierarchical clustering, denoted Cl^i = {cl_1^i, cl_2^i, …, cl_{Qi}^i}. The number of clusters is decided automatically using the inconsistency coefficient. The inconsistency coefficient ic characterizes each link in a cluster tree by comparing its height with the average height of other links at the same level of the hierarchy; the higher the value of this coefficient, the less similar the objects connected by the link. The value of ic is decided empirically for each class. Let Q1, Q2, …, QK respectively be the number of clusters obtained for the K different classes, and let Q = Σ_{i=1}^{K} Qi be the total number of clusters.
Given a cluster cl_j^i of a class Ci with N_j^i documents D^{ij} = {D_1^{ij}, D_2^{ij}, …, D_{N_j^i}^{ij}}, it is represented by an interval valued symbolic representative vector R^{ij} as follows. Let every document be represented by a feature vector of dimension K given by {f_1, f_2, …, f_K}. Then, with respect to every feature f_s, the documents of the cluster are aggregated in the form of an interval [μ_s − σ_s, μ_s + σ_s], where μ_s and σ_s are respectively the mean and standard deviation of the values of f_s in the cluster. Hence, R^{ij} contains K intervals corresponding to the K features, R^{ij} = {R_1^{ij}, R_2^{ij}, …, R_K^{ij}}, where R_s^{ij} = [μ_s − σ_s, μ_s + σ_s] is the interval formed for the s-th feature of the j-th cluster cl_j^i of the i-th class Ci. This process of creating interval valued symbolic representatives is applied to all the Q clusters individually to obtain Q interval representative vectors {R^{11}, R^{12}, …, R^{1Q_1}, R^{21}, R^{22}, …, R^{2Q_2}, …, R^{K1}, R^{K2}, …, R^{KQ_K}}, which are then stored in the knowledgebase for the purpose of classification.
The similarity between the crisp feature vector of Dq and an interval valued representative vector R is computed as follows:

SIM(D_q, R) = (1/K) Σ_{s=1}^{K} SIM(f_s^q, R_s)

where

SIM(f_s^q, R_s) = 1, if (μ_s − σ_s) ≤ f_s^q ≤ (μ_s + σ_s);
SIM(f_s^q, R_s) = max( 1 / (1 + abs((μ_s − σ_s) − f_s^q)), 1 / (1 + abs((μ_s + σ_s) − f_s^q)) ), otherwise.
Table 1: The best performance of the proposed method on Reuters-21578 and TDT2 corpora in terms of Macro-F and Micro-F

Text Corpus   | No. of Training Samples | Maximum No. of Clusters Formed | Macro-F | Micro-F
Reuters-21578 | 5828                    | 636                            | 70.75   | 82.03
TDT2          | 7867                    | 582                            | 76.92   | 83.74
Table 2: Comparison of the results of the proposed method with the state of the art techniques on Reuters-21578

Author and Year         | Method                                                         | Macro-F (No. of Features) | Micro-F (No. of Features)
Uysal, 2016             | IG + IGFSS + SVM                                               | 67.53 (500)               | 86.473 (300)
Uysal and Gunal, 2012   | DFS + SVM                                                      | 66.55 (200)               | 86.33 (500)
Pinheiro et al., 2015   | MFD + BNS                                                      | 64 (254)                  | 81.51 (254)
Pinheiro et al., 2012   | ALOFT + MOR                                                    | 62.13 (135)               | 80.47 (135)
Rehman et al., 2015     | DFS, RDC + SVM                                                 | 63.47 (500)               | 81.98 (500)
Aghdam et al., 2009     | ACO                                                            | 78.42 (>= 3600)           | 89.08 (>= 3600)
Proposed Method         | Reduced representation + clustering + symbolic representation | 70.75 (10)                | 82.03 (10)
Acknowledgements

The second author of this paper acknowledges the financial support rendered by the University of Mysore under UPE grants for the High Performance Computing laboratory. The first and fourth authors of this paper acknowledge the financial support rendered by Pillar4 Company, Bangalore.
Mining text data. C C Aggarwal, C X Zhai, 978-1-4614-3222-7SpringerAggarwal C. C., Zhai C X., (2012). Mining text data, Springer, ISBN 978-1-4614-3222-7.
Text feature selection using ant colony optimization. M H Aghdam, N G Aghaee, M E Basiri, Expert Systems with Applications. 363Aghdam M H., Aghaee N G.,Basiri M E.,(2009) Text feature selection using ant colony optimization, Expert Systems with Applications, 36(3)-2, pp. 6843-6853.
Comparison of term frequency and document frequency based feature selection metrics in text categorization. N Azam, J Yao, Expert Systems with Applications. 39Azam, N and J.Yao., (2012). Comparison of term frequency and document frequency based feature selection metrics in text categorization. Expert Systems with Applications,39, pp. 4760-4768.
Hybrid dimension reduction by integrating feature selection with feature extraction method for text clustering. K K Bharti, P K Singh, Expert Systems with Applications. 42Bharti K.K., and Singh P.K.,(2015). Hybrid dimension reduction by integrating feature selection with feature extraction method for text clustering. Expert Systems with Applications,42, pp. 3105-3114.
Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. K K Bharti, P K Singh, Applied Soft Computing. 43Bharti K.K., and Singh P.K.,(2016).Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. Applied Soft Computing,43, pp. 20-34.
Interactive textual feature selection for consensus clustering. G N Corrêa, R M Marcacini, E R Hruschka, S O Rezende, Pattern Recognition Letters. 52Corrêa G.N., Marcacini R.M., Hruschka E.R., Rezende S.O., (2015). Interactive textual feature selection for consensus clustering. Pattern Recognition Letters,52, pp. 25-31.
Unsupervised probabilistic feature selection using ant colony optimization. B Z Dadaneh, H Y Markid, Zakerolhosseini A , Expert Systems with Applications. 53Dadaneh B.Z., Markid H.Y., and Zakerolhosseini A.,(2016). Unsupervised probabilistic feature selection using ant colony optimization. Expert Systems with Applications,53, pp. 27-42.
A Bayesian feature selection paradigm for text classification. G Feng, J Guo, B Y Jing, L Hao, Information Processing and Management. 48Feng G., Guo J., Jing B.Y., and Hao L., (2012). A Bayesian feature selection paradigm for text classification. Information Processing and Management,48, pp. 283-302.
Feature subset selection using naive Bayes for text classification. G Fenga, J Guoa, B Y Jing, T Sunb, Pattern Recognition Letters. 65Fenga G., Guoa J., Jing B.Y., and Sunb T.,(2015). Feature subset selection using naive Bayes for text classification. Pattern Recognition Letters,65, pp. 109-115.
Hybrid feature selection based on enhanced genetic algorithm for text categorization. A S Ghareb, A A Bakar, A R Hamdan, Expert Systems With Applications. 49Ghareb A.S., Bakar A.A., and Hamdan A.R.,(2016). Hybrid feature selection based on enhanced genetic algorithm for text categorization. Expert Systems With Applications, 49, pp. 31-47.
Symbolic representation of text documents. D S Guru, B S Harish, Manjunath S , Proceedings of the Third Annual ACM Bangalore Conference (COMPUTE '10). the Third Annual ACM Bangalore Conference (COMPUTE '10)New York, NY, USAACMArticle 18 , 4 pagesGuru D. S., Harish B. S., and Manjunath S.. (2010). Symbolic representation of text documents. In Proceedings of the Third Annual ACM Bangalore Conference (COMPUTE '10). ACM, New York, NY, USA, , Article 18 , 4 pages.
Symbolic Representation of Two-Dimensional Shapes. D S Guru, H S Nagendraswamy, Pattern Recognition Letters. 28Guru D.S., and Nagendraswamy H.S., (2006). Symbolic Representation of Two-Dimensional Shapes. Pattern Recognition Letters, 28, pp. 144-155.
Online Signature Verification and Recognition: An Approach Based on Symbolic Representation. D S Guru, H N Prakash, IEEE TPAMI. 316Guru D.S., Prakash H.N. (2009). Online Signature Verification and Recognition: An Approach Based on Symbolic Representation. IEEE TPAMI. 31(6), pp.1059-1073.
A Novel Term_Class Relevence Measure for Text Categorization. D S Guru, M Suhil, Procedia Computer Science. 45Guru, D.S., and Suhil M., (2015). A Novel Term_Class Relevence Measure for Text Categorization. Procedia Computer Science, 45, pp.13-22.
An introduction to variable and feature selection. I Guyon, A Elisseeff, JMLR. 3Guyon, I., Elisseeff, A. (2003). An introduction to variable and feature selection, JMLR,3, pp.1157-1182.
Representation and Classification of Text Documents: A Brief Review. B S Harish, D S Guru, Manjunath S , IJCA Special Issue on RTIPPR. Harish B. S., Guru D. S., and Manjunath. S., (2010). Representation and Classification of Text Documents: A Brief Review. IJCA Special Issue on RTIPPR, pp. 110-119.
A brief survey of text mining. A Hotho, A Nurnberger, G Paab, Journal for ComputationalLinguistics and Language Technology. 20Hotho A., Nurnberger A. and Paab G., (2005). A brief survey of text mining. Journal for ComputationalLinguistics and Language Technology, 20, pp.19-62.
Text Document Preprocessing with the Bayes Formul for Classification Using the Support Vector Machine. D Isa, L H Lee, V P Kallimani, Rajkumar R , IEEE TKDE. 20Isa D., Lee L.H., Kallimani V.P., and Rajkumar R.,(2008). Text Document Preprocessing with the Bayes Formul for Classification Using the Support Vector Machine. IEEE TKDE, 20, pp. 1264-1272.
Deep feature weighting for naive Bayes and its application to text classification. L Jiang, C Li, S Wang, L Zhang, Engineering Applications of Artificial Intelligence. 52Jiang L., Li C., Wang S., and Zhang L.,(2016). Deep feature weighting for naive Bayes and its application to text classification. Engineering Applications of Artificial Intelligence,52, pp. 26-39.
Classification of text documents. Y H Li, A K Jain, The Computer Journal. 418Li Y. H. and Jain A. K., (1998). Classification of text documents. The Computer Journal, 41(8), pp. 537-546.
An enhanced ACO algorithm to select features for text categorization and its parallelization. M J Meena, K R Chandran, A Karthik, A V Samuel, Expert Systems with Applications. 39Meena M.J., Chandran K.R., Karthik A., and Samuel A.V.,(2012). An enhanced ACO algorithm to select features for text categorization and its parallelization. Expert Systems with Applications,39, pp. 5861-5871.
A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy. P Moradi, Gholampour M , Applied Soft Computing. 43Moradi P, and Gholampour M.,(2016),. A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy. Applied Soft Computing,43, pp. 117- 130.
A New Method of Representing and Matching Two Dimensional Shapes. H S Nagendraswamy, D S Guru, International Journal of Image and Graphics. 72Nagendraswamy H.S.., Guru D.S.,(2007). A New Method of Representing and Matching Two Dimensional Shapes, International Journal of Image and Graphics, 7(2), pp. 377-405.
Data-driven global-ranking local feature selection methods for text categorization. R H W Pinheiro, G D C Cavalcanti, T I Ren, Expert Systems with Applications. 42Pinheiro R.H.W., Cavalcanti G.D.C., and Ren T.I.,(2015). Data-driven global-ranking local feature selection methods for text categorization. Expert Systems with Applications,42, pp. 1941-1949.
A global-ranking local feature selection method for text categorization. R H W Pinheiro, G D C Cavalcanti, R F Correa, T I Ren, Expert Systems with Applications. 39Pinheiro R.H.W., Cavalcanti G.D.C., Correa R.F., and Ren T.I.,(2012). A global-ranking local feature selection method for text categorization. Expert Systems with Applications,39, pp. 12851-12857.
Symbolic image indexing and retrieval by spatial similarity: An approach based on B-tree. P Punitha, D S Guru, Pattern Recognition. 6Punitha P., Guru D.S.,(2008). Symbolic image indexing and retrieval by spatial similarity: An approach based on B-tree, Pattern Recognition, 41(6), pp.2068-2085.
Relative discrimination criterion -A novel feature ranking method for text data. A Rehman, Expert Systems with Applications. 42Rehman, A, et all.,(2015). Relative discrimination criterion -A novel feature ranking method for text data. Expert Systems with Applications, 42, pp. 3670-3681.
M B Revanasidappa, B S Harish, Manjunath S , Document Classification Using Symbolic Classifiers. International Conference on Contemprory Computing and Informatics(IC3I). Revanasidappa M.B., Harish B.S., and Manjunath S.,(2014). Document Classification Using Symbolic Classifiers. International Conference on Contemprory Computing and Informatics(IC3I), pp. 299-303.
Automatic text processing: Machine learning techniques. L Rigutini, University of SienaPh.D. ThesisRigutini L., (2004). Automatic text processing: Machine learning techniques. Ph.D. Thesis, University of Siena.
A Novel Feature Selection Technique for Text Classification Using Naive Bayes. S D Sarkar, S Goswami, A Agarwal, Aktar J , Hindawi Publ. CorpSarkar S.D., Goswami S., Agarwal A, and Aktar J.,(2014). A Novel Feature Selection Technique for Text Classification Using Naive Bayes. Hindawi Publ. Corp., pp. 1-10.
Machine learning in automated text categorization. F Sebastiani, ACM Comput. Surveys. 341Sebastiani F., (2002). Machine learning in automated text categorization. ACM Comput. Surveys, 34 (1), pp. 1-47.
Feature selection via maximizing global information gain for text classification. Knowledge-Based Systems. C Shang, M Li, S Feng, Q Jiang, Fan J , 54Shang C., Li M., Feng S., Jiang Q., and Fan J.,(2013). Feature selection via maximizing global information gain for text classification. Knowledge-Based Systems, 54, pp. 298-309.
Comparison of text feature selection policies and using an adaptive framework. S Tasci, T Gungor, Expert Systems with Applications. 40Tasci S., and Gungor T., (2013). Comparison of text feature selection policies and using an adaptive framework. Expert Systems with Applications,40, 4871-4886.
An improved global feature selection scheme for text classification. A K Uysal, Expert SystemsWith Applications. 43Uysal,A .K.,(2016). An improved global feature selection scheme for text classification.Expert SystemsWith Applications,43, 82-92.
A novel probabilistic feature selection method for text classification. Knowledge-Based Systems. A Uysal, S , 36Uysal,A.K and S.Gunal., (2012). A novel probabilistic feature selection method for text classification. Knowledge-Based Systems,36, 226-235.
t-Test feature selection approach based on term frequency for text categorization. D Wang, H Zhang, R Li, W Lv, Wang D , Pattern Recognition Letters. 45Wang D., Zhang H., Li R., Lv W., and Wang D.,(2014). t-Test feature selection approach based on term frequency for text categorization. Pattern Recognition Letters,45, 1-10.
A new feature selection based on comprehensive measurement both in inter-category and intra-category for text categorization. J Yang, Y Liu, X Zhu, Z Liu, X Zhang, Information Processing and Management. 48Yang J, Liu Y, Zhu X, Liu Z and Zhang X.,(2012). A new feature selection based on comprehensive measurement both in inter-category and intra-category for text categorization. Information Processing and Management,48, 741-754.
A comparative study on feature selection in text categorization. Y Yang, J O Pedersen, Proceedings of the 14th International Conference on Machine Learning. the 14th International Conference on Machine Learning97Yang, Y., and Pedersen, J. O.,(1997). A comparative study on feature selection in text categorization. In: Proceedings of the 14th International Conference on Machine Learning, 97,412-420.
Two feature weighting approaches for naive Bayes text classifiers. Knowledge-Based Systems. L Zhang, L Jiang, C Li, G Kong, 100Zhang L., Jiang L., Li C., and Kong G.,(2016). Two feature weighting approaches for naive Bayes text classifiers. Knowledge-Based Systems,100(c), 137-144.
A discriminative and semantic feature selection method for text categorization. W Zong, F Wu, L K Chu, D Sculli, Int J. Production Economics. 165Zong W, Wu F, Chu L.K., and Sculli D.,(2015). A discriminative and semantic feature selection method for text categorization. Int J. Production Economics,165, 215-222.
| [] |
[
"Reducing Retraining by Recycling Parameter-Efficient Prompts",
"Reducing Retraining by Recycling Parameter-Efficient Prompts"
] | [
"Brian Lester brianlester@google.com \nGoogle Research\n\n",
"Joshua Yurtsever jyurtsever@google.com \nGoogle Research\n\n",
"Siamak Shakeri siamaks@google.com \nGoogle Research\n\n",
"Noah Constant nconstant@google.com \nGoogle Research\n\n"
] | [
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Google Research\n"
] | [] | Parameter-efficient methods are able to use a single frozen pre-trained large language model (LLM) to perform many tasks by learning task-specific soft prompts that modulate model behavior when concatenated to the input text. However, these learned prompts are tightly coupled to a given frozen model; if the model is updated, corresponding new prompts need to be obtained. In this work, we propose and investigate several approaches to "Prompt Recycling", where a prompt trained on a source model is transformed to work with the new target model. Our methods do not rely on supervised pairs of prompts, task-specific data, or training updates with the target model, which would be just as costly as re-tuning prompts with the target model from scratch. We show that recycling between models is possible (our best settings are able to successfully recycle 88.9% of prompts, producing a prompt that out-performs baselines), but significant performance headroom remains, requiring improved recycling techniques. | 10.48550/arxiv.2208.05577 | [
"https://export.arxiv.org/pdf/2208.05577v1.pdf"
] | 251,492,892 | 2208.05577 | 2d3b6058370b804009fd7866098f4fca2d1894ca |
Reducing Retraining by Recycling Parameter-Efficient Prompts
Brian Lester brianlester@google.com
Google Research
Joshua Yurtsever jyurtsever@google.com
Google Research
Siamak Shakeri siamaks@google.com
Google Research
Noah Constant nconstant@google.com
Google Research
Reducing Retraining by Recycling Parameter-Efficient Prompts
Parameter-efficient methods are able to use a single frozen pre-trained large language model (LLM) to perform many tasks by learning task-specific soft prompts that modulate model behavior when concatenated to the input text. However, these learned prompts are tightly coupled to a given frozen model; if the model is updated, corresponding new prompts need to be obtained. In this work, we propose and investigate several approaches to "Prompt Recycling", where a prompt trained on a source model is transformed to work with the new target model. Our methods do not rely on supervised pairs of prompts, task-specific data, or training updates with the target model, which would be just as costly as re-tuning prompts with the target model from scratch. We show that recycling between models is possible (our best settings are able to successfully recycle 88.9% of prompts, producing a prompt that out-performs baselines), but significant performance headroom remains, requiring improved recycling techniques.
Introduction
Fine-tuning pre-trained large language models (LLMs) is the current de-facto approach for achieving state-of-the-art results in NLP (Radford et al., 2018; Devlin et al., 2019; Raffel et al., 2020). While the exact details of the strongest models shift over time, there has been a clear trend that bigger models have better performance (Kaplan et al., 2020; Rae et al., 2022; Chowdhery et al., 2022). As these models have grown, the computational resources required and engineering complexity have grown as well. This trend is especially challenging in the multi-task setting, where each task creates a new fork of the model. Several parameter-efficient methods have been developed to mitigate these problems. Some methods only train part of the model (Houlsby et al., 2019), while others use specially crafted inputs: either discrete text tokens (Brown et al., 2020) or learned soft prompts (Qin and Eisner, 2021; Zhong et al., 2021; Li and Liang, 2021; Lester et al., 2021). By swapping out small components tailored to individual tasks, one can have a multi-task model without the need to serve or store many copies.
The specific LLMs that serve as the best starting point for building downstream models change over time, due to several unavoidable factors. New facts need to be learned based on world events, word meanings drift (Kulkarni et al., 2015), and new terms (e.g., "COVID") enter common usage, requiring periodic refreshes. Additionally, new research regularly yields improvements through changes to model architecture, size, and details of training. When a new pre-trained model is released, new versions of task-specific models created via finetuning need to be trained to take advantage of the improvements.
Just like fine-tuning, parameter-efficient methods are closely tied to the frozen model they were trained with. These approaches rely on making modifications to the activations of the model when presented with new input, to steer the model towards solving a new task. When the frozen model changes, the exact patterns of how it processes a given input change as well, and the modifications these approaches induce no longer make sense. Thus, parameter-efficient methods also require retraining when the frozen models change.
To mitigate this issue, we propose "Prompt Recycling" where soft prompts, learned via Prompt Tuning (Lester et al., 2021), for a source model M s , are transformed to work with a target model M t , without requiring any training of the target model. This process is illustrated in Figure 1.
We present several recycling methods that center around the known correspondence between the embedded representation of a given word in each model. Our methods do not require paired prompts (two prompts that solve the same task, one learned using M s and one learned with M t ) or additional training of the prompt with M t . Requiring either of these would likely result in methods that are no more computationally efficient than simply training a new prompt from scratch.
We demonstrate that our recycling methods can facilitate the transfer of prompts between models. Our best settings are able to successfully recycle 88.9% of prompts, where we count success as producing a prompt that performs better than the target model's zero-shot capabilities. However, recycled performance still lags behind prompts trained with the target model from scratch (see Table 1). Additionally we find that recycling from a larger source model to a smaller target model increases reliability and performance of the recycled prompt.
Related Work
Our vocab-to-vocab transformations are similar to cross-lingual embedding mapping (Mikolov et al., 2013), except that our mapping is applied to models trained on the same language.
The "soft prompt transfer" technique of Vu et al. (2022) is similar to ours in that prompts are reused between different models; however, Vu et al. (2022) focuses on transfer between different tasks with the same pre-trained model while we focus on transfer between different models trained with the same task. Additionally, their method assumes the prompt will be updated with training data for the new task. In contrast, our method does not update the prompt after recycling as that as it would remove the computational advantage over training a prompt on M t directly. The finding from Vu et al. (2022) that one prompt can be successfully used to initialize another suggests that using a recycled prompt as initialization could also be worth investigating.
Work from Su et al. (2022) also investigates the transfer of prompts across models. However, their work focuses on knowledge transfer between very different pre-trained language models (different datasets, different architectures, etc.) with the aim of increasing final model performance, whereas our work approaches prompt recycling as a way to avoid re-training prompts when a frozen model is updated. This difference in motivation is reflected in the difference in approaches. Their proposed approaches generally require task-specific data and paired prompts, or require training a prompt transformer using M t as part of the calculation, which incurs costs on par with re-training the prompt from scratch on the target model. In contrast, our methods only require the embeddings of the source and target models, and can be reused across tasks.
Methods
All our experiments follow three steps: 1) Train a source prompt P s using the source model M s for some task T . 2) "Recycle" the prompt using some function r, learned via the correspondence between the models, that transforms the prompt into one that works on the target model M t , r(P s ) → P t . This transformation also handles any changes in the prompt size required when moving between models of different sizes. 3) Evaluate the recycled prompt P t with M t on the held out split of task T . Eval(M t , P t , T ).
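The end-to-end procedure can be summarized in a short driver. This is only a sketch: train_prompt, recycle, and evaluate are illustrative placeholders standing in for prompt tuning on the source model, one of the recyclers described below, and rank-classification evaluation on the held-out split.

```python
def recycle_and_evaluate(source_model, target_model, task, recycle, train_prompt, evaluate):
    """Three-step recycling experiment: train on M_s, transform, evaluate on M_t."""
    p_source = train_prompt(source_model, task.train)   # step 1: learn P_s with M_s
    p_target = recycle(p_source)                         # step 2: r(P_s) -> P_t, no updates with M_t
    return evaluate(target_model, p_target, task.eval)   # step 3: Eval(M_t, P_t, T)
```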
Initially, we compare recycling to "re-tuning", training a new prompt directly on the M t , for headroom analysis. Given that we found a substantial gap, we judge our recycling methods by how consistently they are able to deliver an improvement over the zero-shot baseline. The zero-shot baseline is the performance of the target model on task T without any task-specific training or prompts, Eval(M t , T ), relying only on knowledge gained during pre-training.
Due to the large variance in zero-shot performance (see Table 1) across models, we also compare our recycling performance to the baseline of using a random prompt. The components of our random prompts are drawn from a Gaussian distribution with mean of zero and a standard deviation of 16, Eval(M t , P r , T ) where P ri ∼ N (µ = 0, σ = 16). We selected 16 from a grid search over σ ∈ {4, 8, 16, 32, 64}, and found this to be a surprisingly strong baseline. See Appendix A.5 for additional details and performance.
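A minimal sketch of this random-prompt baseline is shown below; the sampling matches the stated N(0, 16) distribution, and the function name and seed are illustrative.

```python
import numpy as np

def sample_random_prompt(prompt_length, embed_dim, sigma=16.0, seed=0):
    """Random baseline prompt: each entry drawn from a zero-mean Gaussian."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=sigma, size=(prompt_length, embed_dim))
```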
Recycling Methods
We propose two methods for recycling prompts, both based on correspondences between the embedding representations of tokens across the two models. Our hypothesis is that a recycler trained to map embeddings from Ms to Mt can also be used to map prompts. This assumes that 1) prompts are similar to token embeddings, as they are fed into the model the same way, 2) there are structural similarities in the relationships between tokens in the embedding spaces of different models, and 3) the relationship between prompt representations and token embeddings is similar across models.
Vocab to Vocab Transformations (v2v): A mapping between the vocabulary embeddings of two models can be learned and subsequently applied to a prompt. Let V s and V t represent the vocabulary embeddings of the source and target models M s and M t . We wish to find a function f such that f (V s ) = V t and then estimate the target prompt:
P t = f (P s )
v2v-lin: In this method, we parameterize f as a linear projection and use least squares to solve for a matrix Y such that Y V s = V t . We then estimate P t = Y P s . v2v-nn: In this method, we parameterize f with a small neural network, mapping the source embedding of size E s to the target embedding of size E t , using a hidden dimension of 4 * E t and ReLU activations (Fukushima, 1975;Nair and Hinton, 2010).
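The linear variant can be sketched in a few lines of NumPy; this is only an illustration under the assumption of row-major embedding matrices (one token per row), so the fitted matrix W plays the role of Y transposed. The neural v2v-nn variant would replace this linear map with the small MLP described above.

```python
import numpy as np

def v2v_lin_recycle(V_s, V_t, P_s):
    """v2v-lin: fit a linear map from source to target embeddings, then apply it to the prompt.

    V_s: (n_tokens, E_s) source token embeddings.
    V_t: (n_tokens, E_t) target token embeddings (same tokens, same order).
    P_s: (prompt_len, E_s) source prompt. Returns an estimated (prompt_len, E_t) prompt.
    """
    # Least-squares solution of V_s @ W ~= V_t.
    W, *_ = np.linalg.lstsq(V_s, V_t, rcond=None)
    return P_s @ W
```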
Linear Combination (lin-comb): Another approach is to represent the source prompt P s as a linear combination of vocabulary embeddings:
V s X = P s
Once we solve for X, we use the same linear combination on the target embedding vectors, for the corresponding tokens, to generate the estimated target prompt:
P t = V t X
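A corresponding sketch of the lin-comb recycler is below, again assuming row-major embedding matrices; the mixing weights X are solved once against the source vocabulary and reused with the target vocabulary.

```python
import numpy as np

def lin_comb_recycle(V_s, V_t, P_s):
    """lin-comb: express the prompt as a mix of source token embeddings, reuse the mix on the target.

    Solves V_s^T X = P_s^T in the least-squares sense (one column of mixing weights
    per prompt position), then forms P_t = (V_t^T X)^T with the same weights.
    """
    X, *_ = np.linalg.lstsq(V_s.T, P_s.T, rcond=None)   # (n_tokens, prompt_len)
    return (V_t.T @ X).T                                 # (prompt_len, E_t)
```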
Additional details about our recycling methods can be found in Appendix A.3 and implementations have been open-sourced 1 .
Models
All models we use are based on T5 1.1 lm100k, a version of T5 1.1 trained for an additional 100K steps with a language modeling objective, from Lester et al. (2021). We use the "Base" and "Large" size versions of the model for our cross-size recycling experiments. Additionally, we trained two more copies of T5 1.1 lm100k Base from scratch using T5X (Roberts et al., 2022), Flaxformer, Flax (Heek et al., 2020), and Jax (Bradbury et al., 2018) from different random seeds. Additional details of pre-training can be found in Appendix A.1.
In the default setting, Prompt Tuning uses autoregressive generation to make predictions. The prompted model is allowed to generate arbitrary text which is then compared to a predefined answer string-a verbalizer (Schick and Schütze, 2021). In this setting, recycled prompts score zero as they tend to output illegal predictions, i.e. they generate strings that don't match any verbalizers.
Instead of using generation, we evaluate models with rank classification (Brown et al., 2020;Wei et al., 2022;Min et al., 2022;Sanh et al., 2022). The model is used to score each possible verbalizer, conditioned on the example input and the prompt. The highest ranking class is then selected as the model's prediction:
argmax_{y ∈ Y} Pr_{M_t}(y | P_t; X)
Thus we only care that the correct class is the most probable of the possible verbalizers, not that it is the most probable generation of all. Recycling success in this setting suggests that, while a prompt trained directly for a given model can learn to control its generated outputs, this ability isn't easily transferred across models.
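The scoring step can be sketched as below. This uses the Hugging Face transformers API rather than the T5X codebase used in the paper, the "t5-base" checkpoint is a placeholder, and prepending the recycled soft prompt to the input embeddings is omitted for brevity.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")            # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def rank_classify(text, verbalizers):
    """Score each verbalizer under the model and return the highest-ranked one."""
    inputs = tokenizer(text, return_tensors="pt")
    scores = []
    for v in verbalizers:
        labels = tokenizer(v, return_tensors="pt").input_ids
        with torch.no_grad():
            # .loss is the mean negative log-likelihood of the verbalizer tokens.
            loss = model(**inputs, labels=labels).loss
        scores.append(-loss.item() * labels.shape[-1])         # total log-likelihood
    return verbalizers[int(torch.tensor(scores).argmax())]

# Example: rank_classify("sst2 sentence: a gorgeous, witty film", ["positive", "negative"])
```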
Source Prompt Training
To explore how the initialization of the source prompt can influence its recyclability, we used "Random", "Class Label" (Lester et al., 2021), and "SPoT" (Vu et al., 2022) initialization methods. For SPoT initialization, a prompt pre-trained on language modeling is used as a starting point. Our two SPoT settings use prompts trained for 10K and 50K steps respectively. Additional details on the SPoT pre-training procedure can be found in Appendix A.2.

Table 1: Recycling lags behind the best-case performance of re-tuning directly on the target model, but shows gains at Base size over zero-shot performance and random prompts (E_Pr). Recycling from Base → Large slightly underperforms random prompts on average, but this is pulled down by a few very low values; Table 4 shows that in nearly all settings, the majority of recycling runs exceed the expected performance of random prompts. Results are aggregated over all target models of a given size. For Re-Tune and Recycle, we train prompts to 2K steps using class-init initialization and aggregate over 3 random seeds and 3 recycling methods. For E_Pr, 100 random prompts are sampled. The improvement of recycling over random prompts is statistically significant (p < 0.05). All results are presented as Accuracy_StdDev.
We also explore using the initialization and training scheme from Khashabi et al. (2022). The prompt is initialized using the embedded representation of a string that describes the task, and a regularization loss during training encourages the learned prompt parameters to remain close to that starting point. We refer to this method as "Wayward" initialization. The intuition is that, since our recycling methods are grounded in the mapping between model vocabularies, recycling may be more effective if we can keep the source prompts on the manifold of the source model's token embeddings. In this setting, we use the words from the text prompt as training data for the recycler. While the "Wayward" methodology also includes changes to training in addition to initialization, we evaluate it by comparing it with other initialization methods.
Additionally, we explore recycling source prompts at various points during their training, specifically after 2, 5, 10, and 20 thousand steps. Vu et al. (2022) found that prompts trained beyond 10K steps were less useful for predicting task similarity, suggesting that continued training may result in a prompt overfit to the current task and model. Thus, we hypothesise that prompts from earlier in training will be more transferable.
For each initialization method, we train three prompts per source model, and recycle these to each target model. Details about the training of the source prompts can be found in Appendix A.4. These trained prompts were also used to calculate the "Re-tune" headroom analysis in Table 1.
Vocabulary Selection
Rather than use all 32,000 SentencePiece (Kudo and Richardson, 2018) embeddings for Vs or Vt, we use a subset. T5 1.1 was pre-trained exclusively on English data from C4 (Raffel et al., 2020) but shares a vocabulary with the original T5, which supplemented unsupervised pre-training with machine translation. Thus the vocabulary contains non-English tokens (German, French, and Romanian) that have likely never been updated during training. We filter non-English tokens by removing tokens with a cld3 confidence less than 0.8. Some English subword tokens get removed as well, leaving 16,779 embeddings. The final list of filtered tokens is available in our open-source release. Wendlandt et al. (2018) and Antoniak and Mimno (2018) observe instability in the local neighborhoods of embeddings between training runs, especially for high frequency tokens. Given that SentencePiece tokens are ordered by frequency, we skip the first 1,000 tokens after filtering, as they are more likely to be unstable between models. We then use the next 4,000 tokens as the data points to train our recyclers.
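A sketch of this token selection is given below. The language-identification call is abstracted behind a caller-supplied function (the paper uses cld3; any detector returning an English confidence can stand in), and the function name and defaults are illustrative.

```python
def select_recycler_tokens(vocab_pieces, english_confidence, skip=1000, keep=4000):
    """Pick the token ids used to fit a recycler.

    vocab_pieces: SentencePiece strings indexed by token id (frequency-ordered).
    english_confidence: callable returning a confidence that the piece is English.
    """
    english_ids = [i for i, piece in enumerate(vocab_pieces)
                   if english_confidence(piece) >= 0.8]
    # Skip the most frequent (least stable) tokens, then take the next block.
    return english_ids[skip:skip + keep]
```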
Datasets
We investigate recycling using two sentiment analysis datasets, SST2 (Socher et al., 2013) and IMDB (Maas et al., 2011). Early experiments explored using QQP (Iyer et al., 2017) and ReCoRD (Zhang et al., 2018), but found that the target model's zero-shot performance must be non-trivial (better than the naive majority-class baseline) for recycling to work. More information about each task as well as the verbalizers used can be found in Appendix A.6.

Table 2: How often recycled prompts exceed zero-shot (ZS) and random prompt (E_Pr) baselines. We aggregate over all source and target models and all recycling methods, for 108 recycled prompts per row. Prompts trained with Random initialization are far less likely to be successfully recycled. SPoT offers small gains in robustness for SST2, but underperforms on IMDB. Bold and underline mark the best and second-best methods.
Experimental Results
There is a large gap between the top-line performance of re-tuning a prompt directly on the target model and a recycled prompt. Table 1 shows that even in the strongest recycling settings, there is still a 15 point gap. However, we do see that recycling prompts yields stronger results than using the target model for zero-shot inference or using a random prompt. This shows that recycling between models-including ones with different sizes-is possible but difficult, and increasing the performance of recycled prompts should be a focus of future research. Due to the size of this gap, the rest of our work focuses on improving the reliability of prompt recycling methods. Extra details about the Zero-Shot and Random results can be found in Appendices A.7 and A.5 respectively.
Source Prompt Initialization
First, we explore how source prompt initialization affects the reliability of recycling. As reported in Table 2, we find that random initialization results in far lower recyclability on SST2 than the other methods. As such, we do not include random initialization results in our analysis going forward. We also explore using a pre-trained SPoT prompt for source prompt initialization. Table 2 shows that SPoT initialization was successful on SST2. Given this success, we also explored SPoT initialization for IMDB, but found it less successful. Going forward, results using SPoT initialization are included in aggregations of initialization strategies.
We explored using Wayward initialization and training. By initializing prompts trained on different models with a shared text string, regularizing the trained prompt toward that initial value, and using the embeddings of those tokens to learn the recycler, we hope to create a shared space between the two models, making recycling easier. Due to the training vocabulary token selection (selecting only the tokens included in the text prompt), only the v2v-nn recycler works-prompts recycled with the other methods tend to have NaN values. We see in Table 3 that Wayward initialization yields slightly more consistent recycling; however, the resulting prompts are often much weaker compared to successful recycling of prompts trained with class initialization. Given the minimal improvement to robustness, we do not explore Wayward initialization in other settings or include it in further analysis.

Figure 2 shows that recycling performance has large variance when using few token embeddings to learn the recycler r. It also shows that performance trends downward as more and more tokens are used, possibly due to the relative rarity of these later tokens. We see that our chosen number of tokens, 4,000, is well placed between these trends, although 5,000 may have worked better. We fit the v2v-lin recycler with a variable number of token embeddings ranging from 100 to 12,000 with a step size of 100. This recycler is then applied to prompts trained with Class Label, SPoT, or Wayward initialization on the SST2 dataset. Additionally, we include results for Class Label initialization using a recycler that skips the first 1,000 tokens.

As training continues, a prompt begins to see repeated input-output pairs as training loops through the dataset over multiple epochs, and the prompt becomes more and more specialized for solving this specific task in conjunction with this specific frozen model. Figure 3 shows that recycling prompts later in training results in decreased reliability of recycling. Essentially, the source prompt begins to overfit to the quirks of the specific source model it was trained with. By 2K steps, trained prompts have converged: they are within 2.25% of the maximum performance they will obtain and begin to show increasing loss on the validation set with stagnating accuracy, suggesting the model is becoming more and more confident in its wrong answers.
Recycling Method Settings

Table 4 shows that different recyclers have very different performance profiles in different settings. We see that v2v-nn is the best method for Base → Base recycling while the other methods have poor performance. In the settings where the source and target models are different sizes, we see that the other methods are stronger, especially for the SST2 dataset.

Recycling Across Model Sizes
We also explore recycling between models of different sizes. Table 5 shows that Base → Large recycling results in a prompt that is stronger than the expected performance of random prompts over half the time; however, the mean performance of recycling is less than the mean performance of random prompts (see Table 1). This is due to the larger variance of Base → Large recycling. In the cases where recycling is stronger it is just a bit stronger, but in the cases where it is worse, it is a lot worse. We see an exaggerated version of this same trend when comparing the Zero-Shot results on SST2. Table 5 shows that Base → Large recycling is much more likely to beat the Zero-Shot baseline than Base → Base recycling. This appears to be an artifact of the low Zero-Shot performance of the Large models, as it does not hold when comparing how often each setting beats the expected random performance, where Base → Large is much weaker on IMDB. Zhong et al. (2021) ask if soft prompts perform well because they are better at extracting knowledge already contained within the frozen model or if they are just used as a method to memorize specific information about the training data. Table 6 shows that recycling from Large → Base is more robust than Base → Base. This suggests that either prompts trained with Large models memorize their task-specific information in a way that is easier to transfer to a Base model, or that Large prompts leverage information stored in the Large model in a way that translates to knowledge extraction for Base models. This finding suggests that better recycling methods may be able to act similarly to distillation methods and transfer knowledge from prompts trained on larger models to smaller ones.
Conclusion
We have demonstrated that recycling prompts between different models is possible, but difficult. A learned transformation based on structural similarities between the models can be applied to the prompt to create a new prompt, tailored to the target model, that infuses extra information useful for some task beyond the knowledge built into the target model. This manifests as recycled prompts having stronger performance than the target model's zero-shot performance. Additionally, prompts in the restricted area of the possible prompt space dictated by the transformation of a source prompt tend to have stronger performance than randomly sampled prompts. However, recycling is difficult as the final performance of recycled prompts is still far below Re-tuned prompts, trained in conjunction with the target model.
We have proposed three different recycling methods and found that different recyclers work better when applied to different model and source prompt combinations. We found that Base → Large recycling is unstable, that Large → Base recycling is more robust and produces stronger prompts than Base → Base recycling, and that recyclability tends to decrease the more a prompt is trained.
These successes, and even more the failures with respect to pure performance, demonstrate that prompt recycling, and the idea of correspondences between similar models that it is based on, is an exciting research direction where many improvements can still be made.

Acknowledgements

We thank Rami Al-Rfou for their feedback on this work.

A Appendix

A.1 Pre-training Hyperparameters

Two new versions of T5 1.1 Base lm100k were trained using the T5X framework. They were initially trained on C4 using the Span Corruption (Raffel et al., 2020) objective with an average span size of 3 and 15% of tokens removed as part of a span on average. Training was done for 1 million steps using a batch size of 256. Input examples were trimmed to 512 tokens while outputs were trimmed to 114. All optimization parameters use the defaults from T5X. Afterwards, the models were trained a further 100K steps using the Language Model objective. Here a batch size of 128 is used and inputs and targets are trimmed to 1024 and 128 respectively. All pre-training was done on 64 TPUv3s.
A.2 SPoT Pre-training Hyperparameters
SPoT initialization vectors were pre-trained on the Language Modeling objective used to adapt T5 1.1 models in Lester et al. (2021). Sequences of length 512 were randomly divided into input and target sequences with a maximum length of 128.
The model is fed the input and must predict the target. Token-level cross-entropy loss is used to calculate gradients, and updates are only applied to the prompt parameters. The Adafactor (Shazeer and Stern, 2018) optimizer is used with hyperparameters matching Lester et al. (2021) (constant learning rate of 0.3, weight decay of 1e-5, and parameter scaling turned off). The SPoT prompt is pre-trained for 50K steps, and the prompts from steps 10K and 50K are used in our experiments.
A single SPoT prompt was trained for each frozen model. The SPoT prompt has a length of 100 and was initialized using token embeddings sampled from the first 5,000 tokens in the model's vocabulary. SPoT prompts for the Base sized models were trained on 8 TPUv2s while Large sized models used 16 TPUv3s.
A.3 Recycling Method Training
The v2v-nn recycler was trained with JAX and Flax using the Adam (Kingma and Ba, 2015) optimizer with a batch size of 50 and a learning rate of 3e-4 for 25 epochs.
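As an illustration of what such a recycler can look like, here is a minimal sketch of a neural vocab-to-vocab mapping trained with Flax and optax. The architecture (a single hidden layer with a ReLU), the regression loss, and all names are our assumptions; this is not the paper's exact network.

```python
# Sketch: train a small network to map source token embeddings to target token
# embeddings, then apply it to a trained source prompt.
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

class Recycler(nn.Module):
    target_dim: int
    hidden: int = 2048  # assumed hidden width

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(self.target_dim)(x)

def fit_recycler(src_emb, tgt_emb, rng, epochs=25, batch_size=50, lr=3e-4):
    model = Recycler(target_dim=tgt_emb.shape[-1])
    params = model.init(rng, src_emb[:1])
    tx = optax.adam(lr)
    opt_state = tx.init(params)

    @jax.jit
    def step(params, opt_state, xb, yb):
        def loss_fn(p):
            return jnp.mean((model.apply(p, xb) - yb) ** 2)  # assumed L2 regression loss
        loss, grads = jax.value_and_grad(loss_fn)(params)
        updates, opt_state = tx.update(grads, opt_state)
        return optax.apply_updates(params, updates), opt_state, loss

    n = src_emb.shape[0]
    for _ in range(epochs):
        for i in range(0, n, batch_size):
            params, opt_state, _ = step(params, opt_state,
                                        src_emb[i:i + batch_size],
                                        tgt_emb[i:i + batch_size])
    return model, params

# Recycling a trained source prompt is then a single forward pass:
#   recycled_prompt = model.apply(params, source_prompt)   # (prompt_len, target_dim)
```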
The v2v-lin recycler was fit using tf.linalg.lstsq (Abadi et al., 2015). This function was also used to find the linear combination of embeddings that approximates the prompt in the lin-comb recycling method.
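A minimal sketch of the least-squares fit is shown below. Only the tf.linalg.lstsq call itself comes from the description above; the variable names and shapes are assumptions for illustration.

```python
# Sketch of a v2v-lin-style recycler fit with least squares.
# Assumed shapes: src_emb (n_tokens, src_dim), tgt_emb (n_tokens, tgt_dim),
# source_prompt (prompt_len, src_dim).
import tensorflow as tf

src = tf.constant(src_emb, dtype=tf.float32)
tgt = tf.constant(tgt_emb, dtype=tf.float32)

# Least-squares solve of  src @ W ≈ tgt  for a linear map W of shape (src_dim, tgt_dim).
W = tf.linalg.lstsq(src, tgt)

# Applying the same map to a trained source prompt yields the recycled prompt
# for the target model, of shape (prompt_len, tgt_dim).
recycled_prompt = tf.matmul(tf.constant(source_prompt, dtype=tf.float32), W)
```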
A.4 Source Prompt Training Hyperparameters
All source prompts are 100 tokens long and were trained using the best hyperparameters from Lester et al. (2021) (constant learning rate of 0.3, weight decay of 1e-5, and parameter scaling turned off). Prompts were trained for 20K steps with a batch size of 128. Input examples were trimmed to 512 tokens and outputs to 16 (note, all verbalizers were shorter than this limit). Three source prompts were trained for each frozen model and initialization strategy pair. Differences between runs due to the random seed include the initial prompt value and the order of training examples. Source prompts for the Base sized models were trained on 8 TPUv2s while Large sized models used 16 TPUv3s.
The different initialization strategies used the following hyperparameters:
Random Initialization: The prompt is initialized from a uniform random distribution between −0.5 and 0.5.
Class Initialization:
The prompt is initialized from the embeddings of the possible class verbalizers (where the embeddings for subword tokens are averaged together in the case of a verbalizer being tokenized into multiple SentencePieces). Additional prompt tokens are initialized with token embeddings sampled from the first 5,000 tokens in the model's vocabulary.
SPoT 10K/50K: The prompt is initialized with a SPoT prompt after 10K or 50K steps of SPoT pre-training. See Appendix A.2 for details on the SPoT pre-training procedure.
Wayward: The prompt is initialized with the SentencePiece embeddings of the following string: "Classify this movie review based on its sentiment . Use 2 classes . One positive ( for reviews that paint the movie in a favorable light ) and one negative ( for reviews that make you not want to see the movie or think it will be bad ) . Use the string ' positive ' for the positive class , the good / great movies , and use the string ' negative ' for the negative class , the bad movies ." Spaces were added around punctuation to match the text pre-processing used for SST2. This string is tokenized into 100 tokens by the T5 vocabulary. During training, the prompt parameters were regularized towards this initial representation with a squared L2 distance loss term, normalized by the prompt length L,
L_wayward = || P_s − Embed(prompt) ||_2^2 / L
This loss term is then scaled by the parameter γ = 0.01.
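A sketch of how this regularizer can be added to the task loss is shown below. The names and the exact placement of the length normalization follow our reading of the description above, not the original implementation.

```python
# Minimal sketch of a Wayward-style regularized loss.
import jax.numpy as jnp

def wayward_loss(task_loss, prompt_params, init_embedding, gamma=0.01):
    """prompt_params and init_embedding are (L, dim); gamma follows the text above."""
    L = prompt_params.shape[0]                           # prompt length
    reg = jnp.sum((prompt_params - init_embedding) ** 2) / L
    return task_loss + gamma * reg
```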
A.5 Random Prompts
Random Prompt experiments involved drawing a random prompt P_r and using that random prompt with the target model for task evaluation, Eval(M_t, P_r, T). The elements of the random prompt are drawn from a Gaussian distribution with a mean of µ = 0 and a standard deviation of σ = 16, thus P_r,i ∼ N(µ = 0, σ = 16).
The standard deviation of σ = 16 was selected via a grid search over σ ∈ {4, 8, 16, 32, 64}. Analysis of trained prompts has shown that the prompt parameters tend to have large values and norms, therefore we made sure to include large values in our grid search.
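For illustration, drawing such a random baseline prompt is a one-liner; the function and argument names below are ours.

```python
# Sketch: sample a random prompt with the distribution described above.
import numpy as np

def random_prompt(prompt_len=100, dim=768, sigma=16.0, seed=0):
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=sigma, size=(prompt_len, dim))
```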
The expected performance of random prompts shows how easy it is to find a value that results in strong downstream performance. If the recycled prompt is better than random prompts, it demonstrates that recycling from source prompts finds an area of prompt space that produces stronger results, and that recycling was able to transfer useful information. Table 7 shows the average performance of random prompts for each model.
A.6 Dataset Details
All datasets used are the versions distributed via TensorFlow Datasets (TFDS). The datasets used either do not include a validation split or have non-public labels for their test split. In the latter case we use the public validation split as the test split, and in both cases we use the last N examples from the training split for validation. See Table 8 for dataset details. SST2 3 is a sentiment analysis dataset built with movie reviews from rottentomatoes.com. IMDB 4 is also a sentiment analysis dataset, but the reviews come from imdb.com. QQP 5 is a duplicate question detection dataset built from questions asked on quora.com. ReCoRD 6 asks a question about a provided news paragraph. The answer is an entity from a provided list. A single real-world entity may appear in the list multiple times as different surface forms. If this is the correct entity, all surface forms are considered correct.
As our models generate strings that correspond to a class (instead of a distribution of scores over possible classes), a verbalizer is chosen to represent each class. See Tables 9-12 for details of the chosen verbalizers and baseline model performance.
A.7 Zero-Shot Performance
Throughout this work, we compared a recycled prompt to the zero-shot performance of the target model. If the recycled prompt performs better than zero-shot, it means that recycling was able to transfer information encoded in the source prompt into the target prompt. Otherwise, we would not expect to score higher than the information already in the target model allows. Table 13 shows the zero-shot performance of our models on various datasets and Table 14 shows zero-shot performance aggregated over model size. We see non-trivial zero-shot performance on SST2 and IMDB, the datasets where recycling was the most successful. This suggests that the frozen target model must have some baseline capability in solving a task if recycling is expected to work.
It appears that these models have some ability to solve ReCoRD in a zero-shot manner, but this is actually due to bias in the dataset. In ReCoRD, shorter entities are slightly more likely to be correct. If length normalization is used during rank classification (removing the model's bias towards shorter outputs), zero-shot performance falls to the random guess baseline of 14.72.
A.8 Composability of Prompts and Generative Inference
In an effort to remove the requirement of rank classification, we explored composing a recycled prompt with a prompt that already knows how to control the output of the target model. Ideally, the recycled prompt would include information on how to decide which verbalizer represents the correct class given the current input, and the control prompt would be used to make the target model output the actual text of the verbalizer. First, a prompt was trained with M_t on a modified version of the C4 dataset. The input text was the same as the Language Modeling objective, but the target was one of the possible verbalizers for SST2 (positive or negative), uniformly sampled. Thus this prompt, P_v, is trained to always output one of the verbalizers, regardless of the input. Only 500 steps of training were required to learn the verbalizer distribution. Then a source prompt P_s was recycled to M_t, yielding P_t. This recycled prompt was then composed with P_v, through concatenation or broadcasted addition, (P_t • P_v). Finally, this prompt is used in conjunction with the target model on the task, Eval(M_t, P_t • P_v, T), using autoregressive generation instead of rank classification.
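The two composition operators can be sketched as follows; the shapes and function names are assumptions for illustration.

```python
# Sketch of composing a recycled prompt P_t with a control prompt P_v,
# assuming both are (prompt_len, dim) arrays.
import numpy as np

def compose_concat(p_t, p_v):
    # Concatenation along the length axis: the frozen model sees both prompts in sequence.
    return np.concatenate([p_t, p_v], axis=0)

def compose_add(p_t, p_v):
    # Broadcasted addition: element-wise sum of the two prompts.
    return p_t + p_v
```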
Composing P_v with P_t did change the model's output from all illegal predictions (often just ".", resulting in an accuracy of zero) when using just P_t, to valid verbalizers when using the composition. Without an explicit method to ensure the prompts were compatible, the composite prompt tended to output only a single class. However, when the composite prompt did output the other class, it was always correct. This suggests that it may be possible to design future prompt training and recycling methods where parameters that encode task information and model output control are explicitly separated and later combined during recycling.
A.9 Significance Testing
All tests for statistical significance use Welch's t-test (Welch, 1947) as implemented in SciPy (Virtanen et al., 2020) using the scipy.stats.ttest_ind_from_stats function. We use p < 0.05 as the threshold for statistical significance.
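For example, comparing the two rows of Table 3 from their summary statistics looks like the sketch below; the per-row sample size of 36 comes from the Table 3 caption, and the call arguments are otherwise illustrative.

```python
# Sketch: Welch's t-test from summary statistics, as used for significance testing.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(mean1=69.2, std1=7.7, nobs1=36,   # Class init row
                              mean2=63.9, std2=7.3, nobs2=36,   # Wayward init row
                              equal_var=False)                  # Welch's t-test
significant = result.pvalue < 0.05
```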
A.10 Graphs
All graphs were made with Seaborn (Waskom, 2021) and Matplotlib (Hunter, 2007); bands represent a 95% confidence interval.
Figure 1: We explore methods of building a task-agnostic Prompt Recycler (r) capable of transforming prompts (P_s) tuned for a source model (M_s) into prompts (P_t) that can be used with a target model (M_t).
Figure 2: Recycling performance as a function of words used to train the recycler. Each point represents four v2v-lin recycling runs, aggregating over Class Label, SPoT (10K, 50K), and Wayward initialization. The range from 3,000 to 5,000 words delivers relatively high performance and low variance.
Figure 3: Recyclability drops as the source prompt is trained longer. Results are aggregated over all source and target models of a given size, all possible recycling methods, two initialization methods (class-init, SPoT), and two datasets (SST2, IMDB).
Table 1:

Target Model  Dataset  Re-Tune     Zero-Shot   Recycle     E_Pr
Base          SST2     92.3 ±0.3   59.2 ±6.6   64.7 ±8.9   56.3 ±5.2
Base          IMDB     94.2 ±0.2   65.6 ±8.2   67.1 ±8.3   62.8 ±4.1
Large         SST2     95.5 ±0.3   75.0        69.6 ±7.6   70.8 ±4.4
Large         IMDB     96.1 ±0.1   77.2        80.3 ±3.4   81.0 ±0.5
Table 2: How often recycled prompts exceed zero-shot (ZS) and random prompt (E_Pr) baselines. We aggregate over all source and target models and all recycling methods, for 108 recycled prompts per row. Prompts trained with Random initialization are far less likely to be successfully recycled. SPoT offers small gains in robustness for SST2, but underperforms on IMDB. Bold and underline mark the best and second-best methods.

Dataset  Initialization  >ZS (%)  >E_Pr (%)
SST2     Random          29.6     47.22
SST2     Class           50.0     66.67
SST2     SPoT 10K        50.0     58.33
SST2     SPoT 50K        51.9     69.44
IMDB     Class           72.2     65.74
IMDB     SPoT 10K        45.4     58.33
IMDB     SPoT 50K        51.9     59.26
Table 3: Wayward initialization often beats Zero-Shot but falls short of random and lags on final performance. We aggregate over all source and target models, using the v2v-nn recycler on SST2, giving 36 recycled prompts per row. Wayward's low accuracy when successful also means it rarely beats random prompts, which score quite well for some models. The difference in Accuracy is statistically significant (p < 0.05).

Initialization  >ZS (%)  Acc. ±StdDev  >E_Pr (%)
Class           55.6     69.2 ±7.7     66.7
Wayward         61.1     63.9 ±7.3     41.2
Source  Target  Recycler  Dataset  >ZS (%)  >E_Pr (%)
Base    Base    v2v-nn    SST2     63.0     64.8
Base    Base    v2v-nn    IMDB     55.6     64.8
Base    Base    lin-comb  SST2     35.2     50.0
Base    Base    lin-comb  IMDB     38.9     64.8
Base    Base    v2v-lin   SST2     35.2     50.0
Base    Base    v2v-lin   IMDB     38.9     66.7
Base    Large   v2v-nn    SST2     55.6     70.4
Base    Large   v2v-nn    IMDB     77.8     44.4
Base    Large   lin-comb  SST2     66.7     77.8
Base    Large   lin-comb  IMDB     96.3     55.6
Base    Large   v2v-lin   SST2     70.4     77.8
Base    Large   v2v-lin   IMDB     92.6     55.6
Large   Base    v2v-nn    SST2     48.2     77.8
Large   Base    v2v-nn    IMDB     37.0     51.9
Large   Base    lin-comb  SST2     51.9     74.1
Large   Base    lin-comb  IMDB     51.9     66.7
Large   Base    v2v-lin   SST2     48.2     74.1
Large   Base    v2v-lin   IMDB     55.6     66.7
Table 4: Reliability of three recyclers across various settings. Results are aggregated over source and target models and class-init and SPoT initialization methods, giving 54 recycled prompts for each Base → Base row, and 27 prompts elsewhere. The v2v-nn recycler is best for Base → Base transfer, while the other methods are more reliable when transferring across model sizes. Bold marks the best method for each dataset per block.

Dataset  Target  >ZS (%)  >E_Pr (%)
SST2     Base    44.4     54.9
SST2     Large   64.2     75.3
IMDB     Base    44.4     65.4
IMDB     Large   88.9     51.8

Table 5: Recycling from Base → Large is inconsistent. There are large improvements over Base → Base recycling for SST2 but not when comparing to random prompts for IMDB. Results are aggregated over all source and target models of a given size, all recycling methods, and class-init and SPoT initialization methods. Results with Base as the target size include 162 recycled prompts, while results with Large include 81.
Table 6: Recycling from Large → Base is more reliable than Base → Base recycling, especially for SST2. Results are aggregated over all source and target models of a given size, all recycling methods, and class-init and SPoT initialization methods. Results with Base as the source size include 162 recycled prompts while results with Large include 81. Bold results show where the gain from Large recycling is statistically significant (p < 0.05).
Continuous Prompts. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3631-3643, Seattle, United States. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In The 3rd International Conference on Learning Representations, ICLR 2015, Conference Track Proceedings, San Diego, CA, USA.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.

Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, pages 625-635, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online. Association for Computational Linguistics.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation. arXiv:1309.4168 [cs].

Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022. MetaICL: Learning to Learn In Context. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Vinod Nair and Geoffrey E. Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML'10, pages 807-814, Madison, WI, USA. Omnipress.
Table 7: Random Prompt performance of various frozen models. Base 1 and Base 2 are the versions of T5 1.1 lm100k Base that were trained in T5X while all other models were originally trained in MeshTF and converted to the T5X format. Results calculated from draws of 100 random prompts and presented as Acc. ±StdDev.
Table 8: The TensorFlow Datasets version number and train/validation/test split sizes for each dataset used. Our validation sets were created by taking the last N examples from the train split. TFDS maintains class label distributions in each sample when sliced like this.

Table 9: Verbalizers used for SST2 and the evaluation accuracy if a model were to only select that label.

Verbalizer  Single-Class Accuracy
positive    50.92
negative    49.08
Table 10: Verbalizers used for IMDB and the evaluation accuracy if a model were to only select that label.

Verbalizer  Single-Class Accuracy
positive    50.0
negative    50.0

Table 11: Verbalizers used for QQP and the evaluation accuracy if a model were to only select that label. The default verbalizer for the majority class in the T5 codebase is not_duplicate. We found that using a verbalizer with so many rare SentencePiece tokens caused issues manifesting as very low zero-shot performance; therefore, we use unique as it is a more common word and it is tokenized into the same number of SentencePieces as duplicate.

Verbalizer  Single-Class Accuracy
duplicate   36.82
unique      63.18
Table 12: Evaluation accuracy if a model makes random guesses. ReCoRD doesn't have a single set of possible verbalizers. Instead, a list of entities is provided for each example which are used as possible classes for that example. Above is the expected performance if a model were to randomly guess from that list.

Verbalizer   Random Guess Accuracy
Entity Name  14.72

Table 13: Zero-Shot performance of various frozen models. Base 1 and Base 2 are the versions of T5 1.1 lm100k Base that were trained in T5X while all other models were originally trained in MeshTF and converted to the T5X format.

Model   SST2  IMDB  QQP   ReCoRD
Base    66.5  74.0  62.3  21.5
Base 1  53.9  65.3  63.2  22.8
Base 2  57.1  57.6  63.2  22.8
Large   75.0  77.2  N/A   N/A

Table 14: Zero-Shot performance of various frozen model sizes, aggregated over the multiple versions of the Base model.

Model  SST2       IMDB       QQP        ReCoRD
Base   59.2 ±6.6  65.6 ±8.2  62.2 ±1.0  22.4 ±0.7
Large  75.0       77.2       N/A        N/A
1 https://github.com/google-research/prompt-tuning/tree/main/prompt_tuning/recycling
2 https://github.com/google/cld3
3 https://www.tensorflow.org/datasets/catalog/glue#gluesst2
4 https://www.tensorflow.org/datasets/catalog/imdb_reviews
5 https://www.tensorflow.org/datasets/catalog/glue#glueqqp
6 https://www.tensorflow.org/datasets/catalog/super_glue#super_gluerecord
Timo Schick and Hinrich Schütze. 2021. It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.

Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive Learning Rates with Sublinear Memory Cost. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596-4604. PMLR.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.

Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, and Jie Zhou. 2022. On Transferability of Prompt Tuning for Natural Language Understanding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3949-3969, Seattle, United States. Association for Computational Linguistics.

TFDS. TensorFlow Datasets, a collection of ready-to-use datasets. https://www.tensorflow.org/datasets.

Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261-272.

Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5039-5059, Dublin, Ireland. Association for Computational Linguistics.

Michael L. Waskom. 2021. Seaborn: Statistical Data Visualization. Journal of Open Source Software, 6(60):3021.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned Language Models Are Zero-Shot Learners. In International Conference on Learning Representations.

Bernard Lewis Welch. 1947. The generalization of 'student's' problem when several different population variances are involved. Biometrika, 34(1/2):28-35.

Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2092-2102.

Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension. arXiv preprint arXiv:1810.12885.

Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual Probing Is [MASK]: Learning vs. Learning to Recall. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5017-5033, Online. Association for Computational Linguistics.
| [
"https://github.com/google-research/",
"https://github.com/google/cld3"
] |
[
"The UNED systems at Senseval-2",
"The UNED systems at Senseval-2"
] | [
"David Fernández-Amorós \nDepto. de Lenguajes y Sistemas Informáticos\nUNED\n",
"Julio Gonzalo \nDepto. de Lenguajes y Sistemas Informáticos\nUNED\n",
"Felisa Verdejo felisa@lsi.uned.es \nDepto. de Lenguajes y Sistemas Informáticos\nUNED\n"
] | [
"Depto. de Lenguajes y Sistemas Informáticos\nUNED",
"Depto. de Lenguajes y Sistemas Informáticos\nUNED",
"Depto. de Lenguajes y Sistemas Informáticos\nUNED"
] | [
"Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL)"
] | We have participated in the Senseval-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task.Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge. | null | [
"https://arxiv.org/pdf/0910.5410v1.pdf"
] | 172,552 | 0910.5410 | f4a7fdd729f123254adb8c961c982d3c54ab8113 |
The UNED systems at Senseval-2
28 Oct 2009
David Fernández-Amorós
Depto. de Lenguajes y Sistemas Informáticos
UNED
Julio Gonzalo
Depto. de Lenguajes y Sistemas Informáticos
UNED
Felisa Verdejo felisa@lsi.uned.es
Depto. de Lenguajes y Sistemas Informáticos
UNED
The UNED systems at Senseval-2
Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL)
200128 Oct 2009
We have participated in the Senseval-2 English tasks (all words and lexical sample) with an unsupervised system based on mutual information measured over a large corpus (277 million words) and some additional heuristics. A supervised extension of the system was also presented to the lexical sample task.Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge.
Introduction
We advocate researching unsupervised techniques for Word Sense Disambiguation (WSD). Supervised techniques offer better results in general but the setbacks, such as the problem of developing reliable training data, are very considerable. Also there's probably more to WSD than blind machine learning (a typical approach, although such systems produce interesting baselines).
Within the unsupervised paradigm, we are interested in performing in-depth measures of the disambiguation potential of different sources of information.
We have previously investigated the informational value of semantic distance measures in (Fernández-Amorós et al., 2001).
For Senseval-2, we have turned to investigate pure coocurrence information as a source of disambiguation evidence. In essence, our system computes a matrix of mutual information for a fixed vocabulary and applies it to weight coocurrence counting between sense and context characteristic vectors.
In the next section we describe the process of constructing the relevance matrix. In section 3 we present the particular heuristics used for the competing systems. In section 4 we show the results by system and heuristic and some baselines for comparison. Finally in the last sections we draw some conclusions about the exercise.
The Relevance matrix
Corpus processing
Before building our systems we have developed a resource we've called the relevance matrix. The raw data used to build the matrix comes from the Project Gutenberg (PG) 1 .
At the time of the creation of the matrix the PG consisted of more than 3000 books of diverse genres. We have adapted these books for our purpose: first, language identification was used to filter books written in English; then we stripped off the disclaimers. The result is a collection of around 1.3Gb of plain text.
Finally we tokenize, lemmatize, strip punctuation and stop words and detect numbers and proper nouns.
Coocurrence matrix
We have built a vocabulary of the 20000 most frequent words (or labels, as we have changed all the proper nouns detected to the label PROPER NOUN and all numbers detected to NUMBER) in the text and a symmetric coocurrence matrix between these words within a context of 61 words (we thought a broad context of radius 30 would be appropriate since we are trying to capture vague semantic relations).
Relevance matrix
In a second step, we have built another symmetric matrix, which we have called the relevance matrix, using a mutual information measure between the words (or labels), so that for two words a and b, the corresponding entry would be P(a ∩ b) / (P(a) P(b)), where P(a) is the probability of finding the word a in a random context of a given size and P(a ∩ b) is the probability of finding both a and b in a random context of the fixed size. We've introduced a threshold of 2, below which we set the entry to zero for practical purposes. We think that this is a valuable resource that could be of interest for many other applications other than WSD. Also, it can only grow in quality since at the time of making this report the data in the PG has almost doubled in size.
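A minimal sketch of this computation is given below, assuming a precomputed symmetric co-occurrence count matrix over the 20,000-word vocabulary; the dense NumPy representation, the use of the diagonal to hold each word's own context count, and all names are assumptions for illustration.

```python
# Sketch: turn co-occurrence counts into the relevance (mutual-information) matrix.
import numpy as np

def relevance_matrix(cooc, n_contexts, threshold=2.0):
    occurrences = np.diag(cooc).astype(float)      # contexts containing each word (assumed)
    p_word = occurrences / n_contexts              # P(a)
    p_joint = cooc / n_contexts                    # P(a ∩ b)
    with np.errstate(divide="ignore", invalid="ignore"):
        rel = p_joint / np.outer(p_word, p_word)   # P(a ∩ b) / (P(a) P(b))
    rel = np.nan_to_num(rel)
    rel[rel < threshold] = 0.0                     # zero out entries below the threshold of 2
    return rel
```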
Cascade of heuristics
We have developed a very simple language in order to systematize the experiments. This language allows the construction of WSD systems composed of different heuristics that are applied in cascade so that each word to be disambiguated is presented to the first heuristic, and if it fails to disambiguate, then the word is passed on to the second heuristic and so on. We can have several such systems running in parallel for efficiency reasons (the matrix has high memory requirements). Next we show the heuristics we have considered to build the systems • Monosemous expressions.
Monosemous expressions are simply unambiguous words in the case of the all words English task. In the case of the lexical sample English task, however, the annotations include multiword expressions. We have implemented a multiword term detector that considers the multiword terms from WordNet's index.sense file and detects them in the test file using a multilevel backtracking algorithm that takes account of the inflected and base forms of the components of a particular multiword in order to maximize multiword detection. We tested this algorithm against the PG and found millions of these multiword terms.
We restricted ourselves to the multiwords already present in the training file since there are, apparently, multiword expressions that where overlooked during manual tagging (for instance the WordNet expression 'the good old days' is not handtagged as such in the test files)
• Statistical filter
WordNet comes with a file, cntlist, literally a 'file listing number of times each tagged sense occurs in a semantic concordance', so we use this to compute the relative probability of a sense given a word (approximate in the case of collections other than SemCor). Using this information, we eliminated the senses that had a probability under 10%, and if only one sense remains we choose it. Otherwise we go on to the next heuristic. In other words, we didn't apply complex techniques to words which are highly skewed in meaning 2.
• Relevance filter
This heuristic makes use of the relevance matrix. In order to assign a score to a sense, we count the coocurrences of words in the context of the word to be disambiguated with the words in the definition of the senses (the WordNet gloss tokenized, lemmatized and stripped out of stop words and punctuation signs) weighting each coocurrence by the entry in the relevance matrix for the word to be disambiguated and the word whose coocurrences are being counted, i.e., if s is a sense of the word α whose definition is S and C is the context in which α is to be disambiguated, then the score for s would be:
score(s) = Σ_{w ∈ C} R_{wα} · freq(w, C) · freq(w, S) · idf(w, α)

where idf(w, α) = log(N / d_w), with N being the number of senses for word α and d_w the number of sense glosses in which w appears; freq(w, C) is the frequency of word w in the context C and freq(w, S) is the frequency of w in the sense gloss S.
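The score can be computed with a straightforward loop over the context; the data structures below (counters for the context and gloss, a dictionary keyed by word pairs for the relevance entries) are assumptions made for illustration.

```python
# Sketch of the relevance-filter score for one sense.
from collections import Counter
import math

def sense_score(context_words, gloss_words, target_word, relevance, glosses_with, n_senses):
    """relevance[(w, target_word)] ~ R_wα; glosses_with[w] ~ d_w; n_senses ~ N."""
    context, gloss = Counter(context_words), Counter(gloss_words)
    score = 0.0
    for w, freq_c in context.items():
        d_w = glosses_with.get(w, 0)
        if gloss[w] == 0 or d_w == 0:
            continue
        idf = math.log(n_senses / d_w)
        score += relevance.get((w, target_word), 0.0) * freq_c * gloss[w] * idf
    return score
```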
The idea is to prime the occurrences of words that are relevant to the word being disambiguated and give low credit (possibly none) to the words that are incidentally used in the context. Also, in the all words task (where POS tags from the TreeBank are provided) we have considered only the context words that have a POS tag compatible with that of the word being disambiguated. By compatible we mean nouns and nouns, nouns and verbs, nouns and adjectives, verbs and verbs, verbs and adverbs and vice versa. Roughly speaking, words that can have an intra-phrase relation.
We also filtered out senses with low values in the cntlist file, and in any case we only considered at most the first six senses of a word.
• Enriching sense characteristic vectors
The relevance filter provided very good results in our experiments with SemCor and Senseval-1 data as far as precision is concerned, but the problem is that there is little overlapping between the definitions of the senses and the contexts in terms of coocurrence (after removing stop words and computing idf) which means that the previous heuristic didn't disambiguate many words.
To overcome this problem, we enrich the senses characteristic vectors adding for each word in the vector the words related to it via the relevance matrix weights. This corresponds to the algebraic notion of multiplying the matrix and the characteristic vector. In other words, if R is the relevance matrix and v our characteristic vector we would finally use Rv + v.
This should increase the number of words disambiguated provided we eliminate the idf factor (which would be zero in most cases because now the sense characteristic vectors are not as sparse as before). When we also discard senses with low relative frequency in SemCor we call this heuristic the mixed filter.
• back off strategies
For those cases that couldn't be covered by other heuristics we employed the first sense heuristic. In the case of the supervised system for the English lexical sample task we thought of using the most frequent sense but didn't implement it due to lack of time.
Systems and Results

• UNED-AW-U2

We won't delve into the UNED-AW-U system as it is very similar to this one. This is an (arguably) unsupervised system for the English all words task. The heuristics we used and the results obtained for each of them are shown in Table 1. In the lexical sample task, we weren't able to multiply by the relevance matrix due to time constraints, so in order to increase the coverage of the relevance filter heuristic we expanded the definitions of the senses with those of the first 5 levels of hyponyms. Also, we selected the radius of the context to be considered depending on the POS of the word being disambiguated. For nouns and verbs we used a 25 word radius neighbourhood and for adjectives 5 words at each side.
• UNED-LS-U This is essentially the same system as UNED-AW-U2, in this case applied to the lexical sample task. The results are displayed in Table 3.
• UNED-LS-T
This is the supervised system for the lexical sample task. It adds the examples to the definitions of the senses, giving the same weight to the definition and to all the examples as a whole (i.e. definitions are considered more interesting than examples).
Discussion and conclusions
We've put a lot of effort into making the relevance matrix but its performance in the WSD task is striking. The matrix is interesting and its application in the relevance filter heuristic is slightly better than simple coocurrence counting, which proves that it doesn't discard relevant words. The problem seems to lie in the fact that irrelevant words (with respect to the word to be disambiguated) rarely occur both in the context of the word and in the definition of the senses (if they appeared in the definition they wouldn't be so irrelevant), so the direct impact of the information in the matrix is very weak. Likewise, relevant (via the matrix) words with respect to the word to be disambiguated often occur both in the context and in the definitions, so the final result is very similar to simple coocurrence counting. This problem only showed up in the lexical sample task systems. In the all words systems we were able to enrich the sense definitions to make a more advantageous use of the matrix.

We were very confident that the relevance filter would yield good results as we had already evaluated it against the Senseval-1 and SemCor data. We felt however that we could improve the coverage of the heuristic by enriching the definitions, multiplying by the matrix. A similar approach was used by Yarowsky (Yarowsky, 1992) and Schütze (Schütze and Pedersen, 1995) and it worked for them. This wasn't the case for us; still, we think the resource is well worth researching other ways of using it.
As for the overall scores, the unsupervised lexical sample system obtained the highest recall of the unsupervised systems, which proves that carefully implementing simple techniques still pays off. In the all words task the UNED-AW-U2 system also had the highest recall among the unsupervised systems (as characterized in the Senseval-2 web descriptions), and the fourth overall. We'll train it with the examples in SemCor 1.6 and see how much we can gain.
Conclusions
Our system scored first among unsupervised systems in both tasks: 56.9% recall in all words, 40.2% in lexical sample. This is slightly worse than the first sense heuristic for all words and 3.6% better for the lexical sample, a strong indication that unsupervised Word Sense Disambiguation remains being a strong challenge.
Heuristic            Att.   Score    Prec    Rec
Monosemous exp       514    45500    88.5%   18.4%
Statistical filter   350    27200    77.7%   11.0%
Mixed filter         1256   50000    39.8%   20.2%
Enriched Senses      77     4300     55.8%   3.1%
First sense          249    13600    54.6%   5.5%
Total                2446   140600   57.5%   56.9%

Table 1: Unsupervised heuristics for the English all words task.

If the individual heuristics are used as standalone WSD systems we would obtain the results in Table 2.
Table 2: UNED-AW-U2 vs baselines.

System               Att.   Score     Prec    Recall
First sense          2405   146900    61.1%   59.4%
UNED-AW-U2           2446   140600    57.5%   56.9%
Mixed filter         2120   122600    57.8%   49.6%
Enriched senses      2122   108100    50.9%   43.7%
Random               2417   89191.2   36.9%   36.0%
Statistical filter   864    72700     84.1%   29.4%
Table 3: Unsupervised heuristics for the English lexical sample task.
Heuristic        Att.   Score    Prec    Recall
Relevance filt   4116   206150   50.1%   47.6%
First sense      208    9300     44.7%   2.1%
Total            4324   215450   49.8%   49.8%

Table 4: Supervised heuristics for the English lexical sample task.
1 http://promo.net/pg
2 Some people may argue that this is a supervised approach. In our opinion, the cntlist information does not make a system supervised per se, because a) it is standard information provided as part of the dictionary and b) we don't use the examples to feed or train any procedure.
D. Fernández-Amorós, J. Gonzalo, and F. Verdejo. 2001. The Role of Conceptual Relations in Word Sense Disambiguation. In Proceedings of the 6th International Workshop on Applications of Natural Language for Information Systems (NLDB), Madrid, volume 3 of LNI Series, pages 87-98. GI Publishers.

H. Schütze and J. Pedersen. 1995. Information Retrieval Based on Word Senses. In Proceedings of the 4th Annual Symposium on Document Analysis and Information Retrieval, pages 161-175.

David Yarowsky. 1992. Word-Sense Disambiguation using Statistical Models of Roget's Categories Trained on Large Corpora. In Proceedings of the International Conference on Computational Linguistics, COLING-92, pages 454-460, Nantes, France.
| [] |
[
"NPtool~ a detector of English noun phrases *",
"NPtool~ a detector of English noun phrases *"
] | [
"Atro Voutilainen avoutila@ling.helsinki.fi \nResearch Unit for Computational Linguistics\nP.O. Box 4 (Keskuskatu\n\nUniversity of Helsinki\nFIN-00014Finland\n"
] | [
"Research Unit for Computational Linguistics\nP.O. Box 4 (Keskuskatu",
"University of Helsinki\nFIN-00014Finland"
] | [] | NPtool is a fast and accurate system for extracting noun phrases from English texts for the purposes of e.g. information retrieval, translation unit discovery, and corpus studies. After a general introduction, the system architecture is presented in outline. Then follows an examination of a recently written Constraint Syntax. Section 6 reports on the performance of the system. | null | null | 15,057,877 | cmp-lg/9502010 | 5250de3c025066cef3e9522ac9f6d5ec30e134af |
NPtool~ a detector of English noun phrases *
Atro Voutilainen avoutila@ling.helsinki.fi
Research Unit for Computational Linguistics
P.O. Box 4 (Keskuskatu
University of Helsinki
FIN-00014Finland
NPtool~ a detector of English noun phrases *
NPtool is a fast and accurate system for extracting noun phrases from English texts for the purposes of e.g. information retrieval, translation unit discovery, and corpus studies. After a general introduction, the system architecture is presented in outline. Then follows an examination of a recently written Constraint Syntax. Section 6 reports on the performance of the system.
Introduction
This paper outlines NPiool, a noun phrase detector.
At the heart of this modular system is reductionistic word-oriented morphosyntactic analysis that expresses head-modifier dependencies. Previous work on this approach, largely based on Karlsson's original proposal [Karlsson, 1990], is documented in [Karlsson et ai., forthcoming]. Let us summarise a few key features of this style of analysis.
• As most parsing frameworks, also the present style of analysis employs a lexicon and a grammar. What may distinguish the present approach from most other frameworks is the considerable degree of attention we pay to the morphological and lexical description: morphological analysis is based on an extensive and detailed description that employs inflectional and central derivational categories as well as other morphosyntactic features that can be useful for stating syntactic generalisations. In this way "This paper is based on a longer manuscript with the same title. The development of ENGCG wLs supported by TEKES, the Finnish Technological Development Center, •nd a part of the work on Finite-state syntax hu been supported by the Academy of Finland. a carefully built and informative lexicon facilitates the construction of accurate, wide-coverage parsing grammars.
• We use tags to encode morphological distinctions, part of speech, and also syntactic information; for instance:
I     PRON      @HEAD
see   V PRES    @VERB
a     ART       @>N
bird  N         @HEAD
.     FULLSTOP
In this type of analysis, each word is provided with tags indicating e.g. part of speech, inflection, derivation, and syntactic function.
• Morphological and syntactic descriptions are based on hand-coded linguistic rules rather than on corpus-based statistical models. They employ structural categories that can be found in descriptive grammars, e.g. [Quirk, Greenbaum, Leech and Svartvik, 1985].
Regarding the at times heated methodological debate on whether statistical or rule-based information is to he preferred in grammatical analysis of running text (cf. e.g. [Sampson, 1987a;Taylor, Grover and Briscoe, 1989;Church, 1992]), we do not object to probabilistic methods in principle; nevertheless, it seems to us that rule-based descriptions are preferable bemuse they can provide for more accurate and reliable analyses than current probabilistic systems, e.g. part-of-speech taggers [Voutilainen, Heikkil~ and Anttila, 1992;Voutilainen, forthcoming a]. I Proba-IConsider for instance the question posed in [Church, 1992] whether lexical probabilities contribute more to morphological or parLor-speech disambiguation than context does. The ENGCG morphological disambiguator, which is entirely based on context rules, uniquely bilistic or heuristic techniques may still be a useful add-on to linguistic information, if potentially remaining ambiguities must be resolved -though with a higher risk of error.
• In the design of our grammar schemes, we have paid considerable attention to the question on the resolvability of grammatical distinctions. In the design of accurate parsers of running text, this question is very important: if the description abounds with distinctions that can be dependably resolved only with extrasyntactic knowledge ~, then either the ambiguities due to these distinctions remain to burden the structure-based parser (as well as the potential application based on the analysis), or a guess, i.e. a misprediction, has to be hazarded.
This descriptive policy brings with it a certain degree of shallowness; in terms of information content, a tag-based syntactic analysis is somewhere between morphological (e.g. part-of-speech) analysis and a conventional syntactic analysis, e.g. a phrase structure tree or a feature-based analysis. What we hope to achieve with this compromise in information content is the higher reliability of the proposed analyses. A superior accuracy could be considered as an argument for postulating a new, 'intermediary' level of computational syntactic description. For more details, see e.g. [Voutilainen and Tapanainen, 1993].
• Our grammar schemes are also learnable: according to double-blind experiments on manually assigning morphological descriptions, a 100% interjudge agreement is typical [Voutilainen, forthcoming a].³
• The ability to parse running text is of a high priority. Not only a structurally motivated description is important; in the construction of the parsing grammars and lexica, attention should also be paid to corpus evidence. Often a grammar rule, as we express it in our parsing grammars, is formed as a generalisation 'inspired' by corpus observations; in this sense the parsing grammar is corpus-based. However, the description need not be restricted to the corpus observation: the linguist is likely to generalise over past experience, and this is not necessarily harmful, as long as the generalisations can also be validated against representative test corpora.

² Witness, for instance, ambiguities due to adverbial attachment or modifier scope.

³ The 95% interjudge agreement rate reported in [Church, 1992] probably indicates that in the case of debatable constructions, explicit descriptive conventions have not been consistently established. Only a carefully defined grammar scheme makes the evaluation of the accuracy of the parsing system a meaningful enterprise (see also [Sampson, 1987b]).
• At least in a practical application, a parsing grammar should assign the best available analysis to its input rather than leave many of the input utterances unrecognised, e.g. as ill-formed. This does not mean that the concept of well-formedness is irrelevant for the present approach. Our point is simply this: although we may consider some text utterance as deviant in one respect or another, we may still be interested in extracting as much information as possible from it, rather than ignore it altogether. To achieve this effect, the grammar rules should be used in such a manner⁴ that no input becomes entirely rejected, although the rules as such may express categorical restrictions on what is possible or well-formed in the language.
• In our approach, parsing consists of two main kinds of operation:

1. Context-insensitive lookup of (alternative) descriptions for input words
2. Elimination of unacceptable or contextually illegitimate alternatives
Morphological analysis typically corresponds to the lookup module: it produces the desired morphosyntactic analysis of the sentence, along with a number of inappropriate ones, by providing each word in the sentence with all conventional analyses as a list of alternatives. The grammar itself exerts the restrictions on permitted sequences of words and descriptors. In other words, syntactic analysis proceeds by way of ambiguity resolution or disambiguation: the parser eliminates ill-formed readings, and what 'survives' the grammar is the (syntactic) analysis of the input utterance. Since the input contains the desired analysis, no new structure will be built during syntactic analysis itself.
• Our grammars consist of constraints: partial distributional definitions of morphosyntactic categories, such as parts of speech or syntactic functions. Each constraint expresses a piecemeal linear-precedence generalisation about the language, and the constraints are independent of each other. That is, the constraints can be applied in any order: a true grammar will produce the same analysis, whatever the order.
The grammarian is relatively free to select the level of abstraction at which (s)he is willing to express the distributional generalisation. In particular, reference to very low-level categories is also possible, and this makes for the accuracy of the parser: while the grammar will contain more or less abstract, feature-oriented rules, often it is also expedient to state further, more particular restrictions on more particular distributional classes, even at the word-form level. These 'smaller' rules do not contradict the more general rules; often it is simply the case that further restrictions can be imposed on smaller lexical classes.⁵ This flexibility in the grammar formalism greatly contributes to the accuracy of the parser [Voutilainen, forthcoming a; Voutilainen, forthcoming 1993].

⁴ e.g. by ranking the grammar rules in terms of compromisability
2 Uses of a noun phrase parser
The recognition and analysis of subclausal structural units, e.g. noun phrases, is useful for several purposes. Firstly, a noun phrase detector can be useful for research purposes: automatic large-scale analysis of running text provides the linguist with better means to conduct e.g. quantitative studies over large amounts of text. An accurate though somewhat superficial analysis can also serve as a 'preprocessor' prior to more ambitious, e.g. feature-based, syntactic analysis. This kind of division of labour is likely to be useful for technical reasons. One major problem with e.g. unification-based parsers is parsing time. Now if a substantial part of the overall problem is resolved with more simple and efficient techniques, the task of the unification-based parser will become more manageable. In other words, the more expressive but computationally heavier machinery of e.g. the unification-based parser can be reserved entirely for the analysis of the descriptively hardest problems. The less complex parts of the overall problem can be tackled with more simple and efficient techniques.
Regarding production uses, even lower levels of analysis can be directly useful. For instance, the detection of noun phrases can provide e.g. information management and retrieval systems with a suitable input for index term generation.
Noun phrases can also serve as translation units; for instance, [van der Eijk, 1993] suggests that noun phrases are more appropriate translation units than words or part-of-speech classes.
3 Previous work
This section consists of two subsections. Firstly, a performance-oriented survey of some related systems is presented. Then follows a more detailed presentation of ENGCG, a predecessor of the NPtool parser, in an information retrieval system.
3.1 Related systems
So far, I have found relatively little documentation on systems whose success in recognising or parsing noun phrases has been reported. I am aware of three systems with some relevant evaluations.
Church's Parts of speech [Church, 1988] performs not only part-of-speech analysis, but it also identifies the simplest kinds of noun phrases - mostly sequences of determiners, premodifiers and nominal heads - by inserting brackets around them, e.g.
⁵ Consider for instance the attachment of prepositional phrases in general and of of-phrases in particular.
[A/AT former/AP top/NN aide/NN] to/IN [Attorney/NP/NP General/NP/NP Edwin/NP/NP Meese/NP/NP] interceded/VBD ...
The appendix in [Church, 1988] lists the analysis of a small text. The performance of the system on the text is quite interesting: of 243 noun phrase brackets, only five are omitted. The performance of Parts of speech was also very good in part-of-speech analysis on the text: 99.5% of all words got the appropriate tag. The mechanism for noun phrase identification relies on the part-of-speech analysis; the part-of-speech tagger was more successful on this text than on average; therefore the average performance of the system in noun phrase identification may not be quite as good as the figures in the appendix of the paper suggest.
Bourigault's LECTER [Bourigault, 1992] is a surface-syntactic analyser that extracts 'maximal-length noun phrases' - mainly sequences of determiners, premodifiers, nominal heads, and certain kinds of postmodifying prepositional phrases and adjectives - from French texts for terminology applications. The system is reported to recognise 95% of all maximal-length noun phrases (43,500 out of 46,000 noun phrases in the test corpus), but no figures are given on how much 'garbage' the system suggests as noun phrases. It is indicated, however, that manual validation is necessary.
Rausch, Norrback and Svensson [1992] have designed a noun phrase extractor that takes as its input part-of-speech analysed Swedish text, and inserts brackets around noun phrases. In the recognition of 'Nuclear Noun Phrases' - sequences of determiners, premodifiers and nominal heads - the system was able to identify 85.9% of all nuclear noun phrases in a text collection, some 6,000 words long in all, whereas some 15.7% of all the suggested noun phrases were false hits, i.e. the precision⁶ of the system was 84.3%. The performance of a real application would probably be lower because potential misanalyses due to previous stages of analysis (morphological analysis and part-of-speech disambiguation, for instance) are not accounted for by these figures.
3.2 ENGCG and the SIMPR project

SIMPR, Structured Information Management: Processing and Retrieval, was a 64 person-year ESPRIT II project (Nr. 2083, 1989-1992), whose objective was to develop new methods for the management and retrieval of large amounts of electronic texts. A central function of such a system is to recognise those words in the stored texts that represent it in a concise fashion - in short, index terms.
Term indices created with traditional methods⁷ are based on isolated, perhaps truncated words.

⁶ For definitions of the terms recall and precision, see Section 6.
⁷ See e.g. [Salton and McGill, 1983].
These largely string-based statistical methods are somewhat unsatisfactory because many content identifiers consist of word sequences: compounds, head-modifier constructions, even simple verb - noun phrase sequences. One of the SIMPR objectives was also to employ more complex constructions, the recognition of which would require a shallow grammatical analysis. The Research Unit for Computational Linguistics at the University of Helsinki participated in this project, and ENGTWOL, a TWOL-style morphological analyser, as well as ENGCG, a Constraint Grammar of English, were written in 1989-1992 by Voutilainen, Heikkilä and Anttila [forthcoming]. The resultant SIMPR system is an improvement over previous systems [Smart (Ed.), forthcoming]: it is not only reasonably accurate, but it also operates on more complex constructions, e.g. postmodifying constructions and simple verb-object constructions. There were also some persistent problems. The original plan was to use the output of the whole ENGCG parser for the indexing module. However, the last module of the three sequential modules in the ENGCG grammar, namely Constraint Syntax proper, was not used in the more mature versions of the indexing module; only lexical analysis and morphological disambiguation were applied. The omission of the syntactic analysis was mainly due to the somewhat high error rate (3-4% of all words lost the proper syntactic tag) and the high rate of remaining ambiguities (15-25% of all words remained syntactically ambiguous).
Here, we will not go into a detailed analysis of the problems⁸; suffice it to say that the syntactic grammar scheme was unnecessarily ambitious for the relatively simple needs of the indexing application. One of the improvements in NPtool is a more optimal syntactic grammar scheme, as will be seen in Section 5.1.
4 NPtool in outline
In this section, the architecture of NPtool is presented in outline. Here is a flow chart of the system:

    Preprocessing
         |
    Morphological analysis
         |
    Constraint Grammar parsing
         |                          |
    NP-hostile finite-        NP-friendly finite-
    state parsing             state parsing
         |                          |
    NP extraction             NP extraction
         |                          |
    Intersection of noun phrase sets
         |
    output data

In the rest of this section, we will observe the analysis of the following sample sentence, taken from a car maintenance manual:
The inlet and exhaust manifolds are mounted on opposite sides of the cylinder head, the exhaust manifold channelling the gases to a single exhaust pipe and silencer system.
4.1 Preprocessing and morphological analysis

The input ASCII text, preferably SGML-annotated, is subjected to a preprocessor that e.g. determines sentence boundaries, recognises fixed syntagms⁹, normalises certain typographical conventions, and verticalises the text.
This preprocessed text is then submitted to morphological analysis. ENGTWOL, a morphological analyser of English, is a Koskenniemi-style morphological description that recognises all inflections and central derivative forms of English. The present lexicon contains some 56,000 word stems, and altogether the analyser recognises several hundreds of thousands of different word-forms. The analyser also employs a detailed parsing-oriented morphosyntactic description; the feature system is largely derived from [Quirk, Greenbaum, Leech and Svartvik, 1985].
Here is a small sample:
("<*the>" ("the" DET CENTRAL ART SG/PL (©>7))) ("<inlet>" ("inlet" N lfOM SG)) ("<and>"
("and" cc (ecc))) ( "<exhaust>" ("exhaust" <SVO> V SUBJUNCTIVE VFIN (~V)) ("exhaust" <SVO> V IMP VFIN (~V)) ("exhaust" <SVO> V INF) ("exhaust" <SVO> V PRE$ -SG3 VFIN (@V)) ("exhaust" N NOM SG)) ( "<manif olds>" ("manifold" N NOM PL))
All running-text word-forms are given on the left-hand margin, while all analyses are on indented lines of their own. The multiplicity of these lines for a word-form indicates morphological ambiguity. For words not represented in the ENGTWOL lexicon, there is a 99.5% reliable utility that assigns ENGTWOL-style descriptions. These predictions are based on the form of the word, but also some heuristics are involved.
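To make the cohort format concrete, here is a small illustrative sketch in Python (not part of ENGTWOL or NPtool; the data is transcribed from the sample above) that represents each running-text word-form together with its alternative readings, ambiguity simply being a cohort with more than one reading:

# A minimal stand-in for ENGTWOL-style output: each cohort pairs a
# word-form with its alternative readings (base form plus tags).
cohorts = [
    ("the",       [("the", ["DET", "CENTRAL", "ART", "SG/PL", "@>N"])]),
    ("inlet",     [("inlet", ["N", "NOM", "SG"])]),
    ("and",       [("and", ["CC", "@CC"])]),
    ("exhaust",   [("exhaust", ["V", "SUBJUNCTIVE", "VFIN", "@V"]),
                   ("exhaust", ["V", "IMP", "VFIN", "@V"]),
                   ("exhaust", ["V", "INF"]),
                   ("exhaust", ["V", "PRES", "-SG3", "VFIN", "@V"]),
                   ("exhaust", ["N", "NOM", "SG"])]),
    ("manifolds", [("manifold", ["N", "NOM", "PL"])]),
]

def is_ambiguous(cohort):
    """A cohort is ambiguous when the lookup left more than one reading."""
    _, readings = cohort
    return len(readings) > 1

for word, readings in cohorts:
    status = "ambiguous" if is_ambiguous((word, readings)) else "unambiguous"
    print(f"{word:10s} {len(readings)} reading(s) -> {status}")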
4.2 Constraint Grammar parsing

The next main stage in NPtool analysis is Constraint Grammar parsing. Parsing consists of two main phases: morphological disambiguation and Constraint Syntax.
⁹ e.g. multiword prepositions and compounds
• Morphological disambiguation. The task of the morphological disambiguator is to discard all contextually illegitimate morphological readings in ambiguous cohorts. For instance, consider the following fragment:
("<aT" ("a" <Indef> DET CESTRAL ART SG (a>S))) ( "<s ingle>"
("single" <SVO> V IMP VFIS (av)) ("single" <SVO> V IIF)
("single" A ABS))
Here an unambiguous determiner is directly followed by a three-way ambiguous word, two of the analyses being verb readings, and one an adjective reading.
A determiner is never followed by a verb¹⁰; one of the 1,100-odd constraints in the disambiguation grammar [Voutilainen, forthcoming a] expresses this fact about English grammar, so the verb readings of single are discarded here. The morphological disambiguator seldom discards an appropriate morphological reading: after morphological disambiguation, 99.7-100% of all words retain the appropriate analysis. On the other hand, some 3-6% of all words remain ambiguous, e.g. head in this sentence. There is also an additional set of some 200 constraints; after the application of both constraint sets, 97-98% of all words become fully disambiguated, with an overall error rate of up to 0.4% [Voutilainen, forthcoming b]. The present disambiguator compares quite favourably with other known, typically probabilistic, disambiguators, whose maximum error rate is as high as 5%, i.e. some 17 times as high as that of the ENGCG disambiguator.
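Purely as an illustration of how such a constraint operates (the actual grammar states its 1,100-odd constraints in the Constraint Grammar rule formalism, not in a programming language), the rule 'a determiner is never followed by a verb' can be sketched as a function that discards verb readings after an unambiguous determiner; the helper names below are invented for the example:

# Hypothetical sketch of one disambiguation constraint.  Each reading is a
# set of tags; a cohort is the list of readings still alive for one word.

def unambiguous_det(cohort):
    return len(cohort) == 1 and "DET" in cohort[0]

def apply_det_not_before_verb(sentence):
    """Discard verb readings directly after an unambiguous determiner."""
    for prev, cohort in zip(sentence, sentence[1:]):
        if unambiguous_det(prev):
            kept = [r for r in cohort if "V" not in r]
            # A constraint never removes the last reading of a cohort.
            if kept:
                cohort[:] = kept
    return sentence

sentence = [
    [{"a", "DET", "ART", "SG"}],                      # unambiguous determiner
    [{"single", "V", "IMP"}, {"single", "V", "INF"},  # verb readings ...
     {"single", "A", "ABS"}],                         # ... and an adjective reading
]
apply_det_not_before_verb(sentence)
print(sentence[1])   # only the adjective reading survives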
• Constraint syntax. After morphological disambiguation, the syntactic constraints are applied.
In the NPtool syntactic description, all syntactic ambiguities are introduced directly in the lexicon, so no extra lookup module is needed. Like disambiguation constraints, syntactic constraints seek to discard all contextually illegitimate syntactic function tags.
Here is the syntactic analysis of our sample sentence, as produced by the current parser. To save space, most of the morphological codes are omitted.
("<*the>" ("the" O~ (©>N))) ("<inlet>"
("inlet" i (@>I ~NH))) ("<and>" ("and" CC (eCC))) ( "<exhaus 1;>" ("exhaust" I (@>N))) ("<manifolds>" ("manifold" I (aIll))) ("<are>*' l°save for no, which can be followed by an -ing-form; d. no in There is no going home ("be" V (av))) ("<mounted>"
("mount" PCP2 (av))) ( "<on>" ("on" PREP (aAH))) ("<opposite>" ("opposite" A (a>S))) ("<sides>"
("side" S CASH))) (*'<of>" ("of" PREP (©S<))) ("<the>" ("the" DET (a>N))) ("<cylinder>" ("cylinder" I (a>s asH))) ( "<head>"
("head" V (av)) ("head" S (aSH))) (,,<$,>,') ("<the>" ("the" DET Ca>S))) ("<exhaust>" ("exhaust" N (©>S))) ("<manifold>" ("manifold" N (aSH))) ("<channelling>"
("char-el" PCP1 (av))) ("<the>'° ("the" DET (a>I))) ("<gases>" ("gas" I (aNH))) ("<¢o>" ("to" PREP (aAH))) ("<a>" ("a" DET (a>I))) ("<single>" ("single" A Ca>I))) ("<oxhaust>" ("exhaust" I Ca>I))) ("<pips>" ("pipe" X (aN'H))) ( "<and>" ("and" cc (acc))) ("<silencer>" ("silencer" I Ca>N))) ("<system>" ("system" N (@NH))) (,,<$.>,,)
All syntactic-function tags are flanked with '@'. For instance, the tag '@>N' indicates that the word is a determiner or a premodifier of a nominal in the right-hand context (e.g. the). The second word, inlet, remains syntactically ambiguous due to a premodifier reading and a nominal head @NH reading; note that the ambiguity is structurally genuine, a coordination ambiguity. The tag @V is reserved for verbs and auxiliaries, cf. are as well as mounted. The syntactic description will be outlined below.
Pasi Tapanainen¹¹ has recently made a new implementation of the Constraint Grammar parser that performs morphological disambiguation and syntactic analysis at a speed of more than 1,000 words per second on a Sun SparcStation 10, Model 30.
4.3 Treatment of remaining ambiguities

The Constraint Grammar parser recognises only word-level ambiguities; therefore some of the traversals through an ambiguous sentence representation may be blatantly ill-formed.
NPtool eliminates locally unacceptable analyses by using a finite-state parser [Tapanainen, 1991]¹² as a kind of 'post-processing module' that distinguishes between competing sentence readings. The parser employs a small finite-state grammar that I have written. The speed of the finite-state parser is comparable to that of the Constraint Grammar parser.
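Because Constraint Grammar leaves some word-level ambiguity behind, a sentence corresponds to several complete traversals. As a rough sketch only (the real finite-state parser does not enumerate readings this naively), the candidate sentence readings can be thought of as the Cartesian product of the remaining cohort alternatives:

from itertools import product

# Remaining word-level ambiguity for "... the cylinder head ..." (simplified):
cohorts = {
    "the":      ["the/@>N"],
    "cylinder": ["cylinder/@>N", "cylinder/@NH"],
    "head":     ["head/@V", "head/@NH"],
}

# Every traversal through the ambiguous cohorts is one candidate sentence reading;
# a sentence-level grammar would reject the ill-formed combinations.
for reading in product(*cohorts.values()):
    print(" ".join(reading))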
The finite-state parser produces all sentence readings that are in agreement with the grammar. Consider the following two adapted readings from the beginning of our sample sentence:
the/@>N inlet/@>N and/@CC exhaust/@>N manifolds/@NH are/@V mounted/@V
on/@AH opposite/@>N sides/@NH of/@N< the/@>N cylinder/@NH head/@V

the/@>N inlet/@>N and/@CC exhaust/@>N manifolds/@NH are/@V mounted/@V
on/@AH opposite/@>N sides/@NH of/@N< the/@>N cylinder/@>N head/@NH
The only difference is in the analysis of cylinder head: the first analysis reports cylinder as a noun phrase head which is followed by the verb head, while the second analysis considers cylinder head as a noun phrase. Now the last remaining problem is how to deal with ambiguous analyses like these: should cylinder be reported as a noun phrase, or is cylinder head the unit to be extracted?
The present system provides all proposed noun phrase candidates in the output, but each with an indication of whether the candidate noun phrase is unambiguously analysed as such, or not. In this solution, I do not use all of the multiple analyses proposed by the finite-state parser. For each sentence, no more than two competing analyses are selected for further processing: one with the highest number of words as part of a maximally long noun phrase analysis, and the other with the lowest number of words as part of a maximally short noun phrase analysis.
This 'weighing' can be done during finite-state parsing: the formalism employs a mechanism for imposing penalties on regular expressions, e.g. on tags.
¹¹ Research Unit for Computational Linguistics, University of Helsinki.
¹² For other work in this approach, see also [Koskenniemi, 1990; Koskenniemi, Tapanainen and Voutilainen, 1992; Voutilainen and Tapanainen, 1993].
A penalised reading is not discarded as ungrammatical; rather, the parser returns all accepted analyses in an order where the least penalised analyses are produced first and the 'worst' ones last.
Thus there is an 'NP-hostile' finite-state parser that penalises noun phrase readings; this would prefer the sentence reading with cylinder/@NH head/@V. The 'NP-friendly' parser, on the other hand, penalises all readings which are not part of a noun phrase reading, so it would prefer the analysis with cylinder/@>N head/@NH. Of all analyses, the selected two parses are maximally dissimilar with regard to NP-hood. The motivation for selecting maximally conflicting analyses in this respect is that a candidate noun phrase that is agreed upon as a noun phrase by the two finite-state parsers just as it is (neither longer nor shorter) is likely to be an unambiguously identified noun phrase. The comparison of the outputs of the two competing finite-state parsers is carried out during the extraction phase.
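The penalty mechanism belongs to the finite-state formalism itself, but its effect can be imitated by scoring each accepted sentence reading by how many of its words carry noun-phrase-internal tags and keeping the two extremes. The following toy sketch (illustrative data and tag set, not the actual grammar) shows the idea:

# Toy imitation of the NP-friendly / NP-hostile selection: among the accepted
# sentence readings, keep the one with the most words inside noun phrases and
# the one with the fewest.

NP_TAGS = {"@>N", "@NH", "@N<"}        # tags that place a word inside an NP

def np_word_count(reading):
    return sum(1 for token in reading if token.split("/")[-1] in NP_TAGS)

readings = [
    ["the/@>N", "cylinder/@NH", "head/@V"],    # 'cylinder' ends the NP, 'head' is a verb
    ["the/@>N", "cylinder/@>N", "head/@NH"],   # 'cylinder head' is one NP
]

np_friendly = max(readings, key=np_word_count)
np_hostile  = min(readings, key=np_word_count)
print("NP-friendly:", np_friendly)
print("NP-hostile: ", np_hostile)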
4.4 Extraction of noun phrases
An unambiguous sentence reading is a linear sequence of symbols, and extracting noun phrases from this kind of data is a simple pattern matching task.
In the present version of the system, I have used the gawk program, which allows the use of regular expressions. With gawk's gsub function, the boundaries of the longest non-overlapping expressions that satisfy the search key can be marked. If we formulate our search query as something like the following schematic regular expression

    [M>N+ [CC M>N+]*]* HEAD [N< [D/M>N+ [CC D/M>N+]*]* HEAD]

where '[' and ']' are used for grouping, '+' stands for one or more occurrences of its argument, '*' stands for zero or more occurrences of its argument, M>N stands for premodifiers, D/M>N stands for determiners and premodifiers, HEAD stands for nominal heads except pronouns, and N< stands for prepositions starting a postmodifying prepositional phrase, and do some additional formatting and 'cleaning', the above two finite-state analyses will look like the following¹³:

    the
        np: inlet and exhaust manifold
    ...

¹³ Note that the noun phrase heads are here given in the base form, hence the absence of the plural form of e.g. 'manifold'.

The proposed noun phrases are given on indented lines, each marked with the symbol 'np:'. The candidate noun phrases are then subjected to further routines: all candidate noun phrases with at least one occurrence in the output of both the NP-hostile and NP-friendly parsers are labelled with the symbol 'ok:', and the remaining candidates are labelled as uncertain, with the symbol '?:'. From the outputs given above, the following list can be produced:
ok: inlet and exhaust manifold
ok: exhaust manifold
ok: gas
ok: single exhaust pipe
ok: silencer system
?:  opposite side of the cylinder
?:  opposite side of the cylinder head
The linguistic analysis is relatively neutral as to what is to be extracted from it. Here we have concentrated on noun phrase extraction, but from this kind of input, also many other types of construction could be extracted, e.g. simple verb-argument structures.
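For readers who prefer a runnable counterpart of the gawk step, the sketch below uses Python regular expressions over word/tag tokens; the pattern is a deliberately simplified version of the schematic query above (premodifiers plus a head only, no postmodifying prepositional phrases), and the intersection of the NP-hostile and NP-friendly outputs yields the 'ok:'/'?:' labelling. The data and function names are invented for the example.

import re

# Tokens are word/TAG pairs as produced by the finite-state parser.
NP_PATTERN = re.compile(r"((?:\S+/@>N )*\S+/@NH)")   # premodifiers* + head

def extract_nps(reading):
    """Return the longest non-overlapping NP matches, words stripped of their tags."""
    text = " ".join(reading) + " "
    return [" ".join(tok.split("/")[0] for tok in m.split())
            for m in NP_PATTERN.findall(text)]

hostile  = ["the/@>N", "cylinder/@NH", "head/@V", "and/@CC",
            "a/@>N", "silencer/@>N", "system/@NH"]
friendly = ["the/@>N", "cylinder/@>N", "head/@NH", "and/@CC",
            "a/@>N", "silencer/@>N", "system/@NH"]

nps_hostile, nps_friendly = set(extract_nps(hostile)), set(extract_nps(friendly))

# Candidates found in both outputs are 'ok:', the rest remain uncertain '?:'.
for np in sorted(nps_hostile | nps_friendly):
    label = "ok:" if np in nps_hostile and np in nps_friendly else "?:"
    print(label, np)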
5 The syntactic description

This section outlines the syntactic description that I have written for NPtool purposes. The ENGTWOL lexicon and the disambiguation constraints will not be described further in this paper; they have been documented extensively elsewhere (see the relevant articles in Karlsson & al. [forthcoming]). According to the SIMPR experiences, the vast majority of index terms represent relatively few constructions. By far the most common construction is a nominal head with optional, potentially coordinated premodifiers and postmodifying prepositional phrases, typically of-phrases. The remainder, less than 10%, consists almost entirely of relatively simple verb-NP patterns.
The syntactic description used in SIMPR employed some 30 dependency-oriented syntactic function tags, which differentiate (to some extent) between various kinds of verbal constructions, syntactic functions of nominal heads, and so on. Some of the ambiguity that survives ENGCG parsing is in part due to these distinctions [Anttila, forthcoming].
The relatively simple needs of an index term extraction utility on the one hand, and the relative abundance of distinctions in the ENGCG syntactic description on the other, suggest that a less distinctive syntactic description might be more optimal for the present purposes: a more shallow description would entail less remaining ambiguity without unduly compromising its usefulness e.g. for an indexing application.
5.1 Syntactic tags

I have designed a new syntactic grammar scheme that employs seven function tags. These tags capitalise on the opposition between noun phrases and other constructions on the one hand, and between heads and modifiers, on the other. Here we will not go into details; a gloss with a simple illustration will suffice.
• @V represents auxiliary and main verbs as well as the infinitive marker to in both finite and non-finite constructions. For instance:
She should/@V know/@V what to/@V do/@V.
• @NH represents nominal heads, especially nouns, pronouns, numerals, abbreviations and -ing-forms. Note that of the adjectival categories, only those with the morphological feature <Nominal>, e.g. English, are granted the @NH status: all other adjectives (and -ed-forms) are regarded as too unconventional nominal heads to be granted this status in the present description. An example:
The English/@NH may like the conventional.
• @>N represents determiners and premodifiers of nominals (the angle bracket '>' indicates the direction in which the head is to be found). The head is the following nominal with the tag @NH, or a premodifier in between. An example: the/@>N fat/@>N butcher's/@>N wife

• @N< represents prepositional phrases that unambiguously postmodify a preceding nominal head. Such unambiguously postmodifying constructions are typically of two types: (i) in the absence of certain verbs like 'accuse', postnominal of-phrases, and (ii) preverbal NP-PP sequences, e.g.
The man in/@N< the moon had a glass of/@N< ale.
Currently the description does not account for other types of postmodifier, e.g. postmodifying adjectives, numerals, other nominals, or clausal constructions.
• @CC and @CS represent coordinating and subordinating conjunctions, respectively: Either/@CC you or/@CC I will go if/@CS necessary.
• @AH represents the 'residual': adjectival heads, adverbials of various kinds, adverbs (also intensifiers), and also those of the prepositional phrases that cannot be dependably analysed as a postmodifier. An example is in order:
There/@AH have always/@AH been very/@AH many people in/@AH this area.
5.2 Syntactic constraints
The syntactic grammar contains some 120 syntactic constraints. Like the morphological disambiguation constraints, these constraints are essentially negative partial linear-precedence definitions of the syntactic categories.
The present grammar is a partial expression of four general grammar statements:
1. Part of speech determines the order of determiners and modifiers.
2. Only likes coordinate.
3. A determiner or a modifier has a head.
4. An auxiliary is followed by a main verb.
We will give only one illustration of how these general statements can be expressed in Constraint Grammar. Let us give a partial paraphrase of the statement Part of speech determines the order of determiners and modifiers: 'A premodifying noun occurs closest to its head'. In other words, premodifiers from other parts of speech do not immediately follow a premodifying noun. Therefore, a noun in the nominative immediately followed by an adjective is not a premodifier. Thus a constraint in the grammar would discard the @>N tag of Harry in the following sample sentence, where Harry is directly followed by an unambiguous adjective:
("<*iU>" ("be" <SVC/N> <SVC/A> V PKES $G3 (@V))) ( "<*harry>" ("harry" <Proper> N N0M SG (eNH @>N))) ("<foolish>"
("foolish" £ £BS (@AH))) (,,<¢?>,,)
We require that the noun in question is a nominative because premodifying nouns in the genitive can occur also before adjectival premodifiers; witness Harry's in Harry's foolish self.
5.3 Evaluation
The present syntax has been applied to large amounts of journalistic and technical text (newspapers, abstracts on electrical engineering, manuals on car maintenance, etc.), and the analysis of some 20,000-30,000 words has been proofread to get an estimate of the accuracy of the parser.
After the application of the NPtool syntax, some 93-96% of all words become syntactically unambiguous, with an error rate of less than 1%¹⁴.
To find out how much ambiguity remains at the sentence level, I also applied an 'NP-neutral' version¹⁵ of the finite-state parser on a 25,500-word text from
The Grolier Electronic Encyclopaedia. The results are given in Figure 1. Some 64% (960) of the 1,495 sentences became syntactically unambiguous, while only some 2% of all sentence analyses contain more than ten readings, the worst ambiguity being due to 72 analyses. This compares favourably with the ENGCG performance: after ENGCG parsing, 23.5% of all sentences remained ambiguous due to a number of sentence readings greater than the worst case in NPtool syntax.
6 Performance of NPtool
Various kinds of metrics can be proposed for the evaluation of a noun phrase extractor; our main metrics are recall and precision, defined as follows¹⁶:
• Recall: the ratio 'retrieved intended NPs'¹⁷ / 'all intended NPs'
• Precision: the ratio 'retrieved intended NPs' / 'all retrieved NPs'

¹⁴ This figure also covers errors due to previous stages of analysis.
¹⁵ i.e. a parser which does not contain the mechanism for penalising or favouring noun phrase analyses; see Section 4.3 above.
¹⁶ This definition also agrees with that used in Rausch et al. [1992].
¹⁷ An 'intended NP' is the longest non-overlapping match of the search query given in the extraction phase.
To paraphrase, a recall of less than 100% indicates that the system missed some of the desired noun phrases, while a precision of less than 100% indicates that the system retrieved something that is not regarded as a correct result.
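Given these definitions, the two figures are straightforward to compute once the retrieved noun phrases have been checked against the intended ones; the small helper below is our own illustration, not code from the NPtool system:

def recall_precision(retrieved, intended):
    """Recall and precision of retrieved NPs against the intended NPs."""
    retrieved, intended = set(retrieved), set(intended)
    hits = retrieved & intended                      # retrieved intended NPs
    recall = len(hits) / len(intended) if intended else 1.0
    precision = len(hits) / len(retrieved) if retrieved else 1.0
    return recall, precision

r, p = recall_precision(
    retrieved=["exhaust manifold", "gas", "single exhaust pipe", "opposite side"],
    intended=["exhaust manifold", "gas", "single exhaust pipe"],
)
print(f"recall = {r:.2f}, precision = {p:.2f}")   # recall = 1.00, precision = 0.75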
The performance of the whole system has been evaluated against several texts from different domains. In all, the analysis of some 20,000 words has been manually checked.
If we wish to extract relatively complex noun phrases with optional coordination, premodifiers and postmodifiers (see the search query above in Section 4.4), we reach a recall of 98.5-100%, with a precision of some 95-98%.
As indicated in Section 4.4, the extraction utility annotates each proposed noun phrase as a 'sure hit' ('ok:') or as an 'uncertain hit' ('?:'). This distinction is quite useful for manual validation: approximately 95% of all superfluous noun phrase candidates are marked with the question mark.
7 Conclusion
In terms of accuracy, NPtool is probably one of the best in the field. In terms of speed, much remains to be optimised. Certainly the computationally most demanding tasks -disambiguation and parsing -are already carried out quite efficiently, but the more trivial parts of the system could be improved.
8 Acknowledgements

I wish to thank Krister Linden, Pasi Tapanainen and two anonymous referees for useful comments on an earlier version of this paper. The usual disclaimers hold.
Figure 1: Ambiguity rates after finite-state parsing in a text of 1,495 sentences (25,500 words). R indicates the number of analyses per sentence, and F indicates the frequency of these sentences.
⁸ See e.g. [Voutilainen, Heikkilä and Anttila, 1992] for details.
APPENDIX

Here is given the NPtool analysis of a small sample from the CACM text collection. Here is the input text:

The binary number system offers many advantages over a decimal representation for a high-performance, general-purpose computer. The greater simplicity of a binary arithmetic unit and the greater compactness of binary numbers both contribute directly to arithmetic speed. Less obvious and perhaps more important is the way binary addressing and instruction formats can increase the overall performance. Binary addresses are also essential to certain powerful operations which are not practical with decimal instruction formats. On the other hand, decimal numbers are essential for communicating between man and the computer. In applications requiring the processing of a large volume of inherently decimal input and output data, the time for decimal-binary conversion needed by a purely binary computer may be significant. A slower decimal adder may take less time than a fast binary adder doing an addition and two conversions. A careful review of the significance of decimal and binary addressing and both binary and decimal data arithmetic, supplemented by efficient conversion instructions.

Here is the list of noun phrases extracted by NPtool:

processing of a large volume of inherently decimal input
?: processing of a large volume of inherently decimal input and output data
References

[Anttila, forthcoming] Anttila, A. (forthcoming). How to recognise subjects in English. In Karlsson & al.
[Bourigault, 1992] Bourigault, D. 1992. Surface grammatical analysis for the extraction of terminological noun phrases. In Proceedings of the fifteenth International Conference on Computational Linguistics, COLING-92, Vol. III. Nantes, France. 977-981.
[Church, 1988] Church, K. 1988. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. Proceedings of the Second Conference on Applied Natural Language Processing, ACL. 136-143.
[Church, 1992] Church, K. 1992. Current Practice in Part of Speech Tagging and Suggestions for the Future. In Simmons (ed.), Sbornik praci: In Honor of Henry Kučera. Michigan Slavic Studies.
[van der Eijk, 1993] van der Eijk, P. 1993. Automating the acquisition of bilingual terminology. Proceedings of EACL'93. Utrecht, The Netherlands.
[Heikkilä, forthcoming a] Heikkilä, J. (forthcoming a). A TWOL-Based Lexicon and Feature System for English. In Karlsson & al.
[Heikkilä, forthcoming b] Heikkilä, J. (forthcoming b). ENGTWOL English lexicon: solutions and problems. In Karlsson & al.
[Karlsson, 1990] Karlsson, F. 1990. Constraint Grammar as a Framework for Parsing Running Text. In H. Karlgren (ed.), Papers presented to the 13th International Conference on Computational Linguistics, Vol. 3. Helsinki. 168-173.
[Karlsson, forthcoming] Karlsson, F. (forthcoming). Designing a parser for unrestricted text. In Karlsson & al.
[Karlsson et al., forthcoming] Karlsson, F., Voutilainen, A., Heikkilä, J. and Anttila, A. (forthcoming). Constraint Grammar: a Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter.
[Koskenniemi, 1990] Koskenniemi, K. 1990. Finite-state parsing and disambiguation. In Karlgren, H. (ed.), COLING-90. Papers presented to the 13th International Conference on Computational Linguistics, Vol. 2. Helsinki, Finland. 229-232.
[Koskenniemi, Tapanainen and Voutilainen, 1992] Koskenniemi, K., Tapanainen, P. and Voutilainen, A. 1992. Compiling and using finite-state syntactic rules. In Proceedings of the fifteenth International Conference on Computational Linguistics, COLING-92, Vol. I. Nantes, France. 156-162.
[Quirk, Greenbaum, Leech and Svartvik, 1985] Quirk, R., Greenbaum, S., Leech, G. and Svartvik, J. 1985. A Comprehensive Grammar of the English Language. London & New York: Longman.
[Rausch, Norrback and Svensson, 1992] Rausch, B., Norrback, R. and Svensson, T. 1992. Excerpering av nominalfraser ur löpande text. Manuscript. Stockholms universitet, Institutionen för lingvistik.
[Salton and McGill, 1983] Salton, G. and McGill, M. 1983. Introduction to Modern Information Retrieval. McGraw-Hill, Auckland.
[Sampson, 1987a] Sampson, G. 1987. Probabilistic Models of Analysis. In Garside, Leech and Sampson (eds.) 1987. 16-29.
[Sampson, 1987b] Sampson, G. 1987. The grammatical database and parsing scheme. In Garside, Leech and Sampson (eds.) 1987. 82-96.
[Smart (Ed.), forthcoming] Smart (Ed.) (forthcoming). Structured Information Management: Processing and Retrieval. (provisional title).
[Tapanainen, 1991] Tapanainen, P. 1991. Äärellisinä automaatteina esitettyjen kielioppisääntöjen soveltaminen luonnollisen kielen jäsentäjässä (Natural language parsing with finite-state syntactic rules). Master's thesis. Dept. of Computer Science, University of Helsinki.
[Taylor, Grover and Briscoe, 1989] Taylor, L., Grover, C. and Briscoe, T. 1989. The Syntactic Regularity of English Noun Phrases. In Proceedings of the Fourth Conference of the European Chapter of the ACL. 256-263.
[Voutilainen, forthcoming a] Voutilainen, A. (forthcoming a). Context-sensitive disambiguation. In Karlsson & al.
[Voutilainen, forthcoming b] Voutilainen, A. (forthcoming b). Experiments with heuristics. In Karlsson & al.
[Voutilainen, forthcoming 1993] Voutilainen, A. (forthcoming 1993). Designing a parsing grammar.
[Voutilainen, Heikkilä and Anttila, 1992] Voutilainen, A., Heikkilä, J. and Anttila, A. 1992. Constraint Grammar of English: A Performance-Oriented Introduction. Publication No. 21, Department of General Linguistics, University of Helsinki.
[Voutilainen and Tapanainen, 1993] Voutilainen, A. and Tapanainen, P. 1993. Ambiguity resolution in a reductionistic parser. Proceedings of EACL'93. Utrecht, Holland.
[
"A Combined CNN and LSTM Model for Arabic Sentiment Analysis",
"A Combined CNN and LSTM Model for Arabic Sentiment Analysis"
] | [
"Abdulaziz M Alayba \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"Vasile Palade \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"Matthew England \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"Rahat Iqbal \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"Alaybaa@uni Coventry Ac Uk \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"{vasile Palade \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"Matthew England \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n",
"R Iqbal \nSchool of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK\n"
] | [
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK",
"School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing\nCoventry University\nUK"
] | [] | Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels. | 10.1007/978-3-319-99740-7_12 | [
"https://arxiv.org/pdf/1807.02911v3.pdf"
] | 49,653,932 | 1807.02911 | eedb02a40212b018ae64291549d2025058e7b39d |
A Combined CNN and LSTM Model for Arabic Sentiment Analysis
Abdulaziz M Alayba
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
Vasile Palade
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
Matthew England
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
Rahat Iqbal
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
Alaybaa@uni Coventry Ac Uk
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
{vasile Palade
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
Matthew England
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
R Iqbal
School of Computing, Electronics and Mathematics Faculty of Engineering, Environment and Computing
Coventry University
UK
A Combined CNN and LSTM Model for Arabic Sentiment Analysis
Keywords: Arabic Sentiment Classification, CNN, LSTM, Natural Language Processing (NLP)
Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.
Introduction
In the past decade, social media networks have become a valuable resource for data of different types, such as texts, photos, videos, voices, GPS reading, etc. The explosion of data we experience today in many areas has led researchers in data science to develop new machine learning approaches. There were improvements in different areas, such as: Neural Networks, Deep Learning, Natural Language Processing (NLP), Computer Vision, Geolocation Detection, etc. Sentiment Analysis is one of the topics that attracted much attention from NLP and machine learning researchers. Sentiment analysis deals with the texts or the reviews of people that include opinions, sentiments, attitudes, emotions, statements about products, services, foods, films, etc. [1].
There is a certain sequence of steps to perform supervised learning for sentiment analysis, i.e., converting the text to numeric data and mapping with labels, performing feature extraction/selection to train some classifiers using a training dataset and then estimate the error on the test dataset. Sentiment analysis has various analytic levels that are: document level, sentence level, aspect level [2] [3], word level, character level [4] and sub-word level [5]. Deep neural networks have shown good performance in this area in [6], [7] and [8].
We have also obtained good results on using deep neural networks for sentiment analysis on our own dataset, an Arabic Health Services dataset, reported in [9] and [10]. We have obtained an accuracy between 0.85 and 0.91 for the main dataset in [9] using SVM, Naïve Bayes, Logistic Regression and CNNs. Also, using merged lexicon with CNNs and pre-trained Arabic word embedding, the accuracy for the main dataset was improved to 0.92, and for a Sub-dataset (as described in [10]) the obtained accuracy was between 0.87 and 0.95.
The sentiment analysis approach in this paper is a combination of two deep neural networks, i.e., a Convolutional Neural Network (CNN) and a Long Short Term Memory (LSTM) network. Kim [6] defined CNNs to have convolving filters over each input layer in order to generate the best features. CNNs have shown improvements in computer vision, natural language processing and other tasks. Athiwaratkun and Kang [11] confirmed that the CNN is a powerful tool to select features in order to improve the prediction accuracy. Gers et al. [12] showed the capabilities of LSTMs in learning data series by considering the previous outputs.
This paper first presents some background on deep neural networks and Arabic sentiment classification in Section 2. Section 3 describes the Arabic sentiment datasets we use. Section 4 illustrates the architecture of the proposed merged CNN-LSTMs Arabic sentiment analysis model. The results of the sentiment classification using the model will be presented in Section 5, which will be compared with other results. Section 6 concludes the study and the experiments, and outlines the future work.
Background and Related Work
Deep neural network models have had great success in machine learning, particularly in various tasks of NLP. For example, automatic summarization [13], question answering [14], machine translation [15], words and phrases distributed representations [16], sentiment analysis [6] and other tasks. Kim [6] proposed a deep learning model for sentiment analysis using CNNs with different convolutional filter sizes. Wang et al. [17] applied an attention-based LSTMs model for aspect-level sentiment analysis.
Arabic sentiment analysis has become a research area of interest in recent years. Abdul-Mageed et al. [18] studied the effect at sentence level on the subjectivity and sentiment classification for Modern Standard Arabic language (MSA) using an SVM classifier. Shoukry and Rafea [19] applied SVM and Naïve Bayes at sentence level for sentiment classification using 1000 tweets. Abdulla et al. [20] compared corpus-based and lexicon-based approaches for sentiment analysis. Abdulla et al. [21] addressed the challenges of lexicon construction and sentiment analysis. Badaro et al [22] created a large Arabic sentiment lexicon using English-based linking to the ESWN lexicon and WordNet approach. Duwairi et al. [23] collected over 300,000 Arabic tweets and labeled over 25,000 tweets using crowdsourcing. Al Sallab et al. [24] employed three deep learning methods for Arabic sentiment classification. Ibrahim et al. [25] showed sentiment classifications for MSA and the Egyptian dialect using different types of text data such as tweets, product reviews, etc. Dahou et al. [26] reported on the usage of Arabic pre-trained word representation with CNN increased sentiment classification performance. Tartir and Abdul-Nabi [27] concluded that a semantic approach leads to good sentiment classification results even when the dataset size is small. El-Beltagy et al. [28] enhanced the performance of a sentiment classification using a particular set of features.
Datasets
There is a lack of Arabic sentiment datasets in comparison to English. In this paper, four datasets (where one is a subset of another) will be used in the experiments. Each used only two sentiment classes, i.e., Positive and Negative sentiment.
Arabic Health Services Dataset (Main-AHS and Sub-AHS)
This is our own Arabic sentiment analysis dataset collected from Twitter. It was first presented in [9] and it has two classes (positive and negative). The dataset contains 2026 tweets and it is an unbalanced dataset that has 1398 negative tweets and 628 positive tweets. We call this dataset Main-AHS, and we selected a subset of this dataset, called Sub-AHS, which was introduced in [10]. The Sub-AHS dataset contains 1732 tweets, with 502 positive tweets and 1230 negative tweets.
Twitter Data Set (Ar-Twitter)
The authors of [20] manually built a labeled sentiment analysis dataset from Twitter using a crawler. The dataset contains 2000 tweets with two classes (Positive and Negative) and each class contains 1000 tweets. The dataset covered several topics in Arabic, such as politics, communities and arts. Some tweets in the available online dataset are missing and, hence, the size of the dataset used in our experiments is 975 negative tweets and 1000 positive tweets.
Arabic Sentiment Tweets Dataset (ASTD)
The authors of [29] presented a sentiment analysis dataset from Twitter that contains over 54,000 Arabic tweets. It has four classes (objective, subjective positive, subjective negative, and subjective mixed). However, in this paper only two classes (positive and negative) will be used and the numbers of negative and positive tweets are 1684 and 795 respectively, giving a total of 2479 tweets.
CNN-LSTM Arabic Sentiment Analysis Model
The fundamental architecture of the proposed model is shown in Figure 1 and it outlines the combination of the two neural networks: CNN and LSTM. There are no accurate tools for preprocessing Arabic text, especially non-standard Arabic text like most of the tweets. There are many forms of a single word in Arabic; for example, Arabic words differ based on gender, the tenses of the verbs, the speaker voices, etc. [31]. Table 1 shows several examples of a single Arabic verb (which has yet more forms), the pronunciation of the word in Buckwalter transliteration [30], and a description of the verb's type. There will be three different levels of sentiment analysis for each proposed dataset. The reason for using different levels is to try to expand the number of features in short tweets and to deal with the many forms of a single word in Arabic. This is an example tweet " " and the English translation of this tweet is 'Health services are generally good'. The levels are as follows.
The first level is the Character Level (Char-level). At this level, the number of features is increased; in the above example, the number of characters is 24 for the Arabic tweet, and each letter represents one feature.
The second level is the Character N-Gram Level (Ch5gram-level): we measure the length of all the words in each dataset and calculate the average word length, which is five characters for all the datasets used here. Any word longer than the average is split into several five-character sub-words, whereas any word of five characters or fewer is kept as it is. The third level is the Word Level (Word-level), where each word in the tweet is one token.

The input data layer is represented as a fixed-dimension matrix of different vector embeddings based on the different sentiment analysis levels. Each sentiment analysis level has different tokens; for example, in the Char-level the token is a single character. In the Ch5gram-level, the token is a whole word if the length of the word is five characters or less, while any word that has more than five letters is split into five-character grams, as described for the Ch5gram-level above. In the Word-level, the tokens are based on the words in each tweet. Each token is represented as a fixed-size vector in the input matrix. Then, multiple convolutional filters slide over the matrix to produce new feature maps; the filters have various sizes in order to generate different features. The Max-pooling layer calculates the maximum value as a corresponding feature to a specific filter. The output vectors of the Max-pooling layer become inputs to the LSTM networks to measure the long-term dependencies of feature sequences. The output vectors of the LSTMs are concatenated and an activation function is applied to generate the final output: either positive or negative.
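The three levels differ only in how a tweet is split into tokens before embedding. The sketch below is our own illustration in Python, over an English stand-in for the Arabic example; in particular, the Ch5gram splitting is shown as overlapping five-character sub-words, which is one natural reading of the description above:

def char_tokens(text):
    return list(text)                       # every character (including spaces) is one token

def word_tokens(text):
    return text.split()

def ch5gram_tokens(text, n=5):
    """Words longer than n characters are split into overlapping n-grams;
    shorter words are kept whole (one reading of the Ch5gram-level)."""
    tokens = []
    for word in text.split():
        if len(word) <= n:
            tokens.append(word)
        else:
            tokens.extend(word[i:i + n] for i in range(len(word) - n + 1))
    return tokens

tweet = "health services are generally good"
print(len(char_tokens(tweet)), "character tokens")
print(word_tokens(tweet))
print(ch5gram_tokens(tweet))   # ['healt', 'ealth', 'servi', 'ervic', 'rvice', 'vices', 'are', ...]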
Input Layer
This is the first layer in the model and it represents each tweet as a row of vectors. Each vector represents a token, based on the sentiment analysis level used. Each different level has a different token to be embedded; for instance, in the Char-level each character in the tweet is represented by a vector with a fixed size of 100. Each word in the tweet, which is one token in the Word-level, is embedded into a vector of length 100, and the same holds for each token in the Ch5gram-level. This layer is a matrix of size w × v, where v is the length of the vector and w is the number of tokens in the tweet. The value of w is the maximum length of a tweet. Any tweet that contains fewer than the maximum number of tokens is padded with <Pad> so that all tweets have the same length as the longest one. For instance, the maximum length of tweets at the character level in the Main-AHS dataset is 241 tokens, and any tweet that has fewer tokens is padded to 241 to obtain the same length. Each matrix at the Character level in the Main-AHS dataset therefore has the size 241×100.
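A rough sketch of how one tweet becomes such a fixed-size matrix is given below; it is illustrative only (NumPy, with randomly initialised embedding vectors standing in for learned or pre-trained ones), using the character-level Main-AHS figures quoted above:

import numpy as np

EMBED_DIM = 100        # vector length used in the paper
MAX_LEN   = 241        # maximum character-level tweet length for Main-AHS
PAD       = "<Pad>"

def build_input_matrix(tokens, vocab, embeddings, max_len=MAX_LEN):
    """Pad the token sequence to max_len and stack the token vectors
    into a (max_len, EMBED_DIM) matrix."""
    padded = (tokens + [PAD] * max_len)[:max_len]
    return np.stack([embeddings[vocab[t]] for t in padded])

tokens = list("health services are generally good")
vocab = {PAD: 0}
for t in tokens:
    vocab.setdefault(t, len(vocab))
embeddings = np.random.uniform(-0.05, 0.05, size=(len(vocab), EMBED_DIM))

matrix = build_input_matrix(tokens, vocab, embeddings)
print(matrix.shape)    # (241, 100)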
Convolutional Layer
Each input layer contains a sequence of vectors, which is scanned using filters of fixed size. For example, we used a filter size of 3 at the Word-level to extract 3-gram features over words, a filter size of 20 at the Char-level and a filter size of 10 at the Ch5gram-level. Each filter strides over the matrix one column and one row at a time. Each filter detects multiple features in a tweet using the ReLU [32] activation function and represents them in the feature map.
Max-Pooling Layer
After the Convolutional layer, the Max-pooling layer down-samples the features in the feature map. The max operation is the most commonly used technique for this layer and is the one used in this experiment. The reason for selecting the highest value is to capture the most important feature and to reduce the computation in later layers. Dropout is then applied to reduce overfitting, with a dropout value of 0.5.
LSTM Layer
One of the advantages of LSTMs is their ability to capture sequential structure by taking previous inputs into account. This layer takes the output vectors of the dropout layer as inputs. The layer has a set number of units or cells, and the input to each cell is the output of the dropout layer. The final output of this layer has the same dimensionality as the number of units in the network.
Fully Connected Layer
The outputs of the LSTMs are merged into a single matrix and passed to a fully connected layer. The fully connected layer converts this array into a single output in the range between 0 and 1, which is finally classified using the sigmoid function [33].
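A minimal Keras-style sketch of this pipeline is shown below. It is an illustration only: the vocabulary size, number of filters, number of LSTM units and the single-branch layout are assumptions rather than the authors' reported configuration; only the embedding size (100), the level-specific filter sizes, the dropout value (0.5) and the sigmoid output follow the description above.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(max_len, vocab_size, filter_size, emb_dim=100,
                   n_filters=64, lstm_units=64):
    # Input: a tweet as a sequence of token ids (character, 5-gram or word ids).
    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, emb_dim),           # w x 100 input matrix
        layers.Conv1D(n_filters, filter_size, activation="relu"),
        layers.MaxPooling1D(pool_size=2),                # down-sample the feature map
        layers.Dropout(0.5),
        layers.LSTM(lstm_units),                         # long-term dependencies
        layers.Dense(1, activation="sigmoid"),           # positive / negative
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Character level on Main-AHS: sequences padded to 241 tokens, filter size 20.
char_model = build_cnn_lstm(max_len=241, vocab_size=1000, filter_size=20)
```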
Experiments and Results
These experiments aimed to evaluate a deep learning model that combines a CNN and an LSTM. The learning performance of the model is measured using the accuracy of the classifier [34].
Acc = (TP + TN) / (TP + TN + FP + FN).    (1)
Here, TP is the number of tweets that are positive and predicted correctly as positive, TN is the number of tweets that are negative and predicted correctly as negative, FP is the number of tweets that are negative but predicted incorrectly as positive, and FN is the number of tweets that are positive but predicted incorrectly as negative.
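As a small illustration, the accuracy in Equation (1) can be computed from binary predictions as follows (a plain sketch, not tied to the authors' code or any particular framework).

```python
def accuracy(y_true, y_pred):
    # y_true, y_pred: lists of 0/1 labels (1 = positive, 0 = negative).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```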
Table 2. Accuracy comparison of the proposed method with different sentiment levels and other models for the same datasets.
Sentiment Level | Main-AHS | Sub-AHS | Ar-Twitter | ASTD [20]
All the experiments, across datasets and sentiment analysis levels, use the same training/test split: the training set contains 80% of each dataset and the test set the remaining 20%. The model is trained on the training set and the test set is used to measure its performance. The number of epochs is 50 for all experiments. Table 2 shows the accuracy results over the 50 epochs for the four datasets using the different sentiment levels; the best result for each of the three levels is underlined. Table 2 also compares our results with those published in other papers. It is clear from Table 2 that the proposed model improves sentiment classification performance on three datasets, Main-AHS, Sub-AHS and Ar-Twitter, while it is lower than [26] on the ASTD dataset by only a small margin. Figures 2, 3, 4 and 5 illustrate the accuracies on the different datasets over 50 epochs, with each line representing a different sentiment analysis level. Char-level generally has the lowest accuracy across datasets, although on Ar-Twitter it overtakes the Ch5gram-level after 23 epochs. Word-level achieves the best accuracy on the Main-AHS and Ar-Twitter datasets and performs similarly to the Ch5gram-level on Sub-AHS.

Fig. 2. Accuracy on the test set for the Main-AHS dataset using different sentiment analysis levels.
Conclusions and Future Work
This paper investigated the benefits of combining CNN and LSTM networks for an Arabic sentiment classification task. It also explored the effectiveness of using different levels of sentiment analysis, motivated by the complexities of Arabic morphology and orthography. We used the character level to increase the number of features for each tweet, since we are dealing with short messages, but this turned out not to be the ideal option for our model; the Word-level and Ch5gram-level produced better sentiment classification results.

Fig. 3. Accuracy on the test set for the Sub-AHS dataset using different sentiment analysis levels.
Fig. 4. Accuracy on the test set for the Ar-Twitter dataset using different sentiment analysis levels.
This approach improved the sentiment classification accuracy on our Arabic Health Services (AHS) data, reaching 0.9424 on the Main-AHS dataset and 0.9568 on the Sub-AHS dataset, compared to our previous results in [10], which were 0.92 for the Main-AHS dataset and 0.95 for the Sub-AHS dataset. Future work will use pre-trained word representation models, such as word2vec [16], GloVe [35] and fastText [36], for the embedding layer.
Level (Char-level): the sentence is converted into characters instead of words. The Char-level for the English example is ['H', 'e', 'a', 'l', 't', 'h', 's', 'e', 'r', 'v', 'i', 'c', 'e', 's', 'a', 'r', 'e', 'g', 'e', 'n', 'e', 'r', 'a', 'l', 'l', 'y', 'g', 'o', 'o', 'd']. The Ch5gram-level for the English example is ['Healt', 'ealth', 'servi', 'ervic', 'rvice', 'vices', 'are', 'gener', 'enera', 'neral', 'erall', 'rally', 'good']. This level can be useful for dealing with the many forms of Arabic words, especially words with more than five letters, and it also expands the number of features. The third level is Word Level (Word-level), where the sentence is divided into words using the space as splitter; the Word-level for the English example is ['Health', 'services', 'are', 'generally', 'good']. This level is the most commonly chosen option in the field of sentiment analysis.
Fig. 1. A Combined CNN-LSTM Model architecture for an Arabic example sentence.
Fig. 5. Accuracy on the test set for the ASTD dataset using different sentiment analysis levels.
Table 1. Some examples of multiple forms of a single Arabic verb.

Buckwalter Arabic encoding | Word type
fEl    | Masculine Verb - past tense for singular
fElt   | Feminine Verb - past tense for singular
yfEl   | Masculine Verb - present tense for singular
tfEl   | Feminine Verb - present tense for singular
yfElAn | Masculine Verb - present tense for dual
tfElAn | Feminine Verb - present tense for dual
yfElwn | Masculine Verb - present tense for plural
yfEln  | Feminine Verb - present tense for plural
Sentiment Analysis and Opinion Mining. B Liu, Morgan & ClaypoolLiu, B.: Sentiment Analysis and Opinion Mining. Morgan & Claypool, (2012).
Levels of sentiment analysis and its challenges: A literature review. P Balaji, O Nagaraju, D Haritha, International Conference on Big Data Analytics and Computational Intelligence (ICBDAC) 2017. Chirala, India6Balaji, P., Nagaraju, O., Haritha, D.: Levels of sentiment analysis and its challenges: A literature review. In: International Conference on Big Data Analytics and Com- putational Intelligence (ICBDAC) 2017, vol. 6, pp. 436-439. Chirala, India, (2017).
Techniques and Applications for Sentiment Analysis. R Feldman, Commun. ACM. 564Feldman, R.: Techniques and Applications for Sentiment Analysis. Commun. ACM 56(4), 82-89 (2013).
GradAscent at EmoInt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection. E Lakomkin, C Bothe, S Wermter, Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Editor, F., Editor, S.the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media AnalysisCopenhagen, DenmarkLakomkin, E., Bothe, C., Wermter, S.: GradAscent at EmoInt-2017: Character and Word Level Recurrent Neural Network Models for Tweet Emotion Intensity Detec- tion. In: Editor, F., Editor, S. (eds.) Proceedings of the 8th Workshop on Computa- tional Approaches to Subjectivity, Sentiment and Social Media Analysis 2017, pp. 169-174. ACL, Copenhagen, Denmark (2017).
Towards Sub-Word Level Compositions for Sentiment Analysis of Hindi-English Code Mixed Text. A Joshi, A Prabhu, M Shrivastava, V Varma, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanThe COLING 2016 Organizing CommitteeJoshi, A., Prabhu, A., Shrivastava, M., Varma, V.: Towards Sub-Word Level Com- positions for Sentiment Analysis of Hindi-English Code Mixed Text. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers 2016, pp. 2482-2491. The COLING 2016 Organizing Committee, Osaka, Japan (2016).
Convolutional Neural Networks for Sentence Classification. Y Kim, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarKim, Y.: Convolutional Neural Networks for Sentence Classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1746-1751. ACL, Doha, Qatar (2014).
Multichannel Variable-Size Convolution for Sentence Classification. W Yin, H Schütze, Proceedings of the Nineteenth Conference on Computational Natural Language Learning. the Nineteenth Conference on Computational Natural Language LearningBeijing, ChinaYin, W., Schütze, H.: Multichannel Variable-Size Convolution for Sentence Classi- fication. Proceedings of the Nineteenth Conference on Computational Natural Lan- guage Learning, pp. 204-214. ACL, Beijing, China (2015).
Lexicon Integrated CNN Models with Attention for Sentiment Analysis. B Shin, T Lee, J D Choi, Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media AnalysisCopenhagen, DenmarkShin, B., Lee, T., Choi, J. D.: Lexicon Integrated CNN Models with Attention for Sentiment Analysis. Proceedings of the 8th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Media Analysis, pp. 149-158. ACL, Copenhagen, Denmark (2017).
Arabic Language Sentiment Analysis on Health Services. A M Alayba, V Palade, M England, R Iqbal, 2017 1st International Workshop on Arabic Script Analysis and Recognition (ASAR). Nancy, FranceIEEEAlayba, A. M., Palade, V., England, M., Iqbal, R.: Arabic Language Sentiment Analysis on Health Services. In: 2017 1st International Workshop on Arabic Script Analysis and Recognition (ASAR), pp. 114-118, IEEE, Nancy, France (2017).
Improving Sentiment Analysis in Arabic Using Word Representation. A M Alayba, V Palade, M England, R Iqbal, 2nd International Workshop on Arabic and Derived Script Analysis and Recognition (ASAR). London, UKIEEEAlayba, A. M., Palade, V., England, M., Iqbal, R.: Improving Sentiment Analysis in Arabic Using Word Representation. In: 2018 2nd International Workshop on Arabic and Derived Script Analysis and Recognition (ASAR), pp. 13-18, IEEE, London, UK (2018).
B Athiwaratkun, K Kang, arXiv:1507.02313Feature Representation in Convolutional Neural Networks. arXiv preprintAthiwaratkun, B., Kang, K.: Feature Representation in Convolutional Neural Net- works. arXiv preprint arXiv:1507.02313, (2015).
Gers, F.A., Eck, D., Schmidhuber, J.: Applying LSTM to Time Series Predictable Through Time-Window Approaches. In: Tagliaferri, R., Marinaro, M. (eds.) Neural Nets WIRN Vietri-01, Perspectives in Neural Computing, pp. 193-200. Springer, London (2002).
Text Summarization Using Unsupervised Deep Learning. M Yousefi-Azar, L Hamey, Expert Systems with Applications. 68Yousefi-Azar, M., Hamey, L.: Text Summarization Using Unsupervised Deep Learning. Expert Systems with Applications 68, 93-105 (2017).
Semantic Parsing for Single-Relation Question Answering. S W Yih, X He, C Meek, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. the 52nd Annual Meeting of the Association for Computational LinguisticsBaltimore, Maryland, USAShort Papers2Yih, S. W., He, X., Meek, C.: Semantic Parsing for Single-Relation Question An- swering. In: Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics, (vol. 2: Short Papers), pp. 643-648. ACL, Baltimore, Maryland, USA (2014).
Joint Language and Translation Modeling with Recurrent Neural Networks. M W Auli, M Galley, C Quirk, G Zweig, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingWashington, USAAuli, M. W., Galley, M., Quirk, C., Zweig, G.: Joint Language and Translation Modeling with Recurrent Neural Networks. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1044-1054. ACL, Seat- tle, Washington, USA (2013).
Distributed Representations of Words and Phrases and Their Compositionality. T Mikolov, I Sutskever, K Chen, G Corrado, J Dean, Proceedings of the 26th International Conference on Neural Information Processing Systems NIPS'2013. the 26th International Conference on Neural Information Processing Systems NIPS'2013Lake Tahoe, Nevada, USACurran Associates Inc2Mikolov, T., Sutskever, I., Chen, K., Corrado, G., Dean, J.: Distributed Representa- tions of Words and Phrases and Their Compositionality. In: Proceedings of the 26th International Conference on Neural Information Processing Systems NIPS'2013, vol. 2, pp. 3111-3119. Curran Associates Inc., Lake Tahoe, Nevada, USA (2013).
Attention-based LSTM for Aspect-level Sentiment Classification. Y Wang, M Huang, L Zhao, X Zhu, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasWang, Y., Huang, M., Zhao, L., Zhu, X.: Attention-based LSTM for Aspect-level Sentiment Classification. In: Proceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing 2016, pp. 606-615. ACL, Austin, Texas (2016).
Subjectivity and Sentiment Analysis of Modern Standard Arabic. M Abdul-Mageed, M T Diab, M Korayem, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies HLT 2011: Short Papers. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies HLT 2011: Short PapersStroudsburg, PA, USA2Abdul-Mageed, M., Diab, M. T., Korayem, M.: Subjectivity and Sentiment Anal- ysis of Modern Standard Arabic. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies HLT 2011: Short Papers -Volume 2, pp. 587-591. ACL, Stroudsburg, PA, USA (2011).
Sentence-Level Arabic Sentiment Analysis. A Shoukry, A Rafea, 2012 International Conference on Collaboration Technologies and Systems (CTS). Denver, CO, USAIEEEShoukry, A., Rafea, A.: Sentence-Level Arabic Sentiment Analysis. In: 2012 Inter- national Conference on Collaboration Technologies and Systems (CTS), pp. 546- 550, IEEE, Denver, CO, USA (2012).
Arabic sentiment analysis: Lexicon-based and corpus-based. N A Abdulla, N A Ahmed, M A Shehab, M Al-Ayyoub, 2013 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT). Amman, JordanIEEEAbdulla, N. A., Ahmed, N. A., Shehab, M. A., Al-Ayyoub, M.: Arabic sentiment analysis: Lexicon-based and corpus-based. In: 2013 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), pp. 1-6, IEEE, Amman, Jordan (2013).
Automatic Lexicon Construction for Arabic Sentiment Analysis. N Abdulla, R Majdalawi, S Mohammed, M Al-Ayyoub, M Al-Kabi, 2014 International Conference on Future Internet of Things and Cloud. Barcelona, SpainIEEEAbdulla, N., Majdalawi, R., Mohammed, S., Al-Ayyoub, M., Al-Kabi, M.: Auto- matic Lexicon Construction for Arabic Sentiment Analysis. In: 2014 International Conference on Future Internet of Things and Cloud, pp. 547-552, IEEE, Barcelona, Spain (2014).
A Large Scale Arabic Sentiment Lexicon for Arabic Opinion Mining. G Badaro, R Baly, H Hajj, N Habash, W El-Hajj, Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP). the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)ACL, Doha, QatarBadaro, G., Baly, R., Hajj, H., Habash, N., El-Hajj, W.: A Large Scale Arabic Sentiment Lexicon for Arabic Opinion Mining. In: Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pp. 165-173, ACL, Doha, Qatar (2014).
Sentiment Analysis in Arabic tweets. R M Duwairi, R Marji, N Sha'ban, S Rushaidat, 2014 5th International Conference on Information and Communication Systems (ICICS). Irbid, JordanIEEEDuwairi, R. M., Marji, R., Sha'ban, N., Rushaidat, S.: Sentiment Analysis in Arabic tweets. In: 2014 5th International Conference on Information and Communication Systems (ICICS), pp. 1-6, IEEE, Irbid, Jordan (2014).
Deep Learning Models for Sentiment Analysis in Arabic. Al Sallab, A Hajj, H Badaro, G Baly, B El Haj, W Shaban, K B , Proceedings of the Second Workshop on Arabic Natural Language Processing. the Second Workshop on Arabic Natural Language ProcessingBeijing, ChinaAl Sallab, A., Hajj, H., Badaro, G., Baly, B., El Haj, W., Shaban, K. B.: Deep Learning Models for Sentiment Analysis in Arabic. In: Proceedings of the Second Workshop on Arabic Natural Language Processing, pp. 9-17, ACL, Beijing, China (2015).
Sentiment Analysis for Modern Standard Arabic and Colloquial. H F Ibrahim, S M Abdou, M Gheith, International Journal on Natural Language Computing (IJNLC). 42Ibrahim, H. F., Abdou, S. M., Gheith, M.: Sentiment Analysis for Modern Stan- dard Arabic and Colloquial. International Journal on Natural Language Computing (IJNLC) 4(2), 95-109 (2015).
Word Embeddings and Convolutional Neural Network for Arabic Sentiment Classification. A Dahou, S Xiong, J Zhou, M H Haddoud, P Duan, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanThe COLING 2016 Organizing CommitteeDahou, A., Xiong, S., Zhou, J., Haddoud, M. H., Duan, P.: Word Embeddings and Convolutional Neural Network for Arabic Sentiment Classification. In: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 2418-2427, The COLING 2016 Organizing Committee, Osaka, Japan (2016).
ASemantic Sentiment Analysis in Arabic Social Media. S Tartir, I Abdul-Nabi, Journal of King Saud University -Computer and Information Sciences. 292Tartir, S., Abdul-Nabi, I.: ASemantic Sentiment Analysis in Arabic Social Media. Journal of King Saud University -Computer and Information Sciences 29(2), 229- 233 (2017).
Combining Lexical Features and a Supervised Learning Approach for Arabic Sentiment Analysis. S R El-Beltagy, T Khalil, A Halaby, M Hammad, Computational Linguistics and Intelligent Text Processing. CICLing. Gelbukh A.ChamSpringer9624El-Beltagy S.R., Khalil T., Halaby A., Hammad M.: Combining Lexical Features and a Supervised Learning Approach for Arabic Sentiment Analysis. In: Gelbukh A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2016. Lecture Notes in Computer Science, vol 9624. Springer, Cham (2018).
ASTD: Arabic Sentiment Tweets Dataset. M Nabil, M Aly, A Atiya, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingACL, Lisbon, PortugalNabil, M., Aly, M., Atiya, A.: ASTD: Arabic Sentiment Tweets Dataset. In: Pro- ceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing, pp. 2515-2519, ACL, Lisbon, Portugal (2015).
Smrž, O.: Encode Arabic Online Interface, http://quest.ms.mff.cuni.cz/cgi-bin/encode/index.fcgi. Last accessed 18 June 2018.
Structure and Function of the Arabic Verb. M Bahloul, Routledge. Bahloul, M.: Structure and Function of the Arabic Verb. Routledge, London (2008).
Keras, https://keras.io. Last accessed 15 Apr 2018.
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning. J Han, M Claudio, Proceedings of the International Workshop on Artificial Neural Networks: From Natural to Artificial Neural Computation. the International Workshop on Artificial Neural Networks: From Natural to Artificial Neural ComputationLondon, UKSpringer-VerlagHan, J. and M., Claudio: The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning. In: Proceedings of the International Workshop on Artificial Neural Networks: From Natural to Artificial Neural Computation, pp. 195-201, Springer-Verlag, London, UK (1995).
Introduction to Information Retrieval. C Manning, P Raghavan, H Schütze, Cambridge University PressNew York, NY, USA1st ednManning, C., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. 1st edn. Cambridge University Press, New York, NY, USA (2008).
GloVe: Global Vectors for Word Representation. J Pennington, R Socher, C D Manning, Empirical Methods in Natural Language Processing (EMNLP). Doha, QatarPennington, J., Socher, R., Manning, C. D.: GloVe: Global Vectors for Word Rep- resentation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, ACL, Doha, Qatar (2014).
Enriching Word Vectors with Subword Information. P Bojanowski, E Grave, A Joulin, T Mikolov, Transactions of the Association for Computational Linguistics. 5Bojanowski, P., Grave, E., Joulin, A. Mikolov, T.: Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics 5, 135-146 (2017).
| [] |
[
"Multilingual Factor Analysis",
"Multilingual Factor Analysis"
] | [
"Francisco Vargas ",
"Kamen Brestnichki ",
"Alex Papadopoulos-Korfiatis ",
"Nils Hammerla ",
"Babylon Health "
] | [] | [] | In this work we approach the task of learning multilingual word representations in an offline manner by fitting a generative latent variable model to a multilingual dictionary. We model equivalent words in different languages as different views of the same word generated by a common latent variable representing their latent lexical meaning. We explore the task of alignment by querying the fitted model for multilingual embeddings achieving competitive results across a variety of tasks. The proposed model is robust to noise in the embedding space making it a suitable method for distributed representations learned from noisy corpora. | 10.18653/v1/p19-1170 | [
"https://arxiv.org/pdf/1905.05547v1.pdf"
] | 153,311,735 | 1905.05547 | 0d28aec557460098967455aa00b007114f7c988e |
Multilingual Factor Analysis
Francisco Vargas
Kamen Brestnichki
Alex Papadopoulos-Korfiatis
Nils Hammerla
Babylon Health
Multilingual Factor Analysis
In this work we approach the task of learning multilingual word representations in an offline manner by fitting a generative latent variable model to a multilingual dictionary. We model equivalent words in different languages as different views of the same word generated by a common latent variable representing their latent lexical meaning. We explore the task of alignment by querying the fitted model for multilingual embeddings achieving competitive results across a variety of tasks. The proposed model is robust to noise in the embedding space making it a suitable method for distributed representations learned from noisy corpora.
Introduction
Popular approaches for multilingual alignment of word embeddings base themselves on the observation in (Mikolov et al., 2013a), which noticed that continuous word embedding spaces (Mikolov et al., 2013b;Pennington et al., 2014;Bojanowski et al., 2017;Joulin et al., 2017) exhibit similar structures across languages. This observation has led to multiple successful methods in which a direct linear mapping between the two spaces is learned through a least squares based objective (Mikolov et al., 2013a;Smith et al., 2017;Xing et al., 2015) using a paired bilingual dictionary.
An alternate set of approaches based on Canonical Correlation Analysis (CCA) (Knapp, 1978) seek to project monolingual embeddings into a shared multilingual space (Faruqui and Dyer, 2014b;Lu et al., 2015). Both these methods aim to exploit the correlations between the monolingual vector spaces when projecting into the aligned multilingual space. The multilingual embeddings from (Faruqui and Dyer, 2014b;Lu et al., 2015) are shown to improve on word level semantic tasks, which sustains the authors' claim that multilingual information enhances semantic spaces.
In this paper we present a new non-iterative method based on variants of factor analysis (Browne, 1979;McDonald, 1970;Browne, 1980) for aligning monolingual representations into a multilingual space. Our generative modelling assumes that a single word translation pair is generated by an embedding representing the lexical meaning of the underlying concept. We achieve competitive results across a wide range of tasks compared to state-of-the-art methods, and we conjecture that our multilingual latent variable model has sound generative properties that match those of psycholinguistic theories of the bilingual mind (Weinreich, 1953). Furthermore, we show how our model extends to more than two languages within the generative framework which is something that previous alignment models are not naturally suited to, instead resorting to combining bilingual models with a pivot as in (Ammar et al., 2016).
Additionally the general benefit of the probabilistic setup as discussed in (Tipping and Bishop, 1999) is that it offers the potential to extend the scope of conventional alignment methods to model and exploit linguistic structure more accurately. An example of such a benefit could be modelling how corresponding word translations can be generated by more than just a single latent concept. This assumption can be encoded by a mixture of Factor Analysers (Ghahramani et al., 1996) to model word polysemy in a similar fashion to (Athiwaratkun and Wilson, 2017), where mixtures of Gaussians are used to reflect the different meanings of a word.
The main contribution of this work is the application of a well-studied graphical model to a novel domain, outperforming previous approaches on word and sentence-level translation retrieval tasks. We put the model through a battery of tests, showing it aligns embeddings across languages well, while retaining performance on monolingual word-level and sentence-level tasks. Finally, we apply a natural extension of this model to more languages in order to align three languages into a single common space.
Background
Previous work on the topic of embedding alignment has assumed that alignment is a directed procedure -i.e. we want to align French to English embeddings. However, another approach would be to align both to a common latent space that is not necessarily the same as either of the original spaces. This motivates applying a well-studied latent variable model to this problem.
Factor Analysis
Factor analysis (Spearman, 1904; Thurstone, 1931) is a technique originally developed in psychology to study the correlation of latent factors z ∈ R^k with observed measurements x ∈ R^d. Formally:

p(z) = N(z; 0, I),
p(x|z) = N(x; W z + µ, Ψ).
In order to learn the parameters W , Ψ of the model we maximise the marginal likelihood p(x|W , Ψ) with respect to W , Ψ. The maximum likelihood estimates of these procedures can be used to obtain latent representations for a given observation E p(z|x) [z]. Such projections have been found to be generalisations of principal component analysis (Pearson, 1901) as studied in (Tipping and Bishop, 1999).
Inter-Battery Factor Analysis
Figure 2: Graphical model for MBFA. Latent space z represents the aligned shared space between the multiple vector spaces {x_j}, j = 1, ..., v.

Inter-Battery Factor Analysis (IBFA) (Tucker, 1958; Browne, 1979) is an extension of factor analysis that adapts it to two sets of variables x ∈ R^d, y ∈ R^d (i.e. embeddings of two languages). In this setting it is assumed that pairs of observations are generated by a shared latent variable z:

p(z) = N(z; 0, I),
p(x|z) = N(x; W_x z + µ_x, Ψ_x),
p(y|z) = N(y; W_y z + µ_y, Ψ_y).    (1)
As in traditional factor analysis, we seek to estimate the parameters that maximise the marginal likelihood:

argmax_{Ψ_i, W_i}  ∏_k p(x^(k), y^(k) | {Ψ_i, W_i}_i),   subject to Ψ_i ≻ 0, (W_i^T W_i) ≻ 0,    (2)

where the joint marginal p(x^(k), y^(k) | {Ψ_i, W_i}_i) is a Gaussian of the form

N( [x; y] ; [µ_x; µ_y], [Σ_xx, Σ_xy; Σ_yx, Σ_yy] ),   Σ_ij = W_i W_j^T + δ_ij Ψ_i,

and Ψ ≻ 0 means Ψ is positive definite. Maximising the likelihood as in Equation 2 will find the optimal parameters for the generative process described in Figure 1, where one latent z is responsible for generating a pair x, y. This makes it a suitable objective for aligning the vector spaces of x and y in the latent space. In contrast to the discriminative directed methods in (Mikolov et al., 2013a; Smith et al., 2017; Xing et al., 2015), IBFA has the capacity to model noise.
We can re-interpret the logarithm of Equation 2 (as shown in Appendix D) as

Σ_k log p(x^(k), y^(k) | θ) = C + Σ_k (L_{y|x}^(k) + L_x^(k)),    (3)

L_{y|x}^(k) = -(1/2) ||ỹ^(k) - W_y E_{p(z|x^(k))}[z]||²_{Σ_{y|x}},
L_x^(k)    = -(1/2) ||x̃^(k) - W_x E_{p(z|x^(k))}[z]||²_{Ψ_x Σ_x^{-1} Ψ_x},
C = -(N/2) (log |2πΣ_{y|x}| + log |2πΣ_x|).
The exact expression for Σ y|x is given in the same appendix. This interpretation shows that for each pair of points, the objective is to minimise the reconstruction errors of x and y, given a projection into the latent space E p(z|x k ) [z]. By utilising the symmetry of Equation 2, we can show the converse is true as well -maximising the joint probability also minimises the reconstruction loss given the latent projections E p(z|y k ) [z]. Thus, this forces the latent embeddings of x k and y k to be close in the latent space. This provides intuition as to why embedding into this common latent space is a good alignment procedure.
In (Browne, 1979; Bach and Jordan, 2005) it is shown that the maximum likelihood estimates for {Ψ_i, W_i} can be attained in closed form:

Ŵ_i = S_ii U_i P^{1/2},   Ψ̂_i = S_ii - Ŵ_i Ŵ_i^T,   µ̂_x = x̄,   µ̂_y = ȳ,

where

S_xx = (1/m) Σ_i x̃^(i) x̃^(i)T,   S_yy = (1/m) Σ_i ỹ^(i) ỹ^(i)T,
U_i = S_ii^{-1/2} V_i,   V_x P V_y^T = SVD(S_xx^{-1/2} S_xy S_yy^{-1/2}).

The projections into the latent space from x are given by (as proved in Appendix B)

E_{p(z|x)}[z] = (I + W_x^T Ψ_x^{-1} W_x)^{-1} W_x^T Ψ_x^{-1} x̃,   x̃ = x - µ_x.    (4)

Evaluated at the MLE, (Bach and Jordan, 2005) show that Equation 4 can be reduced to

E_{p(z|x)}[z] = P^{1/2} U_x^T (x - µ_x).
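A small NumPy sketch of this closed-form fit and projection is given below. It is an illustration of the equations above under simplifying assumptions: the latent dimensionality is kept equal to the embedding dimensionality, and a small ridge term is added before the inverse square roots for numerical stability (the ridge is not part of the original derivation).

```python
import numpy as np

def inv_sqrt(S, eps=1e-8):
    # Symmetric inverse square root, with a small ridge for stability (assumption).
    vals, vecs = np.linalg.eigh(S + eps * np.eye(S.shape[0]))
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def fit_ibfa(X, Y):
    # X, Y: m x d matrices of paired embeddings (rows are dictionary pairs).
    mu_x, mu_y = X.mean(0), Y.mean(0)
    Xc, Yc = X - mu_x, Y - mu_y
    m = X.shape[0]
    Sxx, Syy, Sxy = Xc.T @ Xc / m, Yc.T @ Yc / m, Xc.T @ Yc / m
    Vx, p, VyT = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
    Ux, Uy = inv_sqrt(Sxx) @ Vx, inv_sqrt(Syy) @ VyT.T
    return mu_x, mu_y, Ux, Uy, np.sqrt(p)

def project_x(x, mu_x, Ux, p_half):
    # E[z|x] = P^{1/2} U_x^T (x - mu_x), i.e. Equation (4) evaluated at the MLE.
    return p_half * (Ux.T @ (x - mu_x))
```

Retrieval then amounts to projecting a query word with project_x, projecting the candidate vocabulary with the analogous projection for y, and taking nearest neighbours in the shared latent space.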
Multiple-Battery Factor Analysis
Multiple-Battery Factor Analysis (MBFA) (McDonald, 1970; Browne, 1980) is a natural extension of IBFA that models more than two views of observables (i.e. multiple languages), as shown in Figure 2.
Formally, for a set of views {x 1 , ..., x v }, we can write the model as
p(z) = N (z; 0, I), p(x i |z) = N (x i ; W i z + µ i , Ψ i ).
Similar to IBFA, the projections to the latent space are given by Equation 4, and the marginal takes a very similar form:

N( [x_1; ...; x_v] ; [µ_1; ...; µ_v],
   [W_1 W_1^T + Ψ_1,  ...,  W_1 W_v^T;
    ...,              ...,  ...;
    W_v W_1^T,        ...,  W_v W_v^T + Ψ_v] ).
Unlike IBFA, a closed-form solution for maximising the marginal likelihood of MBFA is unknown. Because of this, we have to resort to iterative approaches as in (Browne, 1980), such as the natural extension of the EM algorithm proposed by (Bach and Jordan, 2005). Defining

M_t = (I + W_t^T Ψ_t^{-1} W_t)^{-1},   B_t = M_t W_t^T Ψ_t^{-1},   Ψ̃_{t+1} = S - S Ψ_t^{-1} W_t M_t W_{t+1}^T,

the EM updates are given by

W_{t+1} = S B_t^T (M_t + B_t S B_t^T)^{-1},
Ψ_{t+1} = Bdiag( (Ψ̃_{t+1})_{11}, ..., (Ψ̃_{t+1})_{vv} ),
where S is the sample covariance matrix of the concatenated views (derivation provided in Appendix E). (Browne, 1980) shows that the MLE of the parameters of MBFA is uniquely identifiable (up to a rotation that does not affect the method's performance). We observed this in an empirical study -the solutions we converge to are always a rotation away from each other, irrespective of the parameters' initialisation. This heavily suggests that any optimum is a global optimum and thus we restrict ourselves to only reporting results we observed when fitting from a single initialisation. The chosen initialisation point is provided as Equation (3.25) of (Browne, 1980).
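These updates can be sketched in NumPy as follows. The snippet is a simplified illustration only: it assumes a single mean-centred matrix X of concatenated views (shape m x D, with D the sum of the view dimensionalities), a chosen latent dimensionality k, and a naive random initialisation rather than the initialisation of Equation (3.25) of (Browne, 1980).

```python
import numpy as np

def em_mbfa(X, dims, k, n_iter=100, seed=0):
    # X: m x D matrix of mean-centred, concatenated views; dims: list of view sizes.
    rng = np.random.RandomState(seed)
    D = sum(dims)
    S = X.T @ X / X.shape[0]              # sample covariance of the concatenated views
    W = 0.01 * rng.randn(D, k)            # naive initialisation (assumption)
    Psi = np.eye(D)
    for _ in range(n_iter):
        Psi_inv = np.linalg.inv(Psi)
        M = np.linalg.inv(np.eye(k) + W.T @ Psi_inv @ W)
        B = M @ W.T @ Psi_inv
        W_new = S @ B.T @ np.linalg.inv(M + B @ S @ B.T)
        Psi_tilde = S - S @ B.T @ W_new.T
        # Impose the block-diagonal constraint: keep only the within-view blocks.
        Psi = np.zeros_like(Psi_tilde)
        start = 0
        for d in dims:
            Psi[start:start + d, start:start + d] = \
                Psi_tilde[start:start + d, start:start + d]
            start += d
        W = W_new
    return W, Psi
```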
Multilingual Factor Analysis
We coin the term Multilingual Factor Analysis for the application of methods based on IBFA and MBFA to model the generation of multilingual tuples from a shared latent space. We motivate our generative process with the compound model for language association presented by (Weinreich, 1953). In this model a lexical meaning entity (a concept) is responsible for associating the corresponding words in the two different languages. We note that the structure in Figure 3 is very similar to our graphical model for IBFA specified in Figure 1. We can interpret our latent variable as the latent lexical concept responsible for associating (generating) the multilingual language pairs. Most theories that explain the interconnections between languages in the bilingual mind assume that "while phonological and morphosyntactic forms differ across languages, meanings and/or concepts are largely, if not completely, shared" (Pavlenko, 2009). This shows that our generative modelling is supported by established models of language interconnectedness in the bilingual mind.
Intuitively, our approach can be summarised as transforming monolingual representations by mapping them to a concept space in which lexical meaning across languages is aligned and then performing retrieval, translation and similarity-based tasks in that aligned concept space.
Comparison to Direct Methods
Methods that learn a direct linear transformation from x to y, such as (Mikolov et al., 2013a;Artetxe et al., 2016;Smith et al., 2017;Lample et al., 2018) could also be interpreted as maximising the conditional likelihood
∏_k p(y^(k) | x^(k)) = ∏_k N(y^(k); W x^(k) + µ, Ψ).

As shown in Appendix F, the maximum likelihood estimate for W does not depend on the noise term Ψ. In addition, even if one were to fit Ψ, it is not clear how to utilise it to make predictions, as the conditional expectation

E_{p(y|x^(k))}[y] = W x^(k) + µ
does not depend on the noise parameters. As this method is therefore not robust to noise, previous work has used extensive regularisation (i.e. by making W orthogonal) to avoid overfitting.
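For reference, a direct map of this kind can be obtained by ordinary least squares, and its orthogonally constrained variant by the Procrustes solution. The sketch below is a generic illustration of these two standard estimators on toy data, not the exact code of any of the cited papers.

```python
import numpy as np

def least_squares_map(X, Y):
    # W = argmin_W ||X W - Y||_F^2, the unregularised direct linear map.
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def procrustes_map(X, Y):
    # Orthogonal W = argmin_{W: W W^T = I} ||X W - Y||_F^2 (Procrustes solution).
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

X = np.random.randn(500, 300)   # source-language embeddings (toy data)
Y = np.random.randn(500, 300)   # target-language embeddings (toy data)
W = procrustes_map(X, Y)
mapped = X @ W                  # mapped source vectors, compared to Y by cosine similarity
```

Note that neither estimator involves a noise parameter, which is exactly the point made above.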
Relation to CCA
CCA is a popular method used for multilingual alignment which is very closely related to IBFA, as detailed in (Bach and Jordan, 2005). (Barber, 2012) shows that CCA can be recovered as a limiting case of IBFA with constrained diagonal covariances Ψ_x = σ_x² I, Ψ_y = σ_y² I, as σ_x², σ_y² → 0. CCA assumes that the emissions from the latent space to the observables are deterministic. This is a strong and unrealistic assumption, given that word embeddings are learned from noisy corpora and stochastic learning algorithms.
Experiments
In this section, we empirically demonstrate the effectiveness of our generative approach on several benchmarks, and compare it with state-of-the-art methods. We first present cross-lingual (word-translation) evaluation tasks to evaluate the quality of our multilingual word embeddings. As a follow-up to the word retrieval task, we also run experiments on cross-lingual sentence retrieval tasks. We further demonstrate the quality of our multilingual word embeddings on monolingual word- and sentence-level similarity tasks from (Faruqui and Dyer, 2014b), which we believe provides empirical evidence that the aligned embeddings preserve and even potentially enhance their monolingual quality.
Word Translation
This task is concerned with the problem of retrieving the translation of a given set of source words. We reproduce results in the same environment as (Lample et al., 2018) 1 for a fair comparison. We perform an ablation study to assess the effectiveness of our method in the Italian to English (it-en) setting in (Smith et al., 2017;Dinu et al., 2014).
In these experiments we are interested in studying the effectiveness of our method compared to that of the Procrustes-based fitting used in (Smith et al., 2017), without any post-processing steps to address the hubness problem (Dinu et al., 2014).
In Table 1 we observe how our model is competitive to the results in (Lample et al., 2018) and outperforms them in most cases. We notice that given an expert dictionary, our method performs the best out of all compared methods on all tasks, except in English to Russian (en-ru) translation where it remains competitive. What is surprising is that, in the semi-supervised setting, IBFA bridges the gap between the method proposed in (Lample et al., 2018) on languages where the dictionary of identical tokens across languages (i.e. the pseudo-dictionary from (Smith et al., 2017)) is richer. However, even though it significantly outperforms SVD using the pseudo-dictionary, it cannot match the performance of the adversarial approach for more distant languages like English and Chinese (en-zh).
Detailed Comparison to Basic SVD
We present a more detailed comparison to the SVD method described in (Smith et al., 2017). We focus on methods in their base form, that is, without post-processing techniques such as cross-domain similarity local scaling (CSLS) (Lample et al., 2018) or the inverted softmax (ISF) (Smith et al., 2017). Note that (Smith et al., 2017) used the scikit-learn implementation of CCA, which uses an iterative estimation of partial least squares; this does not give the same results as the standard CCA procedure. In Tables 2 and 3 we reproduce the results from (Smith et al., 2017) using the dictionaries and embeddings provided by (Dinu et al., 2014), and we compare our method (IBFA) using both the expert dictionaries from (Dinu et al., 2014) and the pseudo-dictionaries as constructed in (Smith et al., 2017). We significantly outperform both SVD and CCA, especially when using the pseudo-dictionaries.
Word Similarity Tasks
This task assesses the monolingual quality of word embeddings. In this experiment, we fit both considered methods (CCA and IBFA) on the entire available dictionary of around 100k word pairs. We compare to CCA as used in (Faruqui and Dyer, 2014b) on standard English word-similarity datasets, including MTurk-287 (Radinsky et al., 2011), MT-771 (Halawi et al., 2012) and MEN-TR (Bruni et al., 2012). These tasks consist of English word pairs that have been assigned ground-truth similarity scores by humans. We use the test-suite provided by (Faruqui and Dyer, 2014a) (https://github.com/mfaruqui/eval-word-vectors) to evaluate our multilingual embeddings on these datasets. This test-suite computes word similarity as cosine similarity in the representation space and reports Spearman correlation with the ground-truth human similarity scores. As shown in Table 4, we observe a performance gain over CCA and over monolingual word embeddings, suggesting that we not only preserve the monolingual quality of the embeddings but also enhance it.
Monolingual Sentence Similarity Tasks
Semantic Textual Similarity (STS) is a standard benchmark used to assess sentence similarity metrics (Agirre et al., 2012, 2013, 2014, 2015, 2016). In this work, we use it to show that our alignment procedure does not degrade the quality of the embeddings at the sentence level. For both IBFA and CCA, we align English and one other language (French, Spanish or German) using the entire dictionaries (of about 100k word pairs each) provided by (Lample et al., 2018). We then use the procedure defined in (Arora et al., 2016) to create sentence embeddings and use cosine similarity between those embeddings as the sentence similarity. The method's performance on each set of embeddings is assessed using Spearman correlation against human-produced expert similarity scores. As evidenced by the results shown in Table 5, IBFA remains competitive using any of the three languages considered, while CCA shows a performance decrease.
Crosslingual Sentence Similarity Tasks
Europarl (Koehn, 2005) is a parallel corpus of sentences taken from the proceedings of the European parliament. In this set of experiments, we focus on its English-Italian (en-it) sub-corpus, in order to compare to previous methods. We report results under the framework of (Lample et al., 2018). That is, we form sentence embeddings using the average of the tf-idf weighted word embeddings in the bag-of-words representation of the sentence. Performance is averaged over 2,000 randomly chosen source sentence queries and 200k target sentences for each language pair. Note that this is a different set up to the one presented in (Smith et al., 2017), in which an unweighted average is used. The results are reported in Table 6. As we can see, IBFA outperforms all prior methods both using nearest neighbour retrieval, where it has a gain of 20 percent absolute on SVD, as well as using the CSLS retrieval metric.
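The retrieval setup can be sketched as follows: each sentence is embedded as the tf-idf-weighted average of its aligned word vectors, and retrieval uses either plain cosine nearest neighbours or the CSLS criterion of (Lample et al., 2018). The snippet is an illustration only; the dictionary-style inputs, the neighbourhood size k=10 and the variable names are assumptions.

```python
import numpy as np

def sentence_embedding(tokens, word_vecs, tfidf):
    # tf-idf weighted average of the word vectors in a bag-of-words sentence.
    vecs = [tfidf[w] * word_vecs[w] for w in tokens if w in word_vecs]
    return np.mean(vecs, axis=0)

def csls_scores(queries, targets, k=10):
    # CSLS(x, y) = 2 cos(x, y) - r(x) - r(y), where r(.) is the mean cosine
    # similarity to the k nearest neighbours in the other language.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    cos = q @ t.T
    r_q = np.sort(cos, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    r_t = np.sort(cos, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return 2 * cos - r_q - r_t    # retrieval takes the argmax over each row
```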
Alignment of three languages
In an ideal scenario, when we have v languages, we would not want to train a transformation between each pair, as that would involve storing O(v²) matrices. One way to overcome this problem is to align all embeddings to a common space. In this exploratory experiment, we constrain ourselves to aligning three languages at the same time, but the same methodology could be applied to an arbitrary number of languages. MBFA, the extension of IBFA described in Section 2.2.1, naturally lends itself to this task. Training this method requires a dictionary of word triples across the three languages considered. We construct such a dictionary by taking the intersection of all 6 pairs of bilingual dictionaries for the three languages provided by (Lample et al., 2018). We then train MBFA for 20,000 iterations of EM (a brief analysis of convergence is provided in Appendix G). Alternatively, with direct methods like (Smith et al., 2017; Lample et al., 2018) one could align all languages to English and treat that as the common space.

We compare both approaches and present their results in Table 7. As we can see, both methods experience a decrease in overall performance when compared to models fitted on just a pair of languages; however, MBFA performs better overall. The direct approaches preserve their performance on translation to and from English, but translation from French to Italian decreases significantly. Meanwhile, MBFA suffers a decrease on each pair of languages but retains competitive performance with the direct methods on English translation. It is worth noting that as the number of aligned languages v increases, there are O(v) pairs of languages involving English and O(v²) pairs in which English does not participate. This suggests that MBFA may generalise better than the direct methods past three simultaneously aligned languages.
Generating Random Word Pairs
We explore the generative process of IBFA by synthesising word pairs from noise, using a trained English-Spanish IBFA model. We follow the generative process specified in Equation 1 to generate 2,000 word vector pairs, find the nearest-neighbour vector in each vocabulary and display the corresponding words. We then rank these 2,000 pairs according to their joint probability under the model and present the top 28 samples in Table 8. Note that whilst the sampled pairs are not exact translations, they have closely related meanings. The examples we found interesting are dreadful and despair; frightening and brutality; crazed and merry; unrealistic and questioning; misguided and conceal; reactionary and conservatism.
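Sampling such pairs from a fitted model follows Equation 1 directly. A rough sketch is shown below; the fitted parameters (W_x, W_y, mu_x, mu_y, Psi_x, Psi_y) and the vocabulary matrices are placeholders for whatever a trained model provides.

```python
import numpy as np

def sample_pair(W_x, W_y, mu_x, mu_y, Psi_x, Psi_y, vocab_x, vocab_y, rng):
    z = rng.standard_normal(W_x.shape[1])                              # z ~ N(0, I)
    x = W_x @ z + mu_x + rng.multivariate_normal(np.zeros(len(mu_x)), Psi_x)
    y = W_y @ z + mu_y + rng.multivariate_normal(np.zeros(len(mu_y)), Psi_y)
    # Map each sampled vector to the nearest word in the respective vocabulary.
    i = int(np.argmin(np.linalg.norm(vocab_x - x, axis=1)))
    j = int(np.argmin(np.linalg.norm(vocab_y - y, axis=1)))
    return i, j
```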
Conclusion
We have introduced a cross-lingual embedding alignment procedure based on a probabilistic latent variable model, that increases performance across various tasks compared to previous methods using both nearest neighbour retrieval, as well as the CSLS criterion. We have shown that the resulting embeddings in this aligned space preserve their quality by presenting results on tasks that assess word and sentence-level monolingual similarity correlation with human scores. The resulting embeddings also significantly increase the precision of sentence retrieval in multilingual settings. Finally, the preliminary results we have shown on aligning more than two languages at the same time provide an exciting path for future research.
References
A Joint Distribution
We show the form of the joint distribution for 2 views. Concatenating our data and parameters as below, we can use Equation (3) of (Ghahramani et al., 1996)
to write

m = [x; y],   W = [W_x; W_y],   Ψ = [Ψ_x, 0; 0, Ψ_y],   µ = [µ_x; µ_y],

p(m, z|θ) = N( [m; z] ; [µ; 0], Σ_{m,z} ),    (5)

Σ_{m,z} = [ W W^T + Ψ,  W;  W^T,  I ].

It is clear that this generalises to any number of views of any dimension, as the concatenation operation does not make any assumptions.

B Projections to Latent Space E_{p(z|x)}[z]

We can query the joint Gaussian in (5) using rules from (Petersen et al., 2008), Sections (8.1.2, 8.1.3), and we get

p(z|x) = N( z; W_x^T Σ_x^{-1} x̃, I - W_x^T Σ_x^{-1} W_x ),
E[z|x] = W_x^T Σ_x^{-1} x̃.
C Derivation for the Marginal Likelihood
We want to compute p(x, y|θ) so that we can then learn the parameters θ = {θ x , θ y }, θ i = {µ i , W i , Ψ i , } by maximising the marginal likelihood as is done in Factor Analysis. From the joint p(m, z|θ), again using rules from (Petersen et al., 2008) Sections (8.1.2) we get
p(m|θ) = p(x, y|θ) = N( [x; y] ; [µ_x; µ_y], W W^T + Ψ ).
For the case of two views, the joint probability can be factored as
p(x, y|θ) = p(x|θ_x) p(y|x, θ),
p(x|θ_x) = N(x; µ_x, Σ_x),
p(y|x, θ) = N(y; W_y W_x^T Σ_x^{-1} x̃ + µ_y, Σ_{y|x}) = N(y; W_y E[z|x] + µ_y, Σ_{y|x}),

where

Σ_x = W_x W_x^T + Ψ_x,
Σ_{y|x} = Σ_y - W_y W_x^T Σ_x^{-1} W_x W_y^T.

D Scaled Reconstruction Errors

log p(x, y|θ) = log p*(x|θ_x) + log p*(y|x, θ) + C,
C = -(1/2) (log |2πΣ_{y|x}| + log |2πΣ_x|),
log p*(y|x, θ) = -(1/2) ||ỹ - W_y E[z|x]||²_{Σ_{y|x}},    (6)
log p*(x|θ_x) = -(1/2) ||x - µ_x||²_{Σ_x} = -(1/2) ||Σ_x^{-1/2} x̃||².

Setting A = Ψ_x Σ_x^{-1} Ψ_x, we can re-parametrise as

log p*(x|θ_x) = -(1/2) ||Ψ_x Σ_x^{-1} x̃||²_A
             = -(1/2) ||(Σ_x - W_x W_x^T) Σ_x^{-1} x̃||²_A
             = -(1/2) ||x̃ - W_x W_x^T Σ_x^{-1} x̃||²_A
             = -(1/2) ||x̃ - W_x E[z|x]||²_A.

E Expectation Maximisation for MBFA

Define

x̃ = [x_1 - µ_1; ...; x_v - µ_v],   W = [W_1; ...; W_v],   Ψ = Bdiag(Ψ_1, ..., Ψ_v).

Hence p(x̃|z; Ψ, W) = N(x̃; W z, Ψ). This follows the same form as regular factor analysis, but with a block-diagonal constraint on Ψ. Thus, by Equations (5) and (6) of (Ghahramani et al., 1996), we apply EM as follows.
E-Step: Compute E[z|x̃] and E[z z^T|x̃] given the parameters θ_t = {W_t, Ψ_t}.
E[z^(i)|x̃^(i)] = B_t x̃^(i),
E[z^(i) z^(i)T | x̃^(i)] = I - B_t W_t + B_t x̃^(i) x̃^(i)T B_t^T = M_t + B_t x̃^(i) x̃^(i)T B_t^T,    (7)

where

M_t = (I + W_t^T Ψ_t^{-1} W_t)^{-1},
B_t = W_t^T (Ψ_t + W_t W_t^T)^{-1} = M_t W_t^T Ψ_t^{-1}.    (8)

Equation 7 is obtained by applying the Woodbury identity, and Equation 8 by applying the closely related push-through identity, as found in Section 3.2 of (Petersen et al., 2008).

M-Step: Update parameters θ_{t+1} = {W_{t+1}, Ψ_{t+1}}. Define

S = (1/m) Σ_{i=1}^m x̃^(i) x̃^(i)T.

By first observing

(1/m) Σ_{i=1}^m x̃^(i) E[z^(i)|x̃^(i)]^T = S B_t^T,
(1/m) Σ_{j=1}^m E[z^(j) z^(j)T | x̃^(j)] = M_t + B_t S B_t^T,
update the parameters as follows.
W_{t+1} = S B_t^T (I - B_t W_t + B_t S B_t^T)^{-1} = S B_t^T (M_t + B_t S B_t^T)^{-1},

Ψ̃_{t+1} = (1/m) Σ_{i=1}^m [ x̃^(i) x̃^(i)T - W_{t+1} E[z^(i)|x̃^(i)] x̃^(i)T ]
        = S - (1/m) Σ_{i=1}^m W_{t+1} B_t x̃^(i) x̃^(i)T
        = S - W_{t+1} B_t S
        = S - S B_t^T W_{t+1}^T.
Imposing the block diagonal constraint,
Ψ_{t+1} = Bdiag( (Ψ̃_{t+1})_{11}, ..., (Ψ̃_{t+1})_{vv} ),   where (Ψ)_{ii} = Ψ_i.
F Independence to Noise in Direct Methods
We are maximising the following quantity with respect to θ = {W , µ, Ψ}
p(Y|X, θ) = ∏_i p(y^(i)|x^(i), θ) = ∏_i N(y^(i); W x^(i) + µ, Ψ),

log p(Y|X, θ) = -(1/2) Σ_i ||y^(i) - W x^(i)||²_Ψ - C,

∂ log p(Y|X, θ) / ∂W ∝ Σ_i Ψ^{-1} (y^(i) - W x^(i)) x^(i)T ∝ Ψ^{-1} [ Σ_i y^(i) x^(i)T - W Σ_i x^(i) x^(i)T ].
The maximum likelihood is achieved when ∂ log p(Y|X, θ) / ∂W = 0, and since Ψ^{-1} has an inverse (namely Ψ), this means that
W Σ_i x^(i) x^(i)T = Σ_i y^(i) x^(i)T.
It is clear from here that the MLE of W does not depend on Ψ; thus we can conclude that adding a noise parameter to this directed linear model has no effect on its predictions.

Figure 4: Training curve of the EM algorithm over the first 5,000 iterations. It is clear that the procedure quickly finds a good approximation to the optimal parameters and then slowly converges to the real optimum. Top picture shows the entire training curve, while the bottom picture starts from iteration 100.
G Learning curve of EM

Figure 4 shows the negative log-likelihood of the three-language model over the first 5,000 iterations. The precision of the learned model is very close when evaluated at iteration 1,000 and at iteration 20,000, as seen in Table 9. This suggests that the model need not be trained to full convergence to work well.
Figure 1: Graphical model for alignment. Latent space z represents the aligned shared space between the two vector spaces x and y.
Figure 3: Weinreich's compound model for lexical association between English and Russian. Image from (Neuser, 2017).
Table 1: Precision @1 for cross-lingual word similarity tasks. Rows labelled AdvR are copies of Adversarial - Refine rows in (Lample et al., 2018). Results marked with a * differ from the ones shown in (Lample et al., 2018) due to pre-processing done on their part. SVD and IBFA in the semi-supervised setting use the pseudo-dictionary, while AdvR uses frequency information.

Method     | en-es | es-en | en-fr | fr-en | en-de | de-en | en-ru | ru-en | en-zh | zh-en
Supervised
SVD        | 77.4  | 77.3  | 74.9  | 76.1  | 68.4  | 67.7  | 47.0  | 58.2  | 27.3* | 09.3*
IBFA       | 79.5  | 81.5  | 77.3  | 79.5  | 70.7  | 72.1  | 46.7  | 61.3  | 42.9  | 36.9
SVD+CSLS   | 81.4  | 82.9  | 81.1  | 82.4  | 73.5  | 72.4  | 51.7  | 63.7  | 32.5* | 25.1*
IBFA+CSLS  | 81.7  | 84.1  | 81.9  | 83.4  | 74.1  | 75.7  | 50.5  | 66.3  | 48.4  | 41.7
Semi-supervised
SVD        | 65.9  | 74.1  | 71.0  | 72.7  | 60.3  | 65.3  | 11.4  | 37.7  | 06.8  | 00.8
IBFA       | 76.1  | 80.1  | 77.1  | 78.9  | 66.8  | 71.8  | 23.1  | 39.9  | 17.1  | 24.0
AdvR       | 79.1  | 78.1  | 78.1  | 78.2  | 71.3  | 69.6  | 37.3  | 54.3  | 30.9  | 21.9
SVD+CSLS   | 73.0  | 80.7  | 75.7  | 79.6  | 65.3  | 70.8  | 20.9  | 41.5  | 10.5  | 01.7
IBFA+CSLS  | 76.5  | 83.7  | 78.6  | 82.3  | 68.7  | 73.7  | 25.3  | 46.3  | 22.1  | 27.2
AdvR+CSLS  | 81.7  | 83.3  | 82.3  | 82.1  | 74.0  | 72.2  | 44.0  | 59.1  | 32.5  | 31.4
Table 2: Comparisons without post-processing of methods. Results reproduced from (Smith et al., 2017) for fair comparison.
Table 3: Comparisons without post-processing of methods in Table 2, using the pseudo-dictionary from (Smith et al., 2017).
Table 4: Spearman correlation for English word similarity tasks. First row represents monolingual fastText vectors (Joulin et al., 2017) in English, the rest are bilingual embeddings.

Table 5: Spearman correlation for Semantic Textual Similarity (STS) tasks in English. All results use the sentence embeddings described in (Arora et al., 2016). First row represents monolingual fastText vectors (Joulin et al., 2017) in English, the rest are bilingual embeddings. *STS13 excludes the proprietary SMT dataset.

Embeddings  | STS12 | STS13* | STS14 | STS15 | STS16
English     | 58.1  | 69.2   | 66.7  | 72.6  | 70.6
IBFS en-de  | 58.1  | 70.2   | 66.8  | 73.0  | 71.6
IBFS en-fr  | 58.0  | 70.0   | 66.7  | 72.8  | 71.4
IBFS en-es  | 57.9  | 69.7   | 66.6  | 72.9  | 71.7
CCA en-de   | 56.7  | 67.5   | 65.7  | 73.1  | 70.5
CCA en-fr   | 56.7  | 67.9   | 65.9  | 72.8  | 70.8
CCA en-es   | 56.6  | 67.8   | 65.9  | 72.9  | 70.8
Table 6: Sentence translation precision @1 on 2,000 English-Italian pairs sampled from a set of 200k sentences from Europarl (Koehn, 2005) on Dinu embeddings. AdvR is copied from Adversarial - Refined in (Lample et al., 2018). Some rows are copied from (Smith et al., 2017).

Method         | English to Italian (@1 @5 @10) | Italian to English (@1 @5 @10)
Mikolov et al. | 10.5 18.7 22.8 | 12.0 22.1 26.7
Dinu et al.    | 45.3 72.4 80.7 | 48.9 71.3 78.3
Smith et al.   | 54.6 72.7 78.2 | 42.9 62.2 69.2
SVD            | 40.5 52.6 56.9 | 51.2 63.7 67.9
IBFA (Ours)    | 62.7 74.2 77.9 | 64.1 75.2 79.5
SVD + CSLS     | 64.0 75.8 78.5 | 67.9 79.4 82.8
AdvR + CSLS    | 66.2 80.4 83.4 | 58.7 76.5 80.9
IBFA + CSLS    | 68.8 80.7 83.5 | 70.2 80.8 84.8
Table 7: Precision @1 when aligning English, French and Italian embeddings to English.
Table 8: Random pairs sampled from the model, selected top 28 ranked by confidence. Proper nouns and acronyms (names and surnames) were removed from the list. Third column represents a correct translation from Spanish to English.

Hannah Neuser. 2017. Source Language of Lexical Transfer in Multilingual Learners: A Mixed Methods Approach. Ph.D. thesis, Department of English, Stockholm University.
Aneta Pavlenko. 2009. Conceptual representation in the bilingual lexicon and second language vocabulary learning. The Bilingual Mental Lexicon: Interdisciplinary Approaches, pages 125-160.
Karl Pearson. 1901. LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Kaare Brandt Petersen, Michael Syskind Pedersen, et al. 2008. The Matrix Cookbook. Technical University of Denmark, 7(15):510.
Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web, pages 337-346. ACM.
Table 9: Precision @1 between MBFA fitted for 1K iterations and MBFA fitted for 20K iterations.
https://github.com/barnerwothers/MUSE based on https://github.com/facebookresearch/MUSE.
A commonly used Python library for scientific computing, found at (Pedregosa et al., 2011). 3 http://clic.cimec.unitn.it/ georgiana.dinu/down/
| [
"https://github.com/mfaruqui/eval-word-vectors",
"https://github.com/barnerwothers/MUSE",
"https://github.com/facebookresearch/MUSE."
] |
[
"Latent Topology Induction for Understanding Contextualized Representations",
"Latent Topology Induction for Understanding Contextualized Representations"
] | [
"Yao Fu yao.fu@ed.ac.uk \nInstitute for Language\nCognition and Computation\nUniversity of Edinburgh\n\n",
"Mirella Lapata \nInstitute for Language\nCognition and Computation\nUniversity of Edinburgh\n\n"
] | [
"Institute for Language\nCognition and Computation\nUniversity of Edinburgh\n",
"Institute for Language\nCognition and Computation\nUniversity of Edinburgh\n"
] | [] | In this work, we study the representation space of contextualized embeddings and gain insight into the hidden topology of large language models. We show there exists a network of latent states that summarize linguistic properties of contextualized representations. Instead of seeking alignments to existing welldefined annotations, we infer this latent network in a fully unsupervised way using a structured variational autoencoder. The induced states not only serve as anchors that mark the topology (neighbors and connectivity) of the representation manifold but also reveals the internal mechanism of encoding sentences. With the induced network, we: (1). decompose the representation space into a spectrum of latent states which encode fine-grained word meanings with lexical, morphological, syntactic and semantic information; (2). show state-state transitions encode rich phrase constructions and serve as the backbones of the latent space. Putting the two together, we show that sentences are represented as a traversal over the latent network where state-state transition chains encode syntactic templates and stateword emissions fill in the content. We demonstrate these insights with extensive experiments and visualizations.Preprint. Under review. | 10.48550/arxiv.2206.01512 | [
"https://arxiv.org/pdf/2206.01512v1.pdf"
] | 249,375,270 | 2206.01512 | 9171b6fb398dedc887faabb3843155f42061cf7b |
Latent Topology Induction for Understanding Contextualized Representations
Yao Fu yao.fu@ed.ac.uk
Institute for Language
Cognition and Computation
University of Edinburgh
Mirella Lapata
Institute for Language
Cognition and Computation
University of Edinburgh
Latent Topology Induction for Understanding Contextualized Representations
In this work, we study the representation space of contextualized embeddings and gain insight into the hidden topology of large language models. We show there exists a network of latent states that summarize linguistic properties of contextualized representations. Instead of seeking alignments to existing well-defined annotations, we infer this latent network in a fully unsupervised way using a structured variational autoencoder. The induced states not only serve as anchors that mark the topology (neighbors and connectivity) of the representation manifold but also reveal the internal mechanism of encoding sentences. With the induced network, we: (1). decompose the representation space into a spectrum of latent states which encode fine-grained word meanings with lexical, morphological, syntactic and semantic information; (2). show state-state transitions encode rich phrase constructions and serve as the backbones of the latent space. Putting the two together, we show that sentences are represented as a traversal over the latent network where state-state transition chains encode syntactic templates and state-word emissions fill in the content. We demonstrate these insights with extensive experiments and visualizations. Preprint. Under review.
Introduction
Recently, there has been considerable interest in analyzing pretrained language models (PLMs) [24,13,12,3,4,18] due to their huge success. This work aims to investigate the intrinsic geometric and topological properties of contextualized representations. We study how words within sentences, as sequences of vectors, are positioned within the representation space. We hypothesize that there exists a spectrum of latent anchor embeddings (or landmarks) that describe the manifold topology. As a quick first impression, Figure 1 shows the latent states that we will discover in the following sections. Since such structure cannot be straightforwardly observed, we infer the topology as latent variables.
Our approach does not follow the majority of previous literature on probing which usually defines a probe as a classifier supervised by existing annotations like part-of-speech [20], dependency trees [13], CCG supertags [18] and others [26,4]. We do not assume interpretations of the latent topology will be strictly aligned with well-defined linguistic annotations, as they simply do not have to. The dilemma of supervised probing, as pointed out by much previous work [12,11,3], is that it is hard to differentiate whether the discovered linguistic properties are intrinsically within the model, or imposed by the supervision signal. In this work, to maximally reduce external bias, we infer the latent states in a fully unsupervised way. As long as the inferred states are intrinsically meaningful (see Fig. 1 for example states), it does not matter whether they align with well-defined annotations or not.
We use a structured variational autoencoder (VAE) [7] to infer the latent topology, as VAEs are common and intuitive models for learning latent variables. We focus on the manifold in which contextualized embeddings lie (e.g., the last layer outputs of a fixed, not fine-tuned, BERT [6]). We hypothesize there exists a wide spectrum of static latent states within this manifold and assume two basic generative properties of the states: (1). a state should summarize the meaning of its corresponding words and contexts; (2). transitions between different states should encode sentence structure. We model these two properties as emission and transition potentials of a CRF [25] inference network. Since a VAE is a generative model trained by reconstructing sentences, essentially we learn states that are informative enough to generate words and sentences.
Figure 1: We discover a spectrum of latent states with lexical, morphological, syntactic and semantic interpretations. The states also summarize the topological structure of the representation space of language models. See §5 and §6 for how these states are discovered.
The first part of our experiments shows how states encode linguistic properties of words (§5). We show that states summarize rich linguistic phenomena ranging from lexicon and morphology to syntax and semantics. We highlight two intriguing effects of contextualization: (1). before contextualization, function words (e.g., to, for, and, or, etc.) are concentrated around a few top states; contextualization spreads function words over the whole space. (2). before contextualization, certain tokens do not form meaningful states; contextualization makes them "receive" meaning from their neighbors.
The second part of our experiments shows that transitions between states form the topological backbone of the representation space and provide further grounding for sentence construction (§6). We differentiate two types of states within the space: states encoding function words and states encoding content words. We show that function states serve as "hubs" in state transitions and attract content words of similar meanings to be close. State transitions then encode rich types of phrase constructions. Finally, putting everything together, our most important discovery is a step-by-step latent mechanism of how sentences are represented as a traversal over the discovered topology (§6.2).
Background
Supervised Probing Collectively known as "Bertology" [13,24], the goal of probing is to discover what is intrinsically encoded within large language models. The mainstream approach is to construct a supervised weak classifier (a.k.a. a probe) and fine-tune it with classical linguistic annotations, like part-of-speech tagging [18,14], edge detection [27,26], parsing [13,12,23,4] and sentiment classification [3,29]. The problem here is that it is difficult to tell if the discovered properties are intrinsic to the embedding or imposed by the supervision signal [12,3,11]. Since our method is fully unsupervised, our results are more intrinsic w.r.t. the model.
Unsupervised Methods
To bypass issues with supervised probing, some unsupervised work proposes to extract syntactic [15], geometric [2], cluster-based [5], and other information from PLMs [29,22]. Our focus here is the network topology within the representation space, which has not yet been thoroughly studied. Amongst the large volume of Bertology research, the closest unsupervised works to ours are: Dalvi et al. [5], who use clustering to discover latent concepts within BERT, and we will later use their results as a comparison to our discoveries; Michael et al. [22], who discover a latent ontology in an unsupervised way; and Cai et al. [2], who study the geometric properties of BERT with a focus on isotropy. There is also a supervised method for extracting static embeddings from contextualized embeddings [10]. These works more or less involve cluster structures within BERT. Our major difference is that we take an important further step from word clusters to state-state transitions and show how traversal over states leads to sentence constructions. In the latent variable literature, our inference model uses a classical CRF-VAE formulation [8,21]. Existing work uses this formulation for other tasks like structured prediction [1] or sentence generation [17], while we discover latent structures within PLMs.
Method
Latent States within Representation Space Given a sentence x = [x_1, ..., x_T], we denote its contextualized representations as [r_1, ..., r_T] = PLM(x), where PLM(·) denotes a pretrained encoder (here we use BERT; our method is applicable to any PLM). The representations r for all sentences lie in one manifold M, namely the representation space of the language model. We hypothesize there exists a set of N static latent states s_1, ..., s_N that function as anchors and outline the space topology (recall Fig. 1). We emphasize that all parameters of the PLM are fixed (i.e., no fine-tuning), so all learned states are intrinsically within M. We focus on two topological relations: (1). state-word relations, which represent how word embeddings may be summarized by their states and how states can be explained by their corresponding words; (2). state-state relations, which capture how states interact with each other and how their transitions denote meaningful word combinations. Taken together, these two relations form a latent network within M (Fig. 1 and later Fig. 3).
Modeling For state-word relations, we associate each word embedding r_t with a latent state indexed by z_t ∈ {1, ..., N}. We use an emission potential φ(x_t, z_t) to model how likely z_t is to summarize x_t. The corresponding embedding of z_t is then s_{z_t}. For state-state relations, we assume a transition matrix Φ(z_{t-1}, z_t) modeling the affinity of how likely state z_{t-1} is to transition to state z_t. Together, φ(x_t, z_t) and Φ(z_{t-1}, z_t) form the potentials of a linear-chain CRF:
$\log \phi(x_t, z_t) = r_t^\top s_{z_t}, \qquad \log \Phi(z_{t-1}, z_t) = s_{z_{t-1}}^\top s_{z_t}$   (1)
where the vector dot product follows the common practice of fine-tuning contextualized representations. The probability of a state sequence given a sentence is:
$q_\psi(z|x) = \frac{1}{Z} \prod_{t=1}^{T} \Phi(z_{t-1}, z_t)\, \phi(x_t, z_t)$   (2)
where Z is the partition function. Note that only the state embeddings ψ = [s_1, ..., s_N] are learnable parameters of the inference model q_ψ. To infer s, we use a common CRF-VAE architecture shared by previous work [1,17,9] and add a generative model on top of the encoder:
$p_\theta(x, z) = \prod_t p(x_t \mid z_{1:t}) \cdot p(z_t \mid z_{1:t-1}), \qquad h_t = \mathrm{Dec}(s_{z_{t-1}}, h_{t-1})$   (3)
$p(z_t \mid z_{1:t-1}) = \mathrm{softmax}(\mathrm{FF}([s_{z_t}; h_t])), \qquad p(x_t \mid z_{1:t}) = \mathrm{softmax}(\mathrm{FF}(h_t))$   (4)
where θ denotes the decoder parameters, Dec(·) denotes the decoder (we use an LSTM), h_t denotes the decoder states, and FF(·) denotes a feed-forward network. We optimize the β-ELBO objective:
$\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{q_\psi(z|x)}[\log p_\theta(x, z)] - \beta H(q_\psi(z|x))$   (5)
Table 2: Human evaluation (averaged over 3 annotators) of states not aligned with existing annotations. We note that most of them are still meaningful. BERTZERO covers more lexical information while BERTLAST covers more semantic information. See Table 3 for examples.
Table 3: Examples of states that are not aligned to existing annotations. N.I. means not interpretable. Subscripts denote occurrence counts (e.g., "win69" means "win" occurs 69 times within the latent state).
Category | Example states from BERTZERO | Explanation
LEX | win69 | won26 | wins19 | winning18 | winbench6 | winword4 | Most words stem from "win"
LEX | utexas11 | ah10 | umich9 | umd9 | uh9 | udel9 | um8 | umn7 | Most words start with "u"
SYN | against141 | near35 | among34 | towards27 | toward24 | unto11 | All are prepositions
SYN | me398 | them54 | him50 | person28 | Most are pronouns
SEM | information162 | say147 | said91 | saying55 | says33 | statement31 | About communication
SEM | buy78 | sell66 | bought43 | cheap37 | sold36 | market30 | expensive21 | About business
N.I. | course37 | however26 | way25 | know14 | said12 | yes12 | Intuitively not very related
N.I. | sort56 | definition19 | kinds16 | guilty11 | types11 | symptoms9 | Intuitively not very related
We further note that the decoder's goal is to help induce the latent states, rather than to be a powerful sentence generator. After training, we simply drop the decoder and only look at the inferred states. Maximizing p(x_t|z_{1:t}) encourages z_{1:t} (and thus their embeddings s_z) to reconstruct the sentence, and p(z_t|z_{1:t-1}) encourages the previous z_{1:t-1} to be predictive of the next z_t (so we can learn transitions). Essentially, this formulation tries to find "generative" states s within the representation space M that are able to predict sentences and their next states.
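To make the inference network concrete, the following is a minimal sketch (not the authors' code) of the potentials in Eq. 1 and the exact linear-chain CRF normalizer needed for Eq. 2. It assumes frozen contextualized embeddings r of shape (T, d) and learnable state embeddings S of shape (N, d); all names are illustrative assumptions.

```python
import torch

def crf_log_potentials(r, S):
    """Log-potentials of Eq. 1.
    r: (T, d) frozen contextualized embeddings of one sentence.
    S: (N, d) learnable latent state embeddings.
    Returns emission (T, N) and transition (N, N) log-potentials."""
    emit = r @ S.T        # log phi(x_t, z_t) = r_t . s_{z_t}
    trans = S @ S.T       # log Phi(z_{t-1}, z_t) = s_{z_{t-1}} . s_{z_t}
    return emit, trans

def crf_log_partition(emit, trans):
    """Forward algorithm: log Z of the linear-chain CRF in Eq. 2."""
    alpha = emit[0]                                          # (N,)
    for t in range(1, emit.shape[0]):
        # log-sum-exp over the previous state for every current state
        alpha = torch.logsumexp(alpha.unsqueeze(1) + trans, dim=0) + emit[t]
    return torch.logsumexp(alpha, dim=0)

# toy usage: in the paper d = 768, N = 2000, and r comes from a frozen BERT
T, N, d = 5, 50, 16
r = torch.randn(T, d)
S = torch.nn.Parameter(0.02 * torch.randn(N, d))
emit, trans = crf_log_potentials(r, S)
log_Z = crf_log_partition(emit, trans)
```

The sketch only illustrates the potentials and an exact forward pass; in the paper, exact marginalization and entropy are replaced by the memory-efficient approximations of Fu et al. [9] (see §4).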
Experimental Setting
Dataset, Model Parameters, and Learning We perform experiments on the 20News dataset [16], a common dataset for latent variable modeling (initially for topic modeling), as our testbed. We primarily focus on the last layer output of a BERT-base-uncased 1 model (BERTLAST), yet our method is applicable to models of any larger size, as well as GPT-style or encoder-decoder-style models. In terms of model parameters, the dimension of the states is the same as that of the contextualized embeddings, which is 768. We use a light parameterization of the decoder and set its hidden state dimension to 200. Again, the purpose of the decoder is to help induce the states, rather than to be a powerful sentence generator. We set the number of latent states N to 2000. Recall that the vocabulary size of uncased BERT is 30522, which means that, if uniformly distributed, each state approximately corresponds to 15 words, serving as a type of "meta" word. We further note that setting N = 2000 is somewhat of a sweet spot according to our initial experiments: a larger N (say 10K) is too fine-grained and under-clusters (words of similar linguistic roles are divided into different states) while a smaller N (say 100) over-clusters (words of different roles are gathered into the same cluster). Gradient-based learning of the CRF inference model is challenging due to the intermediate discrete structures, so we use the approximate gradient and entropy from Fu et al. [9], which enables memory-efficient differentiable training of our model. During training, we tune the β parameter in Eq. 5 to prevent posterior collapse, which is a standard training technique for VAEs. All experiments are performed on Nvidia 2080Ti GPUs.
Baseline Embeddings We compare against: (1). POSITIONAL, states induced from positional embeddings. As there is no content information, we expect the induced structures to be very poor. (2). RANDEMB, fixed random state embeddings sampled from a Gaussian distribution sharing the same mean and variance as the BERTLAST embeddings. (3). BERTZERO, static word embeddings from the zeroth layer of BERT. We are particularly interested in the comparison between BERTZERO and BERTLAST, as the differences can shed light on what happens after contextualization.
State-Word Topology
Our study of state-word relations has two steps. In §5.1, we decompose the interpretation of the inferred states. They may or may not align with existing annotations. We first try to align the states, then use human evaluation to show that even when states are not aligned, they still encode lexical/syntactic/semantic information. In §5.2, we focus on the effect of contextualization, and show how tokens "not understood" in BERTZERO become "understood" in BERTLAST after contextualization. We discuss the topology of state-state transitions in §6.
Decomposing the Meaning of Inferred States
We align the inferred states with: (1). POS, part of speech tags; (2). ENT, named entities; (3). DEP, labels of dependency edges linking a word to its dependency head; (4). CCG, CCG supertags which contain rich syntax tree information and are usually referred to as "almost parsing" [18]; (5). BCN, BERT Concept Net from Dalvi et al. [5], a hierarchy of concepts induced from BERT, mostly about semantics and similar to a topic model. We obtain the POS, ENT, DEP annotations on our 20News dataset using a pretrained parser. 2 We obtain the CCG 3 and BCN 4 annotations from their own websites. After training, we use Viterbi decoding to decode states, and each state may correspond to multiple copies of the same word from different contexts. We say a latent state aligns with a predefined tag if 90% of the word occurrences corresponding to this state also correspond to a single tag. For example, suppose state-0 occurs 100 times in the full validation set, 90 times of which correspond to either "happy" or "sad" and 10 times correspond to other random words. In this case, "happy" and "sad" take the dominant portion of state-0 (90 out of 100), and both are adjectives, so we say state-0 aligns with the POS tag adjective. We set the threshold to 90% because it is intuitively high enough. We select the model that has the largest number of aligned tags to the union of the five types of annotations during validation. Note that this model selection strategy introduces a slight bias, yet such bias is much weaker than tuning five separate supervised probes. We report all our results on the validation dataset. Table 1 shows the alignment results. First, the results from RANDEMB should be viewed as the intrinsic bias from the modeling power of the LSTM decoder, since it reconstructs the sentence even with random state embeddings. This should not be surprising because previous work also reports neural networks' ability to fit random data [19,30]. We then observe more aligned states inferred from BERTLAST, but they cover fewer word occurrences than BERTZERO. For states not aligned with existing annotations, we ask human annotators (three graduate students with 100+ TOEFL test scores) to annotate whether they are: (1). LEX: words that are textually similar; (2). SYN: words that share similar morphological-syntactic rules; (3). SEM: words with related meaning; (4). N.I.: Not Interpretable. Example word clusters under this definition are shown in Table 3. The results are shown in Table 2. Generally, BERTZERO covers more lexical information while BERTLAST covers more semantics.
Figure 2: Mechanism of contextualization. A: States inferred from BERTZERO. B: States inferred from BERTLAST. Each bar represents a latent state with height equal to the log frequency. Orange/blue represents the portion of function/content words corresponding to a state. In BERTZERO, most function words are concentrated around head states. After contextualization (BERTLAST), function words mix with content words and spread over all states.
Table 4 contents (see the caption later in the document; paired clusters before and after contextualization):
Symbol: $2851 | size56 | type49 | numbers38 | number35 $248 | money76 | cost64 | pay54 | love42 | worth41 @13522 | same2110 | ordinary46 | average7 @1184 | com1146 | org232 | address91 | list75
Prefix: re5635 | pre559 | mis481 | co258 | pr22 old657 | after42 | recently42 | years36 | pre36 un1922 | per871 | di468 | multi237 | #con159 un1524 | in562 | im275 | mis162 | con155 | um148
Suffix: #ing1508 | #ting108 | #ley56 | #light36 #ing1563 | running226 | processing118 | writing98 #ly1722 | dear59 | thy36 | #more15 | #rous9 #ly983 | actually645 | exactly325 | simply282 #eg404 | #ed385 | #ve189 | #ize183 | #ig164 | #d1012 | had542 | #ed416 | did320 | used258 #s348 | #l102 | #t98 | #p85 | #m64 | #u62 #s1839 | files225 | books169 | machines123 #s335 | s333 | #t134 | #p120 | #u117 | it98 people4481 | #s682 | those361 | users210 | folks193
Lexicon: decided250 | decision211 | decide189 | determine102 | determined99 | decisions82 decision220 | position206 | choose155 | command147 | actions106 | decide94 be7125 | been1591 | being257 | gone1 be7192 | are3099 | am1139 | is165 | become52
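To make the 90% alignment criterion described earlier in this section concrete, here is a small illustrative sketch (not the authors' implementation) of how aligned states could be computed from Viterbi-decoded word occurrences paired with their tags; names are assumptions.

```python
from collections import Counter, defaultdict

def aligned_states(occurrences, threshold=0.90):
    """occurrences: iterable of (state_id, tag) pairs, one per word occurrence
    in the decoded validation set (tag = e.g. the POS tag of that occurrence).
    A state counts as aligned if >= threshold of its occurrences carry one tag."""
    per_state = defaultdict(Counter)
    for state, tag in occurrences:
        per_state[state][tag] += 1
    aligned = {}
    for state, tag_counts in per_state.items():
        tag, freq = tag_counts.most_common(1)[0]
        if freq / sum(tag_counts.values()) >= threshold:
            aligned[state] = tag
    return aligned

# e.g. a state seen 100 times, 90 of them tagged ADJ, aligns with ADJ
print(aligned_states([(0, "ADJ")] * 90 + [(0, "NOUN")] * 10))  # {0: 'ADJ'}
```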
The results so far (Tables 1 to 3) confirm our hypothesis that latent states indeed exist. The results also support our claim that the linguistic meanings of states do not necessarily align with well-defined annotations (even though we have selected the most aligned checkpoint over different hyperparameters and random seeds). Note that non-aligned states account for a fair portion in both BERTZERO and BERTLAST (about 27% and 40% coverage respectively); nonetheless, the annotators consider most of them still meaningful (recall the examples in Table 3), as only about 10+% of the non-aligned states are not interpretable to the annotators. These results highlight the difficulties faced by mainstream supervised probing with respect to the use of external supervision [3,11,12,29] and endorse the application of unsupervised methods.
The Mechanism of Contextualization
Now we study the mechanism of contextualization by taking a closer look at what is encoded in BERTLAST but not in BERTZERO. To this end, we differentiate two types of words: (1). function words (e.g., prepositions, conjunctions, determiners, punctuation, etc.), whose main role is to help sentence construction but which do not have concrete meanings on their own; (2). content words (e.g., nouns, adjectives, verbs, adverbs), which have concrete meanings. It turns out that contextualization results in very different behavior in the encoding of these two types. Figure 2 shows how function/content words are encoded before/after contextualization. We see two effects of contextualization: (1) before contextualization, most function words are concentrated around a few head states; after contextualization, these function words spread over the full distribution, not just head states. This shows that the meaning of function words is distributed from head states to all states according to their context. (2). before contextualization, most states are either function-only or content-only (as most bars are either orange-only or blue-only); after contextualization, most states contain both function and content words (as most bars have both blue and orange portions). This shows that the meaning of function words is entangled together with their neighboring content words. Intuitively, contextualization helps function words "receive" meaning from their context.
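The per-state quantities plotted in Fig. 2 (state frequency and the function-word portion of each state) can be computed from the decoded occurrences. The sketch below is illustrative, not the authors' code, and assumes the NLTK stopword list of Appendix A.2 as the function-word set (only a few entries are shown here).

```python
from collections import Counter

# a small subset of the NLTK stopword list used as function words (Appendix A.2)
FUNCTION_WORDS = {"the", "a", "an", "to", "of", "and", "or", "in", "for", "is"}

def state_profiles(decoded):
    """decoded: list of (state_id, word) pairs from the decoded corpus.
    Returns {state: (frequency, function-word fraction)}, i.e. the quantities
    behind the bar heights and the orange/blue split per state in Fig. 2."""
    total, func = Counter(), Counter()
    for state, word in decoded:
        total[state] += 1
        if word.lower() in FUNCTION_WORDS:
            func[state] += 1
    return {s: (total[s], func[s] / total[s]) for s in total}
```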
We now revisit Fig. 1, which we briefly mentioned in §1. Figure 1 is produced by running t-SNE [28] jointly over the states and embeddings from BERTLAST and illustrates the local topology (because t-SNE preserves more local information) of the representation space. Blue/white circles in Fig. 1 correspond to blue/orange bars in Fig. 2, and circle size corresponds to bar height. It directly shows how states spread over and "receive" meaning from their neighboring word embeddings and encode different types of word clusters.
We further highlight certain example clusters before/after contextualization in Table 4. Before contextualization, we see that (1). the symbol $ does not have meaningful neighbors; (2). the suffixes #ing and #ed are just ordinary subwords; (3). the word be's neighbors are its morphological variants. After contextualization, we see that (1). the symbol $ encodes money; (2). the suffixes #ing and #ed encode tense; (3). be's neighbors become linking verbs. Contextualization makes these tokens "receive" meaning from their contexts.
State-State Topology
Now we study the mechanism of contextualization at the phrase and sentence level. We first visualize the transition network in §6.1 to see the backbone of the space topology. Then we show how sentences are constructed as traversals over the discovered latent state network (§6.2).
Table 5 contents (transition, example bigrams, explanation; see the caption later in the document):
to-buy21 | to-sell13 | to-build5 | to-purchase5 | to-create4 | to-produce3 to do sth.
1671-1441 free-to14 | willing-to9 | hard-to6 | easy-to4 | happy-to3 | safe-to2 adjective + to
785-7 go-to19 | going-to10 | went-to5 | trip-to4 | moved-to3 | come-to2 movement + to
785-565 come-out9 | come-up6 | went-up4 | go-down3 | went-out3 | went-back1 move + direction
caused-by9 | #ed-by8 | #ted-by5 | produced-by4 | made-by4 | driven-by3 passive voice
303-426 is-made2 | be-converted2 | been-formed2 | are-formed2 | been-developed1 passive voice
Content word + content word
1537-1537 ms-windows13 | source-code5 | windows-program3 | operating-system3 computers
355-1201 three-years7 | five-years3 | 24-hours3 | ten-years2 | 21-days2 time
593-964 image-processing2 | meter-reading1 | missile-spotting1 | speed-scanning1 v.ing as noun
Overall State Transition Topology
We visualize the induced state-state network in Fig. 3 using t-SNE again (this time without word embeddings). Blue circles represent states with more content words while white circles represent states with more function words. Circle size represents state frequency. To see how states transition to each other, we compute the state transition statistics from the state sequences decoded from the validation dataset. The transition histogram is also shown in Fig. 4. We use blue edges to denote frequent (stronger) transitions and yellow edges to denote less frequent (weaker) transitions. Table 5 shows example transitions and their corresponding word bigram occurrences. In Fig. 3A we see: (1). both nodes and edges follow a long-tail distribution: a few frequent nodes/edges take the head portion of the distribution, and many infrequent nodes/edges take the tail portion. Note that the yellow background in Fig. 3A consists of many weak edges. (2). frequent states are more inter-connected and tail states are more spread out. Fig. 3B zooms in on the top states, and we see that function states usually serve as the hubs of the edges. This is also evidenced in Table 5, as many different content words transition to the function word to (e.g., free-to, willing-to, go-to, went-to), and the function word "to" can transition to other content words (e.g., to-buy, to-sell, to-build). Here the state encoding to is a hub connecting other states and words. Figure 4 shows the transition distribution. The bars here correspond to edges in Fig. 3. Color denotes the portion of function/content words (node color in Fig. 3) and height corresponds to edge color in Fig. 3. We observe: (1). transitions are usually mixtures of function and content states. (2). contextualization makes function words less concentrated around head transitions (in Fig. 4 left, BERTLAST has a smaller orange portion than BERTZERO), and more spread within the distribution (in Fig. 4 right, BERTLAST has a longer tail of orange bars than BERTZERO).
Figure 5: A four-step illustration of the mechanism by which sentences are represented as a traversal over the latent state network. Numbers denote latent state indices. Structurally similar sentences share overlapping paths of latent states.
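The transition statistics behind Fig. 3, Fig. 4 and Table 5 can be gathered directly from the decoded state sequences. A small illustrative sketch (not the authors' code; input format is an assumption):

```python
from collections import Counter, defaultdict

def transition_statistics(decoded_sentences):
    """decoded_sentences: one list of (state_id, word) pairs per sentence,
    obtained by Viterbi decoding the validation set.
    Returns state-state transition counts (edges of Fig. 3 / bars of Fig. 4)
    and, per transition, the word bigrams it covers (rows of Table 5)."""
    edge_counts = Counter()
    bigrams = defaultdict(Counter)
    for sent in decoded_sentences:
        for (s_prev, w_prev), (s_cur, w_cur) in zip(sent, sent[1:]):
            edge_counts[(s_prev, s_cur)] += 1
            bigrams[(s_prev, s_cur)][(w_prev, w_cur)] += 1
    return edge_counts, bigrams
```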
Sentence Encoding as Traversals over the Latent Network
Now we can finally put everything together and reach the most important discovery of this paper: the latent mechanism of sentence representation within the topology. This mechanism consists of four steps and is illustrated in Fig. 5.
Step 1: there exist function states that correspond to specific function words (as is evidenced in Fig. 2).
Step 2: there exist content states that correspond to content words with similar lexical/ syntactic/ semantic meaning (as is evidenced in Table 1 to 3, Fig. 1).
Step 3: transitions between function and content states correspond to meaningful phrase constructions (as is evidenced in Fig. 3 and 4, Table 5).
Step 4: a traversal of states encodes a sentence within the space (this is a corollary combining Steps 1-3). Fig. 5 shows sentences sharing overlapping traversals. State transition chains encode the sentence templates and state-word emissions fill in the content. When the transition chains of two sentences overlap, the two sentences tend to be syntactically similar.
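Concretely, the traversal for a sentence is obtained by Viterbi decoding under the CRF potentials of Eq. 1. Below is a minimal illustrative sketch using the same emit/trans tensors as in the earlier sketch; names are assumptions, not the authors' code.

```python
import torch

def viterbi_path(emit, trans):
    """Most likely state sequence under the CRF potentials of Eq. 1
    (emit: (T, N), trans: (N, N) log-potentials). The returned list of state
    indices is the traversal over the latent network for this sentence."""
    T, _ = emit.shape
    score = emit[0]
    backpointers = []
    for t in range(1, T):
        cand = score.unsqueeze(1) + trans        # cand[i, j]: prev i -> cur j
        backpointers.append(cand.argmax(dim=0))  # best previous state for each j
        score = cand.max(dim=0).values + emit[t]
    path = [int(score.argmax())]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    return list(reversed(path))
```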
Conclusions
In this work, we discover a latent state network intrinsically within the representation space of contextualized representations. Our analysis starts from the hypothesis that there exists a latent network of states that summarizes the representation space topology. We verify that such states indeed exist, and that they do not necessarily align with existing well-defined annotations (§5.1). Then we reveal the mechanism of contextualization by showing how words within states "receive" meaning from their context and become interpretable after contextualization (§5.2). We further study how state transitions mark the backbone of the representation space and encode meaningful phrase constructions (§6.1). Finally, combining the state-word and state-state topology, we arrive at the latent mechanism of how sentences are encoded as a traversal over the state network (§6.2).
Due to space and time limits, our results have the following major limitations: (1). our analysis is more about the topological structure of the space (i.e., how nodes are connected) and less about the distance structure (i.e., how far one node is from another), while the latter is also an important geometric property. (2). the techniques used in our analysis are more about local topology (e.g., neighboring words around a state) and less about global topology (e.g., how states and words form hierarchies). (3). there is evidence [2] that autoregressive-style transformers (GPT2 or the decoder of T5) have different topologies than bidirectional-style transformers (BERT and the encoder of T5), and we only explore BERT. We leave the exploration of these directions to future work.
Finally, we note that although the literature on Bertology is rich, our understanding of model behavior is still far from complete, especially for properties discovered with unsupervised methods. We hope this work deepens the understanding of language models, encourages unsupervised analytics, and inspires new modeling techniques based on the topology of the representations. Our discovered topology is not specific to the chosen random seeds; it is an intrinsic property of the BERT representation space.
Hardware Generally, one Nvidia 2080Ti with 11GB of memory suffices for a single run. In our hyperparameter search, we usually run multiple instances (say, with 8 2080Tis) at the same time. For groups trying to reproduce our results, we would say two 2080Tis would suffice, yet the more the better.
A.2 Function Words Used in This Work
Table 6 shows the function words used in this work. This is the list of stopwords defined by NLTK 5, and we reuse them here. Words not in this list are viewed as content words.
A.3 Visualization Procedure
This section describes how we produce the t-SNE visualizations in Figures 1 and 3 (main paper). We use the sklearn implementation 6 of t-SNE. For Fig. 1, the word embeddings are obtained by sampling 8,000 contextualized embeddings from the full set of word occurrences in the 20News dev set. Then we put the sampled word embeddings and the 2,000 states into the t-SNE function. The perplexity is set to 30. An important operation we use, for better separating the states from each other, is to manually set the distances between states to be large; otherwise the states would be concentrated in a sub-region rather than spread over the words. Fig. 3 is produced similarly, except we do not use word embeddings as background, and we change the perplexity to 5. All our hyperparameter decisions are for the purpose of clear visualization, which includes reducing overlapping, overcrowding, and other issues. We further note that t-SNE itself is a visualization of local topology. No single visualization method can reveal the full global structure of high-dimensional data, and any projection to a 2-D plane inevitably induces information loss. We leave the investigation of better visualization methods to future work.
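A minimal sketch of the joint projection described above, using sklearn's TSNE; the manual rescaling of state-state distances mentioned in the text is omitted here, array shapes follow the description, and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

def joint_projection(word_embs, state_embs, perplexity=30, seed=0):
    """word_embs: (8000, 768) sampled contextualized embeddings,
    state_embs: (2000, 768) induced state embeddings.
    Runs one joint 2-D t-SNE over both, as in Fig. 1, and splits the result."""
    X = np.concatenate([word_embs, state_embs], axis=0)
    Y = TSNE(n_components=2, perplexity=perplexity, random_state=seed).fit_transform(X)
    return Y[: len(word_embs)], Y[len(word_embs):]
```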
B State-Word Topology, More Examples
See Tables 7 and 8 for more state-word examples.
5 https://www.nltk.org/book/ch02.html
6 https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
Figure 3: A: State transitions as the backbone of the representation space. Tail states are more spread out, and transitions to tail states mostly come from head states (yellow edges). Head states are more inter-connected (blue edges). Blue edges denote more frequent (stronger) transitions and yellow edges denote less frequent (weaker) transitions. B: transitions between top states; function states (white circles) usually serve as hubs of transitions (many white circles at the center). C: example transitions; see Table 5 for their interpretations.
Figure 4: State transition distribution. Again, function words appear more at the top transitions in BERTZERO. After contextualization (BERTLAST), they become more spread within the distribution.
Table 1: Decomposing the representation space by aligning to existing linguistic annotations. #s = number of states, %c = percentage of covered word occurrences. POS and ENT can be inferred from the word directly while DEP, CCG and BCN require more context. Generally, there are more aligned states inferred from BERTLAST, but they cover fewer word occurrences than BERTZERO. See Table 2 for interpretations of non-aligned states.
Columns: POS (#s, %c) | ENT (#s, %c) | DEP (#s, %c) | CCG (#s, %c) | BCN (#s, %c) | Not Aligned (#s, %c)
POSITIONAL: 3, 0.44 | 0, 0 | 3, 0.44 | 1, 0.01 | 1, 0.01 | 1997, 99.55
RANDEMB: 392, 53.02 | 21, 1.06 | 366, 52.36 | 241, 45.20 | 334, 49.16 | 1159, 43.45
BERTZERO: 673, 65.31 | 37, 2.11 | 468, 53.84 | 450, 51.84 | 275, 48.89 | 1253, 27.69
BERTLAST: 804, 53.23 | 51, 0.82 | 740, 42.62 | 583, 44.03 | 628, 44.52 | 1069, 40.52
Table 4: Mechanism of contextualization. Try comparing the red tokens and see their neighboring words before/after contextualization. Tokens previously opaque (as their corresponding latent states are not meaningful) gain linguistic clarity (as their corresponding states encode linguistic constructions). Columns: Before Contextualization (BERTZERO), After Contextualization (BERTLAST).
Table 5: Example state transitions. Function words (like "to") serve as hubs that attract content words of similar meanings into the same latent state. Subscript numbers denote occurrences. Columns: Transition, Example states from BERTLAST, Explanation. First row group: Function word + content word; first transition: 886-989.
[13] John Hewitt and Christopher D. Manning. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1419. URL https://aclanthology.org/N19-1419.
[14] John Hewitt, Kawin Ethayarajh, Percy Liang, and Christopher Manning. Conditional probing: measuring usable information beyond a baseline. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1626-1639, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.122. URL https://aclanthology.org/2021.emnlp-main.122.
[15] Taeuk Kim, Jihun Choi, Daniel Edmiston, and Sang goo Lee. Are pre-trained language models aware of phrases? Simple but strong baselines for grammar induction. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1xPR3NtPB.
[16] Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 957-966, Lille, France, 07-09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/kusnerb15.html.
i me my myself we our ours ourselves you your yours yourself yourselves he him his himself she her hers herself it its itself they them their theirs themselves what which who whom this that these those
am is are was were be been being have has had having do does did doing
a an the and but if or because as until while
of at by for with about against between into through during before after above below to from up down in out on off over under
again further then once here there when where why how
all any both each few more most other some such no nor not only own same so than too very
s t can will just don should now
Table 6: Function words used in the experiments.
1 https://huggingface.co/bert-base-uncased
2 https://spacy.io/
3 https://groups.inf.ed.ac.uk/ccg/ccgbank.html
4 https://neurox.qcri.org/projects/bert-concept-net.html
Appendix
A Experimental Details
A.1 Training Details
β Parameter To get meaningful convergence of the latent space without posterior collapse, the most sensitive parameter is the β parameter in Eq. 5 for entropy regularization. We need to set β in the correct range: a large β forces the posterior to collapse to a uniform prior, while a small β encourages the posterior to collapse to a Dirac distribution. To search for a meaningful β, we start with different orders of magnitude: 0.1, 0.01, 0.001, 0.0001, 0.00001. We find that 0.001 and 0.01 give the best performance. We then search within this range using linear interpolation: 0.001, 0.0025, 0.005, 0.0075, 0.01, and find that 0.005 gives the best performance. So we set β to 0.005.
Robustness to Random Seeds The results reported in Section 5 are searched over three random seeds, and we choose the run that has the largest number of aligned tags to the union of the five types of existing annotation (i.e., POS, ENT, DEP, CCG, BCN). The differences between random seeds are not large. For example, in Table 1 (main paper) the number of not aligned states with BERTLAST is 1069, and other runs produce about 1059-1069. Generally, our results are robust to random seeds. We further note that the example word clusters, the mechanism of contextualization, and the state transitions are also consistent w.r.t. random seeds. We can observe similar word clusters and state transitions in Fig. 1 and 3 and Table 4.
Waleed Ammar, Chris Dyer, and Noah A. Smith. Conditional random field autoencoders for unsupervised structured prediction. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 3311-3319, Cambridge, MA, USA, 2014. MIT Press.
Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. Isotropy in the contextual embedding space: Clusters and manifolds. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=xYGNO86OWDH.
Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, and Liping Jing. Probing BERT in hyperbolic spaces. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=17VnwXYZyhH.
Ethan A. Chi, John Hewitt, and Christopher D. Manning. Finding universal grammatical relations in multilingual BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5564-5577, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.493. URL https://aclanthology.org/2020.acl-main.493.
Fahim Dalvi, Abdul Rafae Khan, Firoj Alam, Nadir Durrani, Jia Xu, and Hassan Sajjad. Discovering latent concepts learned in BERT. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=POTMtpYI1xH.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2013. URL https://openreview.net/forum?id=33X9fd2-9FyZd.
Yao Fu, Chuanqi Tan, Bin Bi, Mosha Chen, Yansong Feng, and Alexander M. Rush. Latent template induction with gumbel-crf. In NeurIPS, 2020.
Yao Fu, John P. Cunningham, and Mirella Lapata. Scaling structured inference with randomization. In International Conference on Machine Learning, 2022.
Prakhar Gupta and Martin Jaggi. Obtaining better static word embeddings using contextual embedding models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5241-5253, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.408. URL https://aclanthology.org/2021.acl-long.408.
Rowan Hall Maudslay, Josef Valvoda, Tiago Pimentel, Adina Williams, and Ryan Cotterell. A tale of a probe and a parser. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7389-7395, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.659. URL https://aclanthology.org/2020.acl-main.659.
John Hewitt and Percy Liang. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1275. URL https://aclanthology.org/D19-1275.
Xiang Lisa Li and Alexander Rush. Posterior control of blackbox generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2731-2743, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.243. URL https://aclanthology.org/2020.acl-main.243.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1112. URL https://aclanthology.org/N19-1112.
Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, Robert Baldock, Olivier Bousquet, Sylvain Gelly, and Daniel Keysers. What do neural networks learn when trained with random labels? In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 19693-19704.
Jonathan Mamou, Hang Le, Miguel A. Del Rio, Cory Stephenson, Hanlin Tang, Yoon Kim, and SueYeon Chung. Emergence of separable manifolds in deep language representations. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020.
Arthur Mensch and Mathieu Blondel. Differentiable dynamic programming for structured prediction and attention. In International Conference on Machine Learning, pages 3462-3471. PMLR, 2018.
Julian Michael, Jan A. Botha, and Ian Tenney. Asking without telling: Exploring latent ontologies in contextual representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6792-6812, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.552. URL https://aclanthology.org/2020.emnlp-main.552.
Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viegas, Andy Coenen, Adam Pearce, and Been Kim. Visualizing and measuring the geometry of BERT. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/159c1ffe5b61b41b3c4d8f4c2150f6c4-Paper.pdf.
Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866, 2020. doi: 10.1162/tacl_a_00349. URL https://aclanthology.org/2020.tacl-1.54.
Charles Sutton, Andrew McCallum, et al. An introduction to conditional random fields. Foundations and Trends® in Machine Learning, 4(4):267-373, 2012.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1452. URL https://aclanthology.org/P19-1452.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=SJzSgnRcKX.
Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.383. URL https://aclanthology.org/2020.acl-main.383.
president 378 ; clinton 293 ; ##resh 268 ; myers 165 ; attorney 84 ; general 79 ; morris 78 ; smith 76 ; paul 75 ; bush 74 ; manager 64 ; hitler 56 ; ##ey 52 ; ##i 48 ; ##man 45 964 cut 132 ; plug 121 ; break 73 ; thread 73 ; cable 63 ; hole 59 ; holes 54 ; chip 49 ; fix 48. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, Communications of the ACM. 643Understanding deep learning (still) requires rethinking generalization. clutch 48 ; stick 46 ; connector 42 ; blow 42 ; box 41 ; screw 40 ; pin 40 ; hit 40 1756 see 1721 ; look 858 ; seen 618 ; read 302 ; saw 274 ; display 205 ; image 199 ; looks 197 ; looking 196 ; looked 188 ; screen 177 ; watch 161 ; view 153 ; monitor 149 ; images 132 585 day 779 ; sun 686 ; ##n 556 ; today 310 ; night 276 ; week 269 ; days 264 ; city 161. morning 145Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3): 107-115, 2021. president 378 ; clinton 293 ; ##resh 268 ; myers 165 ; attorney 84 ; general 79 ; morris 78 ; smith 76 ; paul 75 ; bush 74 ; manager 64 ; hitler 56 ; ##ey 52 ; ##i 48 ; ##man 45 964 cut 132 ; plug 121 ; break 73 ; thread 73 ; cable 63 ; hole 59 ; holes 54 ; chip 49 ; fix 48 ; clutch 48 ; stick 46 ; connector 42 ; blow 42 ; box 41 ; screw 40 ; pin 40 ; hit 40 1756 see 1721 ; look 858 ; seen 618 ; read 302 ; saw 274 ; display 205 ; image 199 ; looks 197 ; looking 196 ; looked 188 ; screen 177 ; watch 161 ; view 153 ; monitor 149 ; images 132 585 day 779 ; sun 686 ; ##n 556 ; today 310 ; night 276 ; week 269 ; days 264 ; city 161 ; morning 145 ;
312 ; history 290 ; laws 165 ; study 125 ; policy 105 ; court 103 ; scientific 98 ; physics 97 ; constitution 93 1514 life 663 ; live 363 ; security 192 ; exist 188 ; peace 180 ; dead 175 ; living 164 ; existence 157 ; body 157 ; lives 153 ; exists 137 ; privacy 128 ; death 126 ; die 121 ; safety 112 1208 atheist 165 ; font 144 ; bio 119 ; bb 111 ; homosexual 109 ; ##group 99 ; sy 94 ; mormon 75 ; fed 72 ; manual 56 ; posting 52 ; spec 52 ; ##s 50 ; ##eri 49. research. 73138 ; ma 132 ; ##si 129 ; sera 121 ; des 113 ; fi 75 ; isa 70 ; il 58 ; ny 58 ; po 56 ; la 53 ; tar 48 ; lee 47 ; ti 47 371 drug 212 ; drugs 177 ; food 145 ; health 130 ; medical 121 ; disease 117 ; diet 115 ; cancer 113 ; aids 98 ; homosexuality 96 ; sex 83 ; homosexual 82 ; medicine 82 ; hiv 78 ; treatment 77 1683 money 479. auto 42 ; pointer 41 ; handgun 37 837 another 974 ; last 774 ; next 581 ; else 578 ; second 498 ; others 472 ; third 124 ; first 82 ; final 69 ; later 60 ; rest 49 ; future 47. 2nd 44 ; latter 41 ; previous 40 ; elsewhere 33 1291 lines 1848 ; read 701 ; writes 502 ; line 376 ; book 319 ; books 244 ; write 203 ; written 177 ; text 171 ; reading 157 ; wrote 144 ; article 107 ; quote 92 ; writing 86 ; paper 84 1656 agree 182 ; solution 165 ; advice 128 ; opinion 110. interface 104 ; response 88 ; suggestions 80 ; recommend 75 ; alternative 75 ; discussion 71 ; offer 71 ; argument 70 ; application 69 1874 apple 405 ; chip 386 ; disk 373 ; fbi 289 ; encryption 197 ; ##eg 171 ; hardware 166 ; nsa 154 ; ram 154 ; algorithm 134 ; tape 129 ; nasa 119 ; chips 111 ; ibm 100 ; floppy 98 1966 stanford 269 ; washington 177 ; russian 156 ; cleveland 141 ; berkeley 137 ; california 131 ; chicago 105 ; ##co 96 ; turkey 95 ; york 83 ; boston 74. soviet 71 ; russia 71 603 file 682 ; list 526 ; article 501 ; card 424. bill 237. board 196 ; book 191 ; box 180 ; package 140 ; page 139 ; directory 119 ; section 118 ; group 114 ; library 90 ; files 83 1401 done 644 ; didn 41 ; perform 35. performed 32 ; accomplish 25. accomplished 16 ; could 14. ##d 11 ; conduct 10 ; happen 10 ; say 10. committed 9. finish 9. completed 9. conducted 8 460 clip 186 ; ##op 175 ; com 162 ; news 162 ; posts 109 ; works 106. 
… sunday 125 ; ##en 117 ; year 105 ; ##net 96 ; n 92 ; ##on 89 ; weeks 87 ; month 73
66: power 433 ; control 399 ; state 347 ; virginia 202 ; mode 182 ; process 162 ; effect 139 ; period 118 ; action 117 ; authority 91 ; function 87 ; position 84 ; ##v 82 ; force 78
1240: first 1443 ; always 602 ; ever 559 ; never 361 ; ago 321 ; often 319 ; sometimes 284 ; usually 203 ; early 195 ; last 192 ; every 175 ; later 171 ; soon 155 ; recently 146 ; past 145
865: faith 383 ; religion 377 ; believe 211 ; atheist 205 ; ##ism 164 ; islam 159 ; ##sm 158 ; religious 145 ; morality 137 ; belief 126 ; font 115 ; language 114 ; truth 92 ; logic 90
467: ##u 1203 ; point 784 ; happen 234 ; place 201 ; happened 183 ; colorado 141 ; happens 139 ; points 132 ; wait 126 ; ground 94 ; site 94 ; center 86 ; position 78 ; situation 78 ; 1993 76
203: ca 273 ; pub 177 ; ##u 143 ; dod 143 ; au 141 ; mit 138 ; ma 132 ; ##si 129 ; sera 121 ; des 113 ; fi 75 ; isa 70 ; il 58 ; ny 58 ; po 56 ; la 53 ; tar 48 ; lee 47 ; ti 47
371: drug 212 ; drugs 177 ; food 145 ; health 130 ; medical 121 ; disease 117 ; diet 115 ; cancer 113 ; aids 98 ; homosexuality 96 ; sex 83 ; homosexual 82 ; medicine 82 ; hiv 78 ; treatment 77
1683: money 479 ; cost 307 ; pay 274 ; issue 212 ; problem 186 ; matter 175 ; worth 175 ; care 153 ; costs 146 ; tax 108 ; expensive 102 ; responsible 96 ; risk 96 ; spend 95 ; insurance 94
1856: whole 379 ; entire 196 ; full 58 ; every 49 ; everything 42 ; together 37 ; everyone 25 ; ##u 17 ; rest 14 ; ##up 13 ; ##ed 13 ; away 10 ; always 10 ; top 10 ; open 10 ; ##s 9
1584: university 1064 ; government 886 ; law 769 ; science 482 ; ##u 412 ; research 312 ; history 290 ; laws 165 ; study 125 ; policy 105 ; court 103 ; scientific 98 ; physics 97 ; constitution 93
1514: life 663 ; live 363 ; security 192 ; exist 188 ; peace 180 ; dead 175 ; living 164 ; existence 157 ; body 157 ; lives 153 ; exists 137 ; privacy 128 ; death 126 ; die 121 ; safety 112
1208: atheist 165 ; font 144 ; bio 119 ; bb 111 ; homosexual 109 ; ##group 99 ; sy 94 ; mormon 75 ; fed 72 ; manual 56 ; posting 52 ; spec 52 ; ##s 50 ; ##eri 49 ; auto 42 ; pointer 41 ; handgun 37
837: another 974 ; last 774 ; next 581 ; else 578 ; second 498 ; others 472 ; third 124 ; first 82 ; final 69 ; later 60 ; rest 49 ; future 47 ; 2nd 44 ; latter 41 ; previous 40 ; elsewhere 33
1291: lines 1848 ; read 701 ; writes 502 ; line 376 ; book 319 ; books 244 ; write 203 ; written 177 ; text 171 ; reading 157 ; wrote 144 ; article 107 ; quote 92 ; writing 86 ; paper 84
1656: agree 182 ; solution 165 ; advice 128 ; opinion 110 ; interface 104 ; response 88 ; suggestions 80 ; recommend 75 ; alternative 75 ; discussion 71 ; offer 71 ; argument 70 ; application 69
1874: apple 405 ; chip 386 ; disk 373 ; fbi 289 ; encryption 197 ; ##eg 171 ; hardware 166 ; nsa 154 ; ram 154 ; algorithm 134 ; tape 129 ; nasa 119 ; chips 111 ; ibm 100 ; floppy 98
1966: stanford 269 ; washington 177 ; russian 156 ; cleveland 141 ; berkeley 137 ; california 131 ; chicago 105 ; ##co 96 ; turkey 95 ; york 83 ; boston 74 ; bosnia 73 ; soviet 71 ; russia 71
603: file 682 ; list 526 ; article 501 ; card 424 ; bill 237 ; board 196 ; book 191 ; box 180 ; package 140 ; page 139 ; directory 119 ; section 118 ; group 114 ; library 90 ; files 83
1401: done 644 ; didn 41 ; perform 35 ; performed 32 ; accomplish 25 ; accomplished 16 ; could 14 ; ##d 11 ; conduct 10 ; happen 10 ; say 10 ; committed 9 ; finish 9 ; completed 9 ; conducted 8
460: clip 186 ; ##op 175 ; com 162 ; news 162 ; posts 109 ; works 106 ; micro 68 ; sim 66 ; share 66 ; ##yp 58 ; net 58 ; wire 54 ; ##os 48 ; power 43 ; es 40 ; flop 39 ; mac 39 ; tool 39

Table 8: More state-word examples, continued.

Transition — Bigram : Occurrence
1843-990: is-that 68 ; fact-that 51 ; so-that 50 ; think-that 46 ; note-that 41 ; say-that 39 ; sure-that 38 ; believe-that 35 ; out-that 33 ; know-that 32 ; seems-that 28 ; mean-that 26
1010-1016: instead-of 40 ; amount-of 33 ; lot-of 26 ; form-of 23 ; lack-of 19 ; institute-of 16 ; case-of 15 ; capable-of 14 ; amounts-of 13 ; out-of 12 ; years-of 12 ; department-of 11 ; terms-of 11
960-458: up-to 56 ; down-to 29 ; access-to 25 ; according-to 24 ; due-to 22 ; go-to 17 ; response-to 14 ; subject-to 13 ; related-to 13 ; reference-to 13 ; as-to 12 ; lead-to 12 ; reply-to 12
441-698: have-to 139 ; going-to 116 ; seem-to 114 ; seems-to 68 ; supposed-to 40 ; need-to 30 ; had-to 26 ; used-to 22 ; want-to 20 ; seemed-to 15 ; tend-to 14 ; appears-to 13 ; likely-to 13 ; appear-to 12
1712-698: trying-to 67 ; try-to 46 ; able-to 43 ; like-to 41 ; hard-to 32 ; seem-to 22 ; seems-to 22 ; want-to 21 ; tend-to 21 ; willing-to 18 ; tried-to 16 ; enough-to 14 ; attempt-to 13 ; continue-to 12
1814-1666: about-it 91 ; of-it 71 ; with-it 42 ; to-it 29 ; for-it 27 ; do-it 26 ; on-it 24 ; have-it 12 ; understand-it 11 ; doing-it 9 ; know-it 9 ; see-it 8 ; call-it 8 ; believe-it 8 ; ##ing-it 7 ; fix-it 6
1295-523: problem-with 32 ; deal-with 32 ; do-with 28 ; up-with 16 ; problems-with 15 ; came-with 13 ; comes-with 13 ; along-with 12 ; work-with 12 ; contact-with 11 ; wrong-with 10 ; agree-with 10 ; disagree-with 9
628-150: based-on 65 ; depending-on 23 ; is-on 13 ; ##s-on 11 ; down-on 9 ; effect-on 9 ; are-on 8 ; working-on 8 ; effects-on 7 ; activities-on 7 ; depend-on 7 ; be-on 6 ; run-on 6 ; depends-on 6
477-1414: have-to 117 ; going-to 45 ; is-to 37 ; had-to 32 ; decided-to 12 ; need-to 11 ; has-to 11 ; having-to 9 ; required-to 9 ; willing-to 8 ; how-to 8 ; ,-to 7 ; reason-to 7 ; forced-to 7
477-1277: is-to 74 ; have-to 43 ; had-to 20 ; used-to 17 ; required-to 14 ; going-to 14 ; ,-to 13 ; need-to 13 ; as-to 12 ; order-to 11 ; needed-to 11 ; ##s-to 10 ; be-to 10 ; decided-to 10
145-461: believe-that 70 ; claim-that 24 ; evidence-that 18 ; assume-that 17 ; hope-that 15 ; belief-that 11 ; sure-that 9 ; prove-that 9 ; assuming-that 8 ; argue-that 8 ; likely-that 7 ; claims-that 7
278-217: know-of 22 ; end-of 16 ; out-of 14 ; think-of 13 ; ##s-of 10 ; accuracy-of 8 ; top-of 7 ; friend-of 6 ; copy-of 6 ; heard-of 6 ; one-of 4 ; middle-of 4 ; version-of 4 ; beginning-of 4 ; aware-of 4
1820-276: come-out 30 ; came-out 17 ; coming-out 14 ; put-out 12 ; get-out 11 ; find-out 10 ; check-out 9 ; turns-out 7 ; found-out 7 ; turn-out 7 ; turned-out 7 ; comes-out 7 ; go-out 6 ; ##ed-out 6
1142-461: is-that 17 ; fact-that 15 ; understand-that 12 ; see-that 11 ; realize-that 11 ; noted-that 8 ; says-that 8
1010-1998: lot-of 34 ; set-of 26 ; bunch-of 24 ; lots-of 22 ; series-of 13 ; number-of 10 ; thousands-of 10 ; hundreds-of 10 ; plenty-of 10 ; full-of 7 ; pack-of 7 ; list-of 6 ; think-of 5
1125-843: of-a 124 ; is-a 86 ; for-a 84 ; to-a 50 ; s-a 16 ; be-a 14 ; ,-a 11 ; as-a 7 ; was-a 5 ; on-a 5 ; with-a 4 ; am-a 3 ; about-a 3 ; in-a 2 ; into-a 2 ; were-a 2 ; its-a 1 ; surrounding-a 1
476-1654: written-by 13 ; ##d-by 11 ; caused-by 8 ; ##ed-by 8 ; produced-by 6 ; followed-by 6 ; defined-by 4 ; committed-by 4 ; hit-by 4 ; supported-by 4 ; led-by 4 ; explained-by 4 ; run-by 4
1812-837: the-other 86 ; the-next 77 ; the-last 62 ; the-second 48 ; the-first 14 ; the-latter 10 ; the-third 9 ; the-latest 7 ; the-rest 6 ; the-previous 6 ; the-final 5 ; the-fourth 3 ; the-nearest 3
…: i-believe 128 ; i-hope 66 ; i-suspect 28 ; i-assume 24 ; i-doubt 18 ; i-suppose 11 ; i-guess 11 ; i-expect 8
1820-1856: pick-up 14 ; come-up 12 ; came-up 11 ; stand-up 11 ; set-up 11 ; bring-up 8 ; show-up 8 ; comes-up 7 ; screwed-up 7 ; give-up 6 ; wake-up 6 ; speak-up 5 ; look-up 5 ; back-up 5
1417-979: more-than 163 ; better-than 33 ; less-than 13 ; faster-than 12 ; greater-than 11 ; longer-than 8 ; ##er-than 7 ; larger-than 6 ; worse-than 6 ; higher-than 6 ; slower-than 6 ; easier-than 4
111-111: of-the 75 ; to-the 34 ; for-the 23 ; on-the 14 ; with-the 12 ; about-the 7 ; part-of 7 ; in-the 5 ; into-the 5 ; like-the 4 ; out-of 4 ; at-the 4 ; by-the 3 ; '-s 3 ; as-the 2
1579-654: talking-about 45 ; talk-about 25 ; concerned-about 14 ; worried-about 9 ; know-about 8 ; stories-about 7 ; worry-about 7 ; talked-about 6 ; rumours-about 5 ; news-about 5 ; feel-about 5 ; care-about 4

Table 9: State transition examples, with function words.

Transition — Bigram : Occurrence
371-371: health-care 14 ; side-effects 8 ; im-##mun 4 ; infectious-diseases 4 ; yeast-infections 4 ; ##thic-medicine 3 ; treat-cancer 3 ; health-insurance 3 ; barbecue-##d 3 ; hiv-infection 3 ; yeast-syndrome 3
1214-1214: orbit-##er 14 ; astro-##physics 7 ; lunar-orbit 7 ; space-shuttle 7 ; earth-orbit 5 ; pioneer-venus 5 ; space-station 5 ; space-##lab 4 ; lunar-colony 4 ; orbit-around 3 ; space-tug 3 ; space-##flight 3
716-1556: mail-##ing 15 ; fra-##ering 12 ; ##mina-##tion 9 ; bash-##ing 7 ; ##dal-##izing 6 ; ##ras-##ing 5 ; ##band-##ing 4 ; ##ress-##ing 4 ; cab-##ling 4 ; adapt-##er 4 ; cluster-##ing 4 ; sha-##ding 4
931-931: gamma-ray 17 ; lead-acid 9 ; wild-corn 4 ; mile-long 3 ; smoke-##less 3 ; drip-##py 2 ; diamond-stealth 2 ; cold-fusion 2 ; 3d-wire 2 ; acid-batteries 2 ; schneider-stealth 2 ; quantum-black 2
1488-1488: law-enforcement 17 ; national-security 5 ; cold-blooded 4 ; health-care 4 ; human-rights 4 ; im-##moral 4 ; prophet-##ic 4 ; social-science 3 ; ethnic-##al 3 ; turkish-historical 3
1246-1246: bit-##net 35 ; tel-##net 12 ; use-##net 7 ; phone-number 7 ; dial-##og 6 ; ##p-site 5 ; phone-calls 5 ; bit-block 5 ; net-##com 4 ; bat-##f 4 ; ##t-##net 4 ; phone-call 4 ; arc-##net 3
1556-1556: abu-##sing 5 ; ##dal-##izing 4 ; obey-##ing 4 ; robb-##ing 3 ; ##ov-##ing 3 ; dial-##ing 3 ; contend-##ing 3 ; ##upt-##ing 3 ; rough-##ing 3 ; contact-##ing 3 ; bash-##ing 3 ; favor-##ing 2
202-202: western-reserve 21 ; case-western 20 ; ohio-state 19 ; united-states 10 ; penn-state 5 ; african-american 5 ; north-american 5 ; middle-eastern 5 ; polytechnic-state 4 ; north-carolina 4
1912-1912: world-series 9 ; home-plate 7 ; division-winner 4 ; runs-scored 4 ; batting-average 4 ; game-winner 3 ; sports-##channel 3 ; plate-umpire 3 ; baseball-players 3 ; league-baseball 3
1461-1461: ##l-bus 5 ; bit-color 5 ; 3d-graphics 4 ; ##p-posting 3 ; computer-graphics 3 ; wire-##frame 3 ; bit-graphics 2 ; ##eg-file 2 ; access-encryption 2 ; ##frame-graphics 2 ; file-format 2
123-123: health-care 10 ; high-school 6 ; es-##crow 6 ; key-es 5 ; high-power 4 ; local-bus 4 ; low-level 4 ; high-speed 3 ; minor-league 2 ; health-service 2 ; regular-season 2 ; mother-##board 2
1702-1702: mile-##age 8 ; engine-compartment 5 ; semi-auto 5 ; manual-transmission 5 ; drive-power 4 ; door-car 3 ; passenger-cars 3 ; sports-car 3 ; shaft-drive 3 ; mini-##van 3 ; speed-manual 3
1874-1874: floppy-disk 11 ; jp-##eg 11 ; encryption-algorithm 8 ; ##per-chip 7 ; ##mb-ram 7 ; ##ga-card 6 ; encryption-devices 5 ; silicon-graphics 4 ; disk-drive 4 ; floppy-drive 4
1208-1064: atheist-##s 43 ; homosexual-##s 12 ; fed-##s 9 ; libertarian-##s 8 ; ##eri-##s 7 ; ##tile-##s 7 ; azerbaijani-##s 6 ; ##tar-##s 6 ; mormon-##s 5 ; sniper-##s 5 ; physicist-##s 4
1710-1710: power-supply 5 ; atomic-energy 4 ; water-ice 4 ; power-cord 4 ; ##com-telecom 3 ; light-pollution 3 ; light-bulb 3 ; radio-station 3 ; radio-##us 3 ; air-conditioning 3 ; light-##wave 2
1080-1080: public-access 19 ; via-anonymous 5 ; private-sector 5 ; available-via 4 ; general-public 4 ; community-outreach 4 ; public-domain 3 ; personal-freedom 3 ; private-property 3 ; private-activities 3
254-1572: jimmy-carter 9 ; george-bush 9 ; bill-clinton 8 ; bryan-murray 4 ; joe-carter 4 ; henry-spencer 4
1571-1571: ms-windows 24 ; windows-nt 12 ; ibm-pc 10 ; ms-##dos 7 ; unix-machine 6 ; microsoft-windows 5 ; windows-applications 4 ; run-windows 3 ; apple-monitor 3 ; mac-##s 3 ; desktop-machine 3
66-66: ##ian-1919 3 ; energy-signature 2 ; charlotte-##sville 2 ; environment-variables 2 ; duty-cycle 2 ; second-period 2 ; spin-state 2 ; power-consumption 2 ; inter-##mission 2 ; power-play 2
1683-1683: worth-##while 4 ; nominal-fee 4 ; get-paid 3 ; risk-factors 3 ; scholarship-fund 2 ; cost-$ 2 ; tax-dollars 2 ; beneficial-item 2 ; bank-account 2 ; take-responsibility 2
1579-1579: m-sorry 5 ; news-reports 4 ; heard-anything 4 ; ran-##ting 3 ; short-story 3 ; news-reporters 3 ; press-conference 3 ; heard-something 3 ; tv-coverage 2 ; horror-stories 2 ; heard-horror 2
1656-1656: urbana-champaign 3 ; peace-talks 3 ; acceptable-solutions 2 ; marriage-partner 2 ; intercontinental-meetings 2 ; interested-parties 2 ; conference-calls 2 ; handle-conference 2 ; cooperative-behaviour 2

Table 10: State transition examples, without function words.
| [] |
[
"Supervised Visual Attention for Simultaneous Multimodal Machine Translation",
"Supervised Visual Attention for Simultaneous Multimodal Machine Translation"
] | [
"Veneta Haralampieva veneta.l.haralampieva@gmail.com \nDepartment of Computing\nImperial College London\nUK\n",
"Ozan Caglayan \nDepartment of Computing\nImperial College London\nUK\n",
"Lucia Specia l.specia@ic.ac.uk \nDepartment of Computing\nImperial College London\nUK\n"
] | [
"Department of Computing\nImperial College London\nUK",
"Department of Computing\nImperial College London\nUK",
"Department of Computing\nImperial College London\nUK"
] | [
"Journal of Artificial Intelligence Research"
] | There has been a surge in research in multimodal machine translation (MMT), where additional modalities such as images are used to improve translation quality of textual systems. A particular use for such multimodal systems is the task of simultaneous machine translation, where visual context has been shown to complement the partial information provided by the source sentence, especially in the early phases of translation. In this paper, we propose the first Transformer-based simultaneous MMT architecture, which has not been previously explored in simultaneous translation. Additionally, we extend this model with an auxiliary supervision signal that guides the visual attention mechanism using labelled phrase-region alignments. We perform comprehensive experiments on three language directions and conduct thorough quantitative and qualitative analyses using both automatic metrics and manual inspection. Our results show that (i) supervised visual attention consistently improves the translation quality of the simultaneous MMT models, and (ii) fine-tuning the MMT with supervision loss enabled leads to better performance than training the MMT from scratch. Compared to the state-of-the-art, our proposed model achieves improvements of up to 2.3 BLEU and 3.5 METEOR points. | 10.1613/jair.1.13546 | [
"https://arxiv.org/pdf/2201.09324v2.pdf"
] | 246,240,389 | 2201.09324 | ea200ede92291cca65babb91d6d1c96fc2d11a7a |
Supervised Visual Attention for Simultaneous Multimodal Machine Translation
2022
Veneta Haralampieva veneta.l.haralampieva@gmail.com
Department of Computing
Imperial College London
UK
Ozan Caglayan
Department of Computing
Imperial College London
UK
Lucia Specia l.specia@ic.ac.uk
Department of Computing
Imperial College London
UK
Supervised Visual Attention for Simultaneous Multimodal Machine Translation
Journal of Artificial Intelligence Research
74 (2022) Submitted 12/2021; published 07/2022
There has been a surge in research in multimodal machine translation (MMT), where additional modalities such as images are used to improve translation quality of textual systems. A particular use for such multimodal systems is the task of simultaneous machine translation, where visual context has been shown to complement the partial information provided by the source sentence, especially in the early phases of translation. In this paper, we propose the first Transformer-based simultaneous MMT architecture, which has not been previously explored in simultaneous translation. Additionally, we extend this model with an auxiliary supervision signal that guides the visual attention mechanism using labelled phrase-region alignments. We perform comprehensive experiments on three language directions and conduct thorough quantitative and qualitative analyses using both automatic metrics and manual inspection. Our results show that (i) supervised visual attention consistently improves the translation quality of the simultaneous MMT models, and (ii) fine-tuning the MMT with supervision loss enabled leads to better performance than training the MMT from scratch. Compared to the state-of-the-art, our proposed model achieves improvements of up to 2.3 BLEU and 3.5 METEOR points.
Introduction
Simultaneous machine translation (MT) aims at providing a computational framework that reproduces how human interpreters perform simultaneous interpretation. In this task, the duty of the interpreter is to translate speech in near real-time, by constantly maintaining a balance between the time needed to accumulate sufficient context and the translation latency the listeners experience in return. This streaming property is what differentiates simultaneous MT from the conventional MT approaches, which process complete source sentences. Traditional work in simultaneous translation has dealt with this streaming property by relying on syntactic or heuristic constraints (Bub et al., 1997; Ryu et al., 2006; Bangalore et al., 2012) to determine the amount of waiting prior to committing a partial translation. Similar approaches have also been explored using state-of-the-art neural MT (NMT) architectures (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), such as rule-based deterministic policies implemented at (greedy) decoding time (Cho & Esipova, 2016) or the wait-k policy which sequentially switches between reading a new word and committing a translation. Adaptive policies, which attempt to learn when to read or commit depending on the context, have also been explored mostly through reinforcement learning-based techniques (Gu et al., 2017; Alinejad et al., 2018; Ive et al., 2021).
In this work, we focus on the translation quality aspect of the simultaneous translation framework and explore whether input contexts other than the linguistic signal can improve the performance of simultaneous MT models. Although such additional information may naturally occur in human simultaneous interpretation through the a priori knowledge of factors such as the topic, speaker or even the venue of the speech, in a computational model any additional context should be explicitly and carefully integrated to explore different inductive biases during model learning. Therefore, to mimic the availability of multiple input modalities for simultaneous MT, we follow the multimodal machine translation (MMT) framework where the objective is to translate image descriptions into different languages, by integrating the images themselves as additional context (Sulubacak et al., 2020). Intuitively, the expectation is that as long as there is a correlation between the language and the visual semantics, this way of grounding language can help anticipate future context for better translation quality, and even reduce the latency for adaptive policies.
Although relatively few in number, several works have explored a similar framework to analyse the benefits of visual grounding for simultaneous MMT: Caglayan et al. (2020a) and Imankulova et al. (2020) approached the problem by integrating visual features into the encoder and/or the decoder of recurrent MMT architectures and coupling them with the deterministic wait-k policy and rule-based decoding algorithms (Cho & Esipova, 2016). Ive et al. (2021) introduced reinforcement learning (RL)-based adaptive policies for recurrent MMT architectures.
In this paper, we first propose a Transformer-based (Vaswani et al., 2017) simultaneous MMT model where regional features extracted from a state-of-the-art object detector (Anderson et al., 2018) are fused with the source language representations using an attention-based cross-modal interaction (CMI) layer. To implement simultaneous multimodal translation, we adopt the prefix training approach (Niehues et al., 2018; Arivazhagan et al., 2020) and evaluate its performance along with the deterministic wait-k policy. Next, we propose a novel approach to simultaneous MMT to improve the grounding - and therefore the anticipation - ability of our model. The proposed method involves supervising the alignment between the source language representations and the image regions, through the use of labelled phrase-region correspondences. We devise two multi-task learning settings to enable the visual supervision: (i) fine-tuning a pre-trained MMT for a fixed number of epochs, or (ii) training the MMT from scratch. We perform extensive experiments on the three different language pairs (English→{Czech, French, German}) of the Multi30k dataset and conduct thorough quantitative and qualitative analyses to understand the potential impacts of attention supervision. Our results show that (i) prefix training achieves substantially better scores than the wait-k approach, (ii) supervised visual attention consistently improves the translation quality of the MMT models, and (iii) fine-tuning the MMT offers better performance than training the MMT from scratch. Finally, our supervised models achieve up to 2.3 BLEU (Papineni et al., 2002) and 3.5 METEOR (Denkowski & Lavie, 2014) points of improvement over the current state-of-the-art on the Multi30k dataset.
The remainder of the paper is organised as follows: we provide a detailed account of the literature in § 2 and describe our methodology and resources in § 3. In § 4, we present the quantitative and qualitative results across different language pairs and test sets of Multi30k. Finally, we conclude our work in § 5 with a discussion on possible directions for future research.

A baseball player in a black shirt just tagged a player in a white shirt.
Un joueur de baseball en maillot noir vient de toucher un joueur en maillot blanc.
Une joueuse de baseball en maillot noir vient de toucher une joueuse en maillot blanc.

Figure 1: Multimodal machine translation (MMT) can help disambiguate the source sentence when translating into gender-marked languages. The example is taken from the Multi30k dataset.
Related Work
The primary goal of Multimodal Machine Translation (MMT) is to improve the quality of machine translation (MT) by incorporating information from additional sources such as images or videos (Sulubacak et al., 2020). Of these two approaches, image-guided MMT ( Figure 1) is substantially more researched than the video-guided MMT, simply due to the availability of more training resources. Since this paper heavily relies on image-guided MMT, we begin this section with a detailed literature overview on MMT first, and then continue with simultaneous MT.
Multimodal Machine Translation
Attentive models. Inspired by the success of textual attention (Bahdanau et al., 2015) in NMT models, a considerable amount of research has focused on coupling visual attention and textual attention altogether to perform image-guided MMT. These works generally encode the images using state-of-the-art CNN models pre-trained on large computer vision datasets such as ImageNet (Deng et al., 2009). This way, an image is represented as a set of convolutional feature maps of size K × K × C, i.e. each output channel C_i encodes activations across a uniformly partitioned K × K grid. Using these features, Caglayan et al. (2016) explore shared and separate attention mechanisms to compute a cross-modal latent representation in the multi-modal space. In addition to the language context computed by the textual attention, the translation decoder is now also conditioned on the multi-modal representation. As a follow-up, Libovickỳ and Helcl (2017) proposed various extensions, of which the hierarchical attention variant gained popularity. This method weighs the relevance of textual and visual contexts using a third attention mechanism, instead of simpler fusion strategies such as addition or concatenation. Libovickỳ et al. (2018) later adapted these extensions to Transformer-based (Vaswani et al., 2017) NMT models to enable image-guided Transformer MMTs. Although the majority of the approaches in attentive MMT rely on decoder-side visual attention, encoder-side grounding was also explored, but only for recurrent architectures: Delbrouck and Dupont (2017) propose adding a visual attention layer in the encoder, where its states act as the query and the visual features are taken as keys and values.
Simple grounding. Finally, another line of work investigates the use of pooled visual representations (∈ R C ) for image-guided MMT, instead of the dense convolutional feature maps. These approaches usually initialise the hidden state of the encoder and/or the decoder in the model, using a projection of the visual feature vector (Calixto et al., 2016;Caglayan et al., 2017). Multi-task learning is also explored (Elliott & Kádár, 2017;Zhou et al., 2018) using these simple vectorial representations, where the model is tasked with the reconstruction of visual features using the encoder's output.
Supervision of Attention
Previous work has explored whether supervising the encoder-decoder attention can improve the alignment and translation quality of text-only NMT systems. Liu et al. (2016) and Mi et al. (2016) investigate supervising the attention of a recurrent NMT model by adding an alignment loss which is jointly optimised alongside the negative log-likelihood objective. Garg et al. (2019) later extend this to Transformer-based text-only NMT models, showing that supervising a single attention head from the cross-attention layers of the decoder can outperform existing alignment models without a significant degradation in translation quality.
Recently, Nishihara et al. (2020), Specia et al. (2021) investigate a multimodal coattention mechanism (Lu et al., 2016) in the encoder, which first uses an affinity matrix to capture the relationship between the source tokens and image features and then computes the visual and textual attention weights. Specia et al. (2021) explore the impact of supervising the visual attention weights using ground-truth alignments obtained from the Flickr30k Entities dataset (Plummer et al., 2015) and show that their multimodal systems are better at disambiguating words, compared to their text-only baseline system. For Nishihara et al. (2020), the improvements are marginal unless cross-language alignments are also supervised in addition to the visual attention.
Simultaneous Machine Translation
Early works in simultaneous neural MT (SiMT) explore using a pre-trained full-sentence NMT model at test time, by relying on specific decoding approaches designed for simultaneous interpretation. Of these, Cho and Esipova (2016) propose a greedy decoding algorithm with two different waiting criteria based on simple heuristics which determine whether the model should READ a source token or WRITE a target one. Rather than relying on handcrafted heuristics, several works investigate using Reinforcement Learning (RL) to learn an adaptive policy which maximises the translation quality and minimises the delay/latency. Satija and Pineau (2016) train an agent using Deep Q-Learning while Gu et al. (2017) rely on the policy gradient algorithm (Williams, 1992). Alinejad et al. (2018) later extend the latter by adding a PREDICT action, which enriches the available context by utilising predictions of future source tokens.
A drawback of the approaches discussed so far is the discrepancy between training and test times of the underlying NMT model: the model, being trained on full-sentence source contexts, is later exposed to partial contexts at test time. Dalvi et al. (2018) propose mitigating this by fine-tuning the model using either chunked segments or prefix pairs. Next, Ma et al. (2019) explore a fixed wait-k policy which can be used at both training and test times, with the model initially reading k source tokens before proceeding to alternate between writing a single target token and reading a source one. Later, Arivazhagan et al. (2019) extend this to an adaptive policy using an advanced attention mechanism in a recurrent NMT model and a weighted latency loss, while Ma et al. (2020) further develop this for the multi-head attention used in Transformers. More recently, Arivazhagan et al. (2020) investigate whether training a model on prefix pairs and re-translating previously emitted target words at decoding time improves translation quality or not. Their results show that augmenting the training data with prefix pairs can outperform the wait-k trained systems, with re-translation further increasing the quality.
An alternative method for obtaining an adaptive policy is to train a policy model using Supervised Learning and ground-truth action sequences which Zheng et al. (2019) propose generating using a pre-trained NMT Transformer model, with the ideal action being a WRITE when the ground truth target word is ranked within the top k next word candidates, or READ otherwise. Arthur et al. (2021) rely on a statistical word alignment system to obtain the ground-truth actions and use them to jointly train the translation and policy models.
Simultaneous MMT. Previous work in simultaneous MMT mostly relies on rule-based strategies (Cho & Esipova, 2016; Ma et al., 2019) applied to recurrent MMT models on the Multi30k dataset. Of these approaches, Imankulova et al. (2020) explored the wait-k policy using a recurrent MMT equipped with a hierarchical multimodal attention. Specifically, for each k, they first conduct a textual pre-training (i.e. without the visual features) until convergence, and then fine-tune the checkpoint with visual features enabled. Caglayan et al. (2020a) conducted a study where they compare object classification and object detection features for two different multimodal architectures: decoder-level visual attention and encoder-level visual attention. As for the simultaneous translation part, they explore both wait-k and rule-based decoding (Cho & Esipova, 2016) methods. Finally, Ive et al. (2021) attempted to learn a multimodal policy through reinforcement learning, for deciding the READ/WRITE actions during simultaneous translation.
Our work is most similar to that of Caglayan et al. (2020a), as we explore the deterministic wait-k policy along with the object detection features extracted from salient regions. We also add another policy to our inventory and, more importantly, we investigate the impact of supervising the visual attention using human-labelled annotations.
Method
This section presents the approaches explored in this work. We begin with a description of the text-only Transformers NMT (Vaswani et al., 2017) and how we extended it to accommodate simultaneous translation. Next, we describe our baseline multimodal Transformer architecture that incorporates visual attention. Finally, we introduce our approach to supervise the visual attention in simultaneous MMT. In what follows, the source sentence and target sentence tokens are denoted with x = [x_1, x_2, ..., x_N] and y = [y_1, y_2, ..., y_M], respectively.
Transformer-based NMT
Transformers NMT (Vaswani et al., 2017) are the state-of-the-art sequence-to-sequence architectures equipped with deep encoder and decoder stacks that rely heavily on feedforward layers in contrast to recurrent NMTs (Sutskever et al., 2014;Bahdanau et al., 2015). Combined with the use of self-attention layers, these changes allow for (i) better gradient dynamics during training and thus deeper architectures, and (ii) different inductive biases than the left-to-right processing nature of recurrent NMTs. The overall diagram of a Transformers-based NMT is given in Figure 2. In the following, we briefly explain the encoder and the decoder blocks of Transformers.
Encoder
The transformer encoder f () encodes the source sentence x into a latent representation h = f (x) using a series of operations. The basic block in the encoder transforms the input using a self-attention layer followed by a feed-forward layer. Layer normalisation (Ba et al., 2016) and residual connections (He et al., 2016) are incorporated to achieve stability and improved training dynamics. This basic block (shown in the left part of Figure 2) is further replicated B times in a vertical fashion so that each layer consumes the output of the previous one. The inputs to the first layer are the embeddings of the source sentence tokens x = [x 1 , x 2 , . . . , x N ], additively shifted by positional encodings. The latter is crucial to embed the positional information of words into the embeddings so that the representation is not invariant to the word order. The output of the last encoder layer is passed through a final layer normalisation.
Attention
A key aspect of the Transformer architecture is the extensive use of several types of attention layers. In its simplest form, the attention mechanism (Bahdanau et al., 2015) allows computing the weighted sum of a set of vectors V ∈ R N ×D where the weights are obtained based on dot product scores between keys K ∈ R N ×D and queries Q ∈ R N ×D . The scores are then normalised with the softmax operator to produce a valid probability distribution and finally multiplied with V as follows:
A(Q, K, V) = softmax(QK^T / √D) V    (1)
To further enrich the learned representations, multiple attention representations are computed in parallel, by supplying different projections of queries, keys and values as inputs to each attention function A^(i) (Equation 2). With n attention heads, the final multi-head attention output is computed by projecting the concatenation of all attention head outputs as follows:

A^(i) = A(W_q^(i) Q, W_k^(i) K, W_v^(i) V)    (2)
MHA(Q, K, V) = W_o Concat([A^(1), ..., A^(n)])    (3)

Figure 2: The architecture of a Transformers NMT (Vaswani et al., 2017): each block is repeated B times to create a deep model. This is the more stable "pre-norm" variant where layer normalisation is applied prior to each sub-layer. The dashed lines denote the residual connections.
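To make Equations 1-3 concrete, the following is a minimal PyTorch sketch of scaled dot-product attention and its multi-head wrapper. It is an illustrative implementation rather than the authors' code: fusing the per-head projections W_q^(i), W_k^(i), W_v^(i) into single linear layers is an assumed (though common) design choice, and masking and dropout are omitted.

```python
import math
import torch
import torch.nn as nn

def attention(q, k, v):
    # Eq. 1: softmax(Q K^T / sqrt(D)) V
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    return scores.softmax(dim=-1) @ v

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_head = n_heads, d_model // n_heads
        # Per-head projections of Eq. 2, fused into one matrix per input
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)  # W_o of Eq. 3

    def split(self, x):
        # (batch, seq, d_model) -> (batch, heads, seq, d_head)
        b, t, _ = x.shape
        return x.view(b, t, self.h, self.d_head).transpose(1, 2)

    def forward(self, q, k, v):
        heads = attention(self.split(self.w_q(q)),
                          self.split(self.w_k(k)),
                          self.split(self.w_v(v)))
        b, _, t, _ = heads.shape
        concat = heads.transpose(1, 2).contiguous().view(b, t, self.h * self.d_head)
        return self.w_o(concat)  # Eq. 3
```

Self-attention corresponds to calling the module with q = k = v set to the previous layer's output, whereas the decoder's cross-attention sets k = v to the encoder output f(x).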
In the Transformer-based NMT, each encoder block includes a multi-head self-attention layer to capture the relationship between different source positions. Additionally, the decoder uses (i) masked multi-head self-attention to model the causal relationship between each position and the ones preceding it and (ii) cross-attention to integrate source semantics crucial to perform the translation. In fact, the only difference between self-attention and cross-attention is that the former sets Q = K = V to the output of the previous layer whereas the latter uses the output of the encoder to set K = V = f(x) (Figure 2).
Decoder
Once the source sentence is encoded into the latent representation h, the decoder sequentially generates the target sentence in an auto-regressive way. This means that the probability of the next target token P(y_t | y_{<t}, h) is conditioned on the history y_{<t} of target words predicted so far, in addition to the source sentence semantics encoded in h. In this formulation, the whole decoder can be thought of as a building block that implements the aforementioned probability term P().
In terms of computation, a decoder block is very similar to an encoder one except that (i) the self-attention is now masked to enforce that the decoder is causal i.e. it does not mistakenly look at future positions and (ii) a secondary multi-head attention known as cross-attention, integrates information from the encoder through the latent sentence representations h ( § 3.1.2). Finally, we train the model in an end-to-end way and minimise the negative log-likelihood of the sentence pairs in the training set D:
L_MT = − Σ_i^{|D|} log P(y^(i) | x^(i))    where    P(y|x) = Π_{t=1}^{|y|} P(y_t | y_{<t}, h)    (4)
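A minimal sketch of how Equation 4 is typically computed with teacher forcing is shown below; the `model` interface and the omission of padding masks and label smoothing (which the actual training setup uses) are simplifying assumptions.

```python
import torch.nn.functional as F

def mt_loss(model, src, tgt_inp, tgt_out):
    """Negative log-likelihood of Eq. 4 under teacher forcing.

    tgt_inp = [BOS, y_1, ..., y_{M-1}] and tgt_out = [y_1, ..., y_M],
    i.e. the decoder predicts each y_t from the gold history y_{<t}.
    """
    logits = model(src, tgt_inp)                      # (batch, M, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tgt_out.reshape(-1))       # averaged -log P(y_t | y_{<t}, h)
```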
Our Baseline
We use the Base Transformer (Vaswani et al., 2017) configuration in all our experiments, where both the encoder and decoder have 6 layers (B = 6 in Figure 2), each attention layer has 8 heads, the model dimension is 512 and the feed forward layer size is 2048. Additionally, we share the parameters of the target and output language embedding matrix (Press & Wolf, 2017). We should note that our implementation applies the "pre-norm" formulation where the layer normalisation is placed prior to each sub-layer rather than after, to increase stability.
During training, we optimise the models using Adam (Kingma & Ba, 2014) and decay the learning rate with the noam scheduler (Vaswani et al., 2017). The initial learning rate, β1 and β2 are 0.2, 0.9 and 0.98, respectively. The learning rate is warmed up for 4,000 steps. We use a batch size of 32, apply label smoothing with ε = 0.1 (Szegedy et al., 2016) and clip the gradients so that their norm is 1 (Pascanu et al., 2014). We train each system 3 times with different random seeds for a maximum of 100 epochs, with early stopping based on the validation METEOR (Denkowski & Lavie, 2014) score, which is the official metric used in all shared tasks in MMT (Barrault et al., 2018). The best checkpoint with respect to validation METEOR is selected to decode test set translations using the greedy search algorithm.

Figure 3: wait-k decoding: Initially, the decoder waits k words to be read before committing the first translation. Afterwards, the algorithm switches back and forth between read and write actions until the end-of-sentence marker is generated by the decoder.
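For reference, the sketch below shows one common parameterisation of the noam schedule with the values quoted above (4,000 warmup steps, model dimension 512). Exactly how the initial learning rate of 0.2 interacts with the schedule is toolkit-dependent; treating it as a plain multiplicative scale here is an assumption.

```python
def noam_lr(step: int, d_model: int = 512, warmup: int = 4000, scale: float = 0.2) -> float:
    """Learning rate that grows linearly for `warmup` steps, then decays as 1/sqrt(step)."""
    step = max(step, 1)
    return scale * (d_model ** -0.5) * min(step ** -0.5, step * warmup ** -1.5)
```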
Simultaneous NMT
This section describes the different training strategies that are used in this work to realise simultaneous machine translation. Following the notation from Ma et al. (2019), we first define a function g(t) that returns the number of source tokens read so far by the encoder, at a particular decoding timestep t. By definition, 0 ≤ g(t) ≤ |x| for all values of t. We can formulate this by modifying Equation 4 so that the source representation now depends on g(t) as follows:
P(y|x) = Π_{t=1}^{|y|} P(y_t | y_{<t}, f(x_{≤g(t)}))    (5)
This generalisation allows us to define the conventional full sentence (consecutive) NMT as well, by using a constant function g(t) = |x| for all t and all sentence pairs.
Wait-k Decoding
The wait-k policy simply amounts to selecting a particular function g_k(t) that depends on a pre-determined k value. This value determines the number of source tokens to be initially read by the encoder, before beginning the translation decoding process. Afterwards, the algorithm switches back and forth between read and write actions until the end-of-sentence marker (EOS) is predicted by the decoder. Mathematically speaking, the definition of g_k(t) is as follows:

g_k(t) = min(k + t − 1, |x|)    (6)
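As a minimal illustration (not the authors' implementation), the snippet below expands Equation 6 into the explicit READ/WRITE schedule it induces; indices are 1-based as in the text.

```python
def g_k(t: int, k: int, src_len: int) -> int:
    # Eq. 6: number of source tokens available when writing target token t
    return min(k + t - 1, src_len)

def wait_k_actions(k: int, src_len: int, tgt_len: int):
    """Expand the wait-k policy into an explicit READ/WRITE action sequence."""
    actions, read = [], 0
    for t in range(1, tgt_len + 1):
        while read < g_k(t, k, src_len):
            actions.append("READ")
            read += 1
        actions.append("WRITE")
    return actions

# e.g. wait_k_actions(k=2, src_len=4, tgt_len=4)
# -> ['READ', 'READ', 'WRITE', 'READ', 'WRITE', 'READ', 'WRITE', 'WRITE']
```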
The wait-k decoding algorithm is a fixed-delay policy because the translator always lags behind the speaker by k tokens. According to Ma et al. (2019), this policy is inspired by human interpreters who intuitively wait for some context to accumulate prior to starting the translation. For our multimodal purposes, having a fixed policy rather than an adaptive one is useful as fixed latency allows focused systematic analysis of quality improvements. A depiction of the algorithm is given in Figure 3.

Traditionally, the encoder in the Transformer NMT is bi-directional due to the self-attention mechanism which allows each position to attend to every other one. In the simultaneous translation framework, however, this can be challenging as it implies that every time a new source token is read, the encoder's representation h would need to be recomputed, leading to a quadratic runtime cost with respect to the source sentence length. Instead, we implement a uni-directional encoder following Elbayad et al. (2020), by employing a masked self-attention in the encoder. This prevents future positions that have not been read so far from being attended to, similar to the self-attention layers in the decoder block.

Algorithm 1: Prefix training (Niehues et al., 2018; Arivazhagan et al., 2020)
  inputs : the current mini-batch B
  output : the modified mini-batch B̂
  B̂ ← []
  for (x, y) in B do
      c ∼ Uniform(0, 1)                       // Apply truncation with p = 0.5
      if c < 0.5 then
          l_x ∼ Uniform(1, |x|)               // Randomly sample a source prefix length
          l_y ← Round(l_x · |y| / |x|)        // Keep the same proportion for the target
          l_y ← max(2, l_y)                   // Always include the BOS token
          x̂ ← x_{≤ l_x} ;  ŷ ← y_{≤ l_y}       // Truncate both sides to their prefixes
      else
          x̂ ← x ;  ŷ ← y                      // No truncation
      end
      B̂.append((x̂, ŷ))
  end
Simultaneous-aware Training
A simple way to perform simultaneous NMT is by first training a consecutive NMT model (i.e. g(t) = |x|) and then translating the test sets following the wait-k policy, where for each k we would use a different choice of g k (t) at inference time. We refer to this approach as wait-k decoding. However, this creates a discrepancy between training and testing time, as the model will be exposed to partial source sentences at test time, although it was always trained on full sentences. For this reason, Ma et al. (2019) also proposed to use the same g k (t) function at both training and inference time, an approach that we refer to as wait-k training. We follow this approach for our initial set of experiments.
Furthermore, we adopt a second simultaneous-aware training recipe called prefix training (Niehues et al., 2018; Arivazhagan et al., 2020), which employs a simple data processing strategy to mitigate the aforementioned exposure bias between training and testing time. Specifically, for each sentence pair (x, y) in the mini-batch, we flip a fair coin to decide whether we will consider it for truncation or not. If it is considered, we first randomly sample a prefix length l_x for the source sentence and truncate the corresponding target sentence with the same proportion as the source side (Algorithm 1). Unlike wait-k training, which requires training a separate model for each value of k, we train a single prefix model and decode the final checkpoint using wait-k decoding across different values of k.
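The per-batch truncation of Algorithm 1 can be sketched in a few lines of Python; the snippet below assumes sentences are plain lists of tokens and that each target sequence already starts with the BOS token, which is why the target prefix length is clamped to at least 2.

```python
import random

def prefix_truncate(batch):
    """Algorithm 1: randomly replace half of the (x, y) pairs with proportional prefixes."""
    out = []
    for x, y in batch:
        if random.random() < 0.5:
            l_x = random.randint(1, len(x))                # source prefix length ~ Uniform(1, |x|)
            l_y = max(2, round(l_x * len(y) / len(x)))     # same proportion on the target, keep BOS
            out.append((x[:l_x], y[:l_y]))
        else:
            out.append((x, y))                             # untouched pair
    return out
```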
Multimodal NMT
In this section, we describe our take on integrating visual information into our consecutive and simultaneous NMT models. For that, we reformulate the encoder-attention variant of Caglayan et al. (2020a) for Transformer-based models. In what follows, we first describe the visual feature extraction pipeline and then present our multimodal NMT architecture in detail.
Visual Features
To represent visual semantics, we explore regional object-detection features extracted using the popular bottom up top down (BUTD) approach (Anderson et al., 2018). BUTD combines the Faster R-CNN object detector (Ren et al., 2015) with a ResNet-101 backend (He et al., 2016) to perform feature extraction. We use the provided model 1 which is pre-trained on the large-scale Visual Genome dataset (Krishna et al., 2017). Having 1,600 object labels in its inventory, the BUTD detector is quite rich and for that reason has been used in most previous work in cross-modal pre-training (Lu et al., 2019;Tan & Bansal, 2019).
For our purposes, we use the default settings of the detector and extract 36 regional features per each image. In other words, alongside the language representation x ∈ R N ×D of a given source sentence, the associated image is represented with a set of regional features v ∈ R 36×2048 . The BUTD extractor is not further trained/fine-tuned during model training.
Multimodality
Our multimodal MT approach reformulates the encoder-attention variant of Caglayan et al. (2020a) for Transformer-based architectures. The main motivation for implementing an encoder-side cross-modal interaction is the nature of the simultaneous MT problem: since the source context is partial and grows gradually, the additional visual information can complement the missing context and allow the model to anticipate target words in a grounded way. Moreover, incorporating cross-modality at encoder side is also crucial for our second set of experiments regarding the supervision of the visual attention module, which aims at better language grounding ( § 3.3.3). Figure 4 summarises the overall architecture of our MMT model, where the upper stream implements the visual representation module. This module (i) extracts the set of regional feature vectors v ∈ R 36×2048 , (ii) projects them to the D-dimensional space yielding v ∈ R 36×D and finally (iii) employs a multi-head 2 attention layer for cross-modal interaction (CMI). The key, value, query configuration of the cross-modal attention determines the nature of the interaction i.e. we set K = V = v whereas the query Q receives the text encoder's output. This way, we get a cross-modal representation out of the attention layer which computes the weighted sum of regional feature projections (V) based on the similarity between language representations (Q) and regional features (K). Since this output is a linear combination of visual vectors only, we augment it with the text encoder's outputs using element-wise addition, similar to a residual connection. A final layer normalisation is applied on top to obtain the multimodal encoding h = f (x, I). The rest of the architecture is the same as Figure 2 in the sense that h is passed to the cross-attention layer of the decoder.
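A minimal PyTorch sketch of the cross-modal interaction described above is given below. It uses the library's built-in single-head attention and illustrative module/parameter names, so it should be read as a sketch rather than the authors' exact implementation. Returning the attention weights alongside the fused representation anticipates the supervision described in § 3.3.3.

```python
import torch.nn as nn

class CrossModalInteraction(nn.Module):
    """Encoder-side visual grounding: text states attend over projected image regions."""
    def __init__(self, d_model=512, feat_dim=2048):
        super().__init__()
        self.v_proj = nn.Linear(feat_dim, d_model)  # project 2048-d regional features to D
        self.attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h_text, v_regions):
        # h_text:    (batch, N, D)    text encoder output, used as queries
        # v_regions: (batch, 36, 2048) BUTD features, used as keys and values
        v = self.v_proj(v_regions)
        grounded, attn_weights = self.attn(query=h_text, key=v, value=v)
        h = self.norm(h_text + grounded)   # residual fusion + final layer normalisation
        return h, attn_weights             # attn_weights: (batch, N, 36), reusable for supervision
```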
Supervising the Visual Attention
The cross-modal relationship between language and visual representations in attentive MMT models is generally learned in an unsupervised way, as the MMT model is only trained through a sequence-to-sequence cross-entropy objective. This is also the approach that we followed for our MMT model in § 3.3.2.
In this section, we are interested in whether explicit supervision of the visual attention could introduce an inductive bias so that the anticipation ability (and therefore the translation quality) of our simultaneous MMT models is further improved. We devise a multi-task learning scheme where, in addition to the cross-entropy objective used for the MT task (Equation 4), we employ an auxiliary loss which tries to bring together the cross-modal attention distribution M̂ computed by the CMI module and the ground-truth alignment M. An illustration of the predicted attention and its ground-truth distribution is provided in Figure 5.
Ground-truth alignments. In order to supervise the visual attention during training, we require a labelled dataset of phrase-region annotations for the Multi30k dataset. Since the Multi30k dataset is derived from the Flickr30k image captioning dataset, we rely on the Flickr30k Entities dataset (Plummer et al., 2015) which provides human-annotated bounding boxes for noun phrases in the English image captions. For instance, in Figure 5 we can see that each of the phrases "a girl", "a jean dress" and "a raised balance beam" is mapped to a bounding box 3 that denotes the object referred to.

Figure 5: Cross-modal supervision of attention: The attention distribution from the model is pulled towards the ground-truth alignment matrix obtained from the Flickr30k Entities dataset, using KL-divergence. Greyed out token positions do not contribute to the loss.

2. Throughout this work, we set the number of heads to 1 for the cross-modal interaction module.
3. The number of bounding boxes for a given noun phrase is not limited to one.
Since the ground-truth regions are different from the regions predicted by the pre-trained BUTD detector ( § 3.3.1), we re-extract regional features using the same pre-trained BUTD detector but from the set of regions provided by the Entities dataset. On average, we end up with 4.3 bounding box annotations per image, which is substantially lower compared to the fixed number of 36 regions that we extracted from the pre-trained BUTD detector. Finally, we should note that the Entities dataset does not provide any annotations for the test2017 and testCOCO test splits of Multi30k dataset.
Training. For a given source sentence x with N words, first an alignment matrix M ∈ R^{R×N} is formed where R denotes the total number of ground-truth bounding box annotations for the overall sentence. We set M[i, j] = 1 if the word x_j is associated with the region i. If k > 1 region associations exist for the word x_j, the probability mass assigned to each region becomes 1/k. The columns of the matrix that refer to words without any bounding box associations are not taken into account when computing the final alignment loss.
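As an illustration of how the alignment matrix M and the mask over annotated words might be constructed, a small sketch is given below; the data structures and names are assumptions rather than the authors' code.

```python
import torch

def build_alignment_matrix(n_regions, n_words, word_to_regions):
    """word_to_regions: dict mapping word index j -> list of region indices aligned to x_j."""
    M = torch.zeros(n_regions, n_words)
    annotated = torch.zeros(n_words, dtype=torch.bool)
    for j, regions in word_to_regions.items():
        if regions:
            M[regions, j] = 1.0 / len(regions)   # spread the probability mass over the k regions
            annotated[j] = True
    return M, annotated   # `annotated` masks out words without any bounding box
```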
Alignment loss. The Kullback-Leibler (KL) divergence is a measure of how much a given probability distribution Q differs from a reference distribution P. In other words, minimising the KL-divergence between the predicted cross-modal attention distribution (Q) and the ground-truth alignments from the Entities dataset (P) allows the model to generate a visual attention distribution that is closer to the human-labeled annotations. The final multi-task objective combines the MT loss and the alignment loss with coefficients α = β = 1:

L = α L_MT + β D_KL(P = M || Q = M̂)    (7)

Table 1: Multi30k dataset statistics: "Words" denote the total number of words in a split whereas "Len" is the average number of words per sentence in that split.
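A sketch of Equation 7 is shown below, assuming the per-word attention weights returned by the cross-modal interaction sketch earlier (an (N, R) distribution over regions for each source word) and the alignment matrix and mask from the previous snippet; as in the text, only annotated words contribute to the loss.

```python
import torch

def alignment_loss(attn_weights, M, annotated, eps=1e-8):
    # attn_weights: (N, R) predicted distribution over regions for each source word
    # M:            (R, N) ground-truth alignment matrix; annotated: (N,) boolean mask
    P = M.t()[annotated]          # (N_annotated, R) reference distributions
    Q = attn_weights[annotated]   # (N_annotated, R) predicted distributions
    kl = (P * (torch.log(P + eps) - torch.log(Q + eps))).sum(dim=-1)
    return kl.mean()

def multitask_loss(mt_loss_value, attn_weights, M, annotated, alpha=1.0, beta=1.0):
    # Eq. 7 with alpha = beta = 1
    return alpha * mt_loss_value + beta * alignment_loss(attn_weights, M, annotated)
```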
Fine-tuned supervision. In addition to training the supervised MMT from scratch, we also experiment with taking the best checkpoints of the MMT models with unsupervised visual attention (α = 1, β = 0) and fine-tuning them with the alignment loss enabled (α = β = 1). For these particular variants, we disable the learning rate scheduling and lower the learning rate to 1e − 5. We track performance with respect to validation set METEOR and keep the best checkpoint for further decoding of the test sets.
Dataset
We use the Multi30k dataset 4 , which is a multi-lingual extension to the Flickr30k image captioning dataset (Young et al., 2014). Specifically, Multi30k provides a training set of 29,000 examples where each example corresponds to an image I and its English caption x from Flickr30k, extended with the German translation y of the English caption. In other words, both training and testing samples are triplets in the form of {I, x, y}. The dataset is later extended with French and Czech (Barrault et al., 2018) translations, making it the standard dataset for work on MMT, simultaneous MMT (Caglayan et al., 2020a; Imankulova et al., 2020), as well as other multilingual multimodal tasks. We experiment with all three language pairs, namely, English→German, English→Czech and English→French. The training, validation and test2016 sets are available for all language directions, whereas the test2017 and testCOCO sets are only available for German and French. The latter test set is specifically designed to contain at least one ambiguous word per image caption, where images are selected from the COCO dataset (Chen et al., 2015) instead of the in-domain Flickr30k dataset. Some common statistics of the dataset are provided in Table 1.
Preprocessing. We use Moses tools (Koehn et al., 2007) to lowercase, punctuation-normalise and tokenise the sentences with the hyphen splitting option enabled. We then create word vocabularies using the training subset of each language. The resulting English, French, German and Czech vocabularies have 9.8K, 11K, 18K and 22.2K words, respectively. We do not use sub-word segmentation approaches to avoid their potential side effects for simultaneous MT and to be able to analyse the grounding capability of the models more easily when it comes to cross-modal attention. We note that word-level MT performs as well as sub-word segmentation on this particular dataset, according to (Caglayan, 2019).

Table 2 (header residue only): test2016 | test2017 | testCOCO, each with columns Cons / Tr / Pref.
Results
Unimodal Simultaneous MT
Our first set of experiments focus on how different training regimes impact the translation quality for Transformer-based unimodal (i.e. text-only) simultaneous MT. For this, we compute METEOR scores of English-Czech, English-German and English-French MT systems, across three different test sets of Multi30k. Table 2 summarises the results obtained by performing wait-k decoding policy with k ∈ {1, 2, 3}, across the methods described in § 3.2. First, we observe that the prefix training method yields substantially higher METEOR scores than the other two approaches in general. wait-k training achieves the lowest quality across all language pairs and test sets, an observation in line with the previous findings of Caglayan et al. (2020a). Overall, the quantitative results corroborate our initial hypothesis regarding the exposure bias between training and test time ( § 3.2.2) and show that the prefix training is indeed a good choice in-between the full-sentence (consecutive) training and the aggressive wait-k training approaches.
For English-Czech in particular, we observe that when k = 1, wait-k training is considerably worse (⇓4.1 METEOR) than the other approaches. We hypothesise that the lower scores for the English-Czech pair in general are likely to be caused by the fact that, unlike English, German and French, Czech has no articles preceding a noun. This means that when generating the first word of the Czech translation, the probability distribution will always generate the same word, which is likely to be the most common target sentence prefix in the training set. This is illustrated in Table 3, where the Czech model translates "woman" as "Muž" (man), whereas the German and French models are able to read the source word "woman" just before translating it 5 . We believe that the incorporation of the visual modality would allow simultaneous MT models to handle such cases in a better way. Finally, although the lack of context affects all three methods at decoding time, the reason why wait-k training lags dramatically behind the other two for Czech is that wait-k training is ineffective in terms of leveraging the training resources, especially when the vocabulary is sparse. This makes Czech the most challenging pair among those explored, as its vocabulary has 22K unique tokens compared to 11K for French.

SRC: A woman holding a bowl of food in a kitchen.
CS: Muž v červeném triku drží své jídlo.
DE: Ein Frau hält eine Schüssel mit Essen in einer Küche.
FR: Un femme tenant un bol de nourriture dans une cuisine.

Table 3: Examples showing the effect of the lack of context for the decoder early in the sentence when using wait-1 decoding: greyed out words are those that have not been read or written yet. Red and blue indicate incorrect and correct translations, respectively.
Multimodal Simultaneous MT
Based on the more promising results of the prefix training regime in the unimodal MT experiments, from now on we focus on prefix training across all multimodal experiments. We begin by analysing the impact of incorporating visual information into our Transformer-based simultaneous NMT models, following the architecture described in § 3.3.2. Table 4 compares METEOR scores between the baseline unimodal NMT and the MMT models across all language pairs and test sets.
Overall, our encoder-based cross-modal interaction method improves METEOR scores by up to 1.5 points (En-Fr 2016) when compared to unimodal NMT models. This shows that the addition of the visual modality is beneficial for simultaneous MT, especially in low-latency scenarios. In particular, consistent quality improvements are observed on the testCOCO split, which (i) does not come from the Flickr30k distribution and (ii) contains more ambiguous words by construction.
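A simplified sketch of what an encoder-side cross-modal interaction could look like is given below. It is only meant to convey the idea of letting textual encoder states attend over regional visual features; the layer sizes, the additive fusion and the exact wiring are assumptions of this sketch rather than the architecture of § 3.3.2.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative encoder-side visual attention: textual encoder states attend
    over projected region features and the attended context is added back."""
    def __init__(self, d_model=512, d_vis=2048, n_heads=8):
        super().__init__()
        self.vis_proj = nn.Linear(d_vis, d_model)          # project region features
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, txt, vis):
        # txt: (B, T, d_model) encoder states; vis: (B, R, d_vis) region features
        vis = self.vis_proj(vis)
        ctx, attn_weights = self.attn(query=txt, key=vis, value=vis)
        # The attention weights can later be reused for supervision.
        return self.norm(txt + ctx), attn_weights
```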
For certain language pairs such as English→Czech and English→German, the additional modality brings little to no improvements, especially on the test2016 test set. However, we will observe different trends once we add supervision ( § 4.2.1). Finally, the English→French language pair benefits the most from the visual modality on average, in terms of METEOR scores. We find it interesting that as the lexical diversity of the target language (i.e. the sparsity of the training set vocabulary) increases, the improvements due to multi-modality decrease. This hints at the fact that the improvements can be more pronounced if the models are (pre-)trained on larger linguistic resources, an exploration that we leave to future work.

5. Although the same issue also affects the article choices for gender-marked languages such as French and German, the arbitrary choices are more likely to match the correct article in the target language, since the number of possible translations for articles is much smaller.
Supervised Attention
Our next set of experiments aims to understand whether guiding the alignment between the visual features and source words during training helps improve the translation and grounding ability of the MMT models. We carry out two different experiments: first, we train a multimodal simultaneous translation model from Scratch, using both the translation and attention losses. As an alternative, we also experiment with a Fine-Tuned version of the best (unsupervised visual attention) MMT checkpoints, trained with a reduced learning rate ( § 3.3.3). The METEOR scores for the supervision experiments are reported in Table 5. Overall, the scores highlight that (i) both variants of supervision are beneficial to the MMT models, which now show more consistent improvements over both the baselines and the MMT models with unsupervised visual attention, and (ii) the Fine-Tuned model produces consistently higher scores, with improvements of up to 1.9 METEOR points (En-Fr 2016) with respect to the unimodal baseline.
Interestingly, the Fine-Tuned variant always outperforms the Scratch models. One possible explanation is that training the MMT model from scratch with both objectives might bias the encoder towards producing representations that align well with image regions but are not necessarily optimal for the MT decoder. To gain better insight into this, we performed a small study where we repeated the supervision experiments after lowering the alignment loss weight from β = 1 to β ∈ {0.1, 0.5}. The results showed that although there are cases where tuning the coefficient yields improved scores for some language pairs, the scores are not as consistent as with β = 1 across different wait-k decodings and test sets. We leave further exploration of this to future work.
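To illustrate how such attention supervision can be plugged into training, the sketch below computes a simple alignment loss that pushes the cross-modal attention of every annotated source word towards its ground-truth region, and adds it to the translation loss with weight β. The negative log-likelihood form and the tensor shapes are assumptions of this sketch, not necessarily the exact loss used here.

```python
import torch

def supervised_attention_loss(attn_weights, gold_regions, mask, eps=1e-8):
    """Alignment loss sketch.
      attn_weights: (B, T, R) cross-modal attention distributions
      gold_regions: (B, T) long tensor, index of the gold region per source word
      mask:         (B, T) float tensor, 1 for positions with a phrase-region annotation
    """
    # Probability mass placed on the annotated gold region for each word.
    gold_prob = attn_weights.gather(-1, gold_regions.unsqueeze(-1)).squeeze(-1)
    nll = -torch.log(gold_prob + eps)
    return (nll * mask).sum() / mask.sum().clamp(min=1)

# total_loss = translation_loss + beta * supervised_attention_loss(attn, gold, mask)
```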
Prefix Translation Accuracy
Automatic evaluation metrics are often not very suitable for tasks such as image captioning or MMT, as they can fail to capture non-systematic and minor changes (Caglayan et al., 2020b). This issue is even more pronounced in simultaneous translation, where models are more likely to make mistakes early in the translation (due to limited source context), which can have a negative impact on the overall translation quality. In this section, we provide an analysis focused on the translation accuracy of source prefixes. Specifically, we count the number of matches between the first n words of each translation and its reference sentence, and divide this by the number of test set sentences. Figure 6 shows the unigram prefix accuracy of each wait-k decoded simultaneous MT model, across the three language pairs explored. The results reveal several things: (i) the supervision is globally beneficial, although its contribution diminishes as k increases, and (ii) although not reflected by the METEOR scores (Table 5), the incorporation of the visual modality (and further supervision) boosts the accuracies for English→Czech substantially (especially for wait-1). For the latter, this amounts to a 25% relative improvement of the Fine-Tuned supervision with respect to the baseline NMT. Similar trends were observed for bigram and trigram accuracy for English→Czech and English→French.

Figure 6: Unigram prefix accuracy of simultaneous wait-k variants: the numbers are obtained from the test2016 test set, averaged across three runs.
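A minimal implementation of this prefix accuracy is sketched below, under the assumption that a sentence counts as a hit only when its entire n-word hypothesis prefix matches the reference prefix.

```python
def prefix_accuracy(hypotheses, references, n=1):
    """Fraction of sentences whose first n hypothesis words all match the
    first n reference words. Inputs are lists of tokenised sentences."""
    assert len(hypotheses) == len(references)
    hits = 0
    for hyp, ref in zip(hypotheses, references):
        if hyp[:n] == ref[:n]:
            hits += 1
    return hits / len(references)

# unigram = prefix_accuracy(hyps, refs, n=1)
# bigram  = prefix_accuracy(hyps, refs, n=2)
```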
Comparison to State-of-the-art
Finally, we conclude our quantitative analyses by comparing the performance of our best MMT models to the state of the art in simultaneous multimodal MT. Table 6 provides a summary of the results in terms of METEOR and also BLEU (Papineni et al., 2002), since some previous work only reports the latter. Recall that Imankulova et al. (2020) and Caglayan et al. (2020a) (i) rely on recurrent models rather than Transformers, (ii) use different visual features and (iii) train their models using either consecutive or wait-k approaches rather than our best prefix training methodology. According to Table 6, the scores clearly show that, when combined with the Fine-Tuned attention supervision variant, our Transformer-based MMT models achieve state-of-the-art BLEU and METEOR scores across the board, with the exception of English→German with larger k. For this language direction, our wait-2 and wait-3 systems slightly lag behind Caglayan et al. (2020a) in terms of BLEU. This is probably due to our reliance on METEOR for early stopping and best checkpoint selection, rather than perplexity, which was used by Caglayan et al. (2020a).
Qualitative Insights
In this section, we conduct further analysis focusing on qualitative aspects of the trained MMT models in terms of visual understanding.
Grounding Ability
We begin our analyses by measuring the grounding ability of our MMT systems on the test2016 test set, for which we have ground-truth region-phrase annotations from the Flickr30k dataset ( § 3.4). More formally, we obtain the cross-modal attention weights from our models during the decoding of the test set sentences. Next, similar to Rohrbach et al. (2016) and Wang and Specia (2019), we compute the intersection over union (IoU) between the most-attended region and the ground-truth bounding box. If multiple ground-truth boxes exist for a given phrase, we take the ground-truth box that yields the maximum IoU score. Table 7 provides a summary of the results across the language pairs explored. Unsurprisingly, we notice that the MMT models with unsupervised visual attention obtain the lowest grounding accuracy across the board. The supervision from Scratch achieves the highest accuracy, surpassing the models with unsupervised visual attention by 18% on average. The Fine-Tuned supervision approach is also quite helpful in terms of grounding, with accuracies slightly lagging (around 2%) behind the Scratch systems. Another interesting observation is that the grounding ability does not seem to depend on the choice of target language, e.g. all Fine-Tuned MMT models achieve around 89% accuracy. This is to be expected, since during training, the source sentences and the provided regions never change across different language directions.
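The sketch below illustrates this grounding evaluation: for each annotated phrase, it computes the IoU between the box of the most-attended region and the best-matching ground-truth box, and averages the result over phrases. Whether the reported score is this mean IoU or a thresholded accuracy is an assumption of the sketch, and the box format is assumed to be (x1, y1, x2, y2).

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def grounding_score(attended_boxes, gold_boxes_per_phrase):
    """Average best-IoU between each most-attended region box and its gold boxes."""
    scores = [max(iou(pred, g) for g in golds)
              for pred, golds in zip(attended_boxes, gold_boxes_per_phrase)]
    return sum(scores) / len(scores)
```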
Performance on BUTD Regions
The previous experiment relied on regional features extracted from the ground-truth bounding boxes at training, inference and evaluation time. To understand whether the grounding ability generalises to features extracted from non-gold regions, we now switch to the 36 region proposals that we extracted from the pre-trained BUTD object detector ( § 3.3.1). Since the visual coordinate space may be noisy due to the fine-grained nature of the bounding box annotations, and given that BUTD also provides a wider range of object categories, we also measure the grounding ability in the linguistic space. Specifically, instead of the coordinates of the BUTD and ground-truth regions, we use the predicted object label ŷ (from the BUTD detector) and the ground-truth noun annotation y, respectively. We then compute exact match accuracy (Em) and cosine similarity 6 (Cos) between the lowercased versions of these words, using pre-trained GloVe (Pennington et al., 2014) word embeddings.

Table 8 reports all three metrics averaged across three runs of each model. We observe that all three metrics show the same trend across model types. Regardless of the variant type, the incorporation of attention supervision dramatically improves the ratio of exact matches, by up to 17% in absolute terms. We also see that, unlike the previous experiment where the models were provided with the human-labelled regions (Table 7), here at least the English→French MMT model with unsupervised visual attention obtains much better grounding scores than the other two language directions. Now that the models are given many more regions (36) than in the human-labelled scenario (∼4.3 on average), we consider these results more representative of the final grounding ability of these models. As expected, these scores are much lower than those in Table 7, since (i) there were far fewer bounding boxes per image in that case and (ii) the BUTD bounding boxes differ from the gold ones, meaning that even if the model focuses on a similar part of the image, the IoU score will be lower than 1.
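As a rough illustration of the linguistic-space metrics, the snippet below computes the exact match and GloVe cosine similarity between a predicted BUTD label and a ground-truth noun after lowercasing and NLTK lemmatisation. The glove lookup table and the zero score for out-of-vocabulary words are assumptions of this sketch.

```python
import numpy as np
from nltk.stem import WordNetLemmatizer   # assumes nltk.download('wordnet') has been run

def label_match_metrics(pred_label, gold_noun, glove):
    """Return (exact match, cosine similarity) between a predicted object label
    and a ground-truth noun. `glove` maps words to numpy vectors."""
    lem = WordNetLemmatizer()
    p = lem.lemmatize(pred_label.lower())
    g = lem.lemmatize(gold_noun.lower())
    em = float(p == g)
    if p in glove and g in glove:
        a, b = glove[p], glove[g]
        cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    else:
        cos = 0.0          # assumption: out-of-vocabulary words score zero
    return em, cos
```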
Visualising the Attention
Our main motivation for supervising the visual attention was to guide the model in learning cross-modal visual-linguistic alignments. After quantifying the models' ability to do so using metrics such as IoU and cosine similarity, we now visualise the impact of supervision on two real examples. Figure 7 shows that the MMT models with unsupervised visual attention (on the left) are already good at attending to relevant regions to some extent. However, there are still cases of misalignment: in the first example, none of the words in the phrase "raised balance beam" are aligned with the actual region showing the beam. Similar misalignments also happen in the second example, such as the word "baseball" getting aligned with the region "apple". The Fine-Tuned variants (on the right) not only fix these cases of misalignment but also provide much more confident attention distributions. This is particularly evident in the attention probabilities of nouns which, unlike connectives or verbs, have corresponding ground-truth bounding boxes.
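For completeness, a minimal way to produce such visualisations is sketched below: it renders a source-word by image-region attention matrix as a heatmap with matplotlib. The colour map, sizing and output path are arbitrary choices of this sketch.

```python
import matplotlib.pyplot as plt

def plot_cross_modal_attention(attn, src_words, region_labels, path="attn.png"):
    """Plot a (len(src_words) x len(region_labels)) attention matrix as a heatmap."""
    fig, ax = plt.subplots(figsize=(len(region_labels) * 0.6, len(src_words) * 0.4))
    ax.imshow(attn, aspect="auto", cmap="viridis")     # attn rows sum to 1 per word
    ax.set_xticks(range(len(region_labels)))
    ax.set_xticklabels(region_labels, rotation=90)
    ax.set_yticks(range(len(src_words)))
    ax.set_yticklabels(src_words)
    fig.tight_layout()
    fig.savefig(path)
```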
Translation Examples
Finally, we present qualitative translation examples from our simultaneous MT systems to highlight both the strengths and weaknesses of the models explored. Figure 8 shows an example for each language direction, using the wait-1 decoded NMT, MMT and supervised MMT systems. For the first example in Czech, we see that the NMT system hopelessly relies on dataset biases to always generate "Muž" (man) when the indefinite article "a" is the only observed word on the source side. Although the MMT with unsupervised visual attention and the supervision from scratch (Sup) both pick the correct translation "Žena" (woman), they somewhat hallucinate the continuation. In contrast, the fine-tuned supervision (Sup+) generates the correct translation without any errors. For the German example, all models struggle to translate "graduation ceremony" into German. Interestingly, they all pick words that refer to places, such as "concert" or "conference", rather than purely random lexical choices, hinting at the fact that the models may be relying more on visual information than on the language signal. In addition, NMT and the non-supervised MMT also commit an error when translating "a" into German, i.e. they both pick the masculine form "ein" instead of "eine". The supervised models are better at integrating visual features in this case, as they correctly pick "eine". Finally, the French example shows similar patterns to the German one: only the supervised models are able to pick the correct article form "une", and it is the fine-tuned variant that translates the whole sentence correctly. If we further look at examples from wait-3 simultaneous MT models and contrast the Scratch and Fine-Tuned variants (Figure 9), we observe that both variants are more or less equivalent when translating the early parts of the sentences, but the Scratch models commit critical errors overall, when the final translations are taken into account.

Src: a woman is playing volleyball.
Nmt: muž se dívá na volejbal. (man looks at volleyball.)
Mmt: žena v bílém oblečení hraje volejbal. (woman in white clothes playing volleyball.)
Sup: žena jde po ulici hraje volejbal. (woman goes down the street playing volleyball.)
Sup+: žena, která hraje volejbal. (woman playing volleyball.)

Src: a woman with long hair is at a graduation ceremony.
Nmt: ein frau mit langen haaren sitzt bei einem olympischen spielen bei einer konferenz. (a woman with long hair sitting at an olympic game at a conference.)
Mmt: ein frau mit langen haaren ist an einem partner-konzert. (a woman with a long hair is in a partner concert.)
Sup: eine frau mit langen haaren hält sich an einer konferenz vor. (a woman with a long hair is sitting in front of a conference.)
Sup+: eine frau mit langen haaren steht an einem lutscher-bahnhof. (a woman with a long hair stands at a German railway.)

Src: a woman is holding a small white statue.
Nmt: un femme est en train de tenir une petite statue blanche. (a woman is holding a small white statue.)
Mmt: un femme fait de la petite femme blanche. (a woman makes the little white woman.)
Sup: une femme fait un signe à un petit homme blanc. (a woman makes a sign to a small white man.)
Sup+: une femme est en train de tenir une petite statue blanche. (a woman is holding a small white statue.)

Figure 8: wait-1 translation examples from all model variants: Sup and Sup+ denote the Scratch and Fine-Tuned variants of attention supervision, respectively. Blue and red indicate correct and wrong lexical choices. "Google Translate" output for each non-English hypothesis is shown in grey.

Src: three dogs play with each other out in the field.
Sup: tři psi si spolu hrají ve sněhu se třemi psy. (three dogs playing together in the snow with three dogs.)
Sup+: tři psi si spolu hrají venku na poli. (three dogs playing together outside in the field.)

Src: a gray and white dog jumping over standing water in the sand.
Sup: ein grau-weißer hund steht im wasser und springt über das wasser. (a gray and white dog stands in the water and jumps over the water.)
Sup+: ein grau-weißer hund springt über das wasser im sand. (a gray and white dog leaps over the water in the sand.)

Src: a man is blowing into a plastic ball.
Sup: un homme porte des lunettes dans un ballon en plastique. (a man wears glasses in a plastic ball.)
Sup+: un homme est en train de souffler dans une balle en plastique. (a man is blowing into a plastic ball.)
Conclusions
In this paper, we proposed the first Transformer-based simultaneous multimodal MT model and extended it with an auxiliary supervision signal that guides the visual attention mechanism using ground-truth phrase-region alignments. We performed extensive experiments on the three language pairs of the Multi30k dataset and upon conducting thorough quantitative and qualitative analyses, we showed that (i) supervised visual attention consistently improves the translation quality of the MMT models, and (ii) fine-tuning the MMT with the supervision loss results in better performance than training the MMT from scratch.
Figure 4: Transformer-based multimodal encoder for simultaneous NMT: the upper stream encodes visual features, whereas the bottom one is the usual Transformer encoder for the input sentence. Dashed lines denote a collection of vectors.
Figure 7: Regional attention examples produced by the MMT model with unsupervised visual attention (left) and the Fine-Tuned variant (right): green check marks show cases where supervision fixes incorrect word and region alignments.
Figure 9: wait-3 translation examples comparing two variants of attention supervision: Sup and Sup+ denote the Scratch and Fine-Tuned variants, respectively. Blue and red indicate correct and wrong lexical choices. "Google Translate" output for each non-English hypothesis is shown in grey.
Table 2: METEOR comparison of consecutive training (Cons), wait-k training (Tr) and prefix training (Pref) approaches for unimodal simultaneous MT. All systems are decoded with the wait-k policy where k ∈ {1, 2, 3}. The scores are averages of three runs with different seeds. Best systems are indicated in bold typeface.
Table 4: METEOR comparison of unimodal and multimodal simultaneous MT under the prefix training framework: the scores are averages of three runs with different seeds. Best systems are indicated in bold typeface.
Table 5: The impact of attention supervision with respect to baseline NMT and MMT systems: METEOR scores are averages of three runs with different seeds. Best systems are indicated in bold typeface.
Table 6: Comparison of our best MMT system (fine-tuned supervised attention) with other state-of-the-art simultaneous MMT models on the test2016 set. Best systems are indicated in bold typeface. BL and MT denote BLEU and METEOR, respectively.
Table 7: Grounding ability of wait-1 simultaneous MMT systems using features from the ground-truth region annotations. The scores presented are intersection over unions (IoU) on the test2016 set, averaged across three runs.

                         En-Cs                    En-De                    En-Fr
                    IoU     Cos    Em        IoU     Cos    Em        IoU     Cos    Em
Mmt                23.1%   0.416   9.8%     20.6%   0.403   8.4%     28.5%   0.488  18.3%
+ Sup (Fine-Tuned) 42.5%   0.581  26.6%     42.0%   0.578  26.1%     41.4%   0.572  25.4%
+ Sup (Scratch)    44.3%   0.593  27.6%     44.3%   0.590  27.4%     44.4%   0.589  27.2%

Table 8: Grounding ability of wait-1 simultaneous MMT systems using the 36 BUTD region proposals: IoU, Cos and Em denote intersection-over-union, cosine similarity and exact match accuracy, respectively. The scores are obtained from the test2016 test set and averaged across three runs.
3. https://hub.docker.com/r/airsplay/bottom-up-attention
4. https://github.com/multi30k/dataset
6. For the cosine similarity approach, we ensure maximum correspondence by lemmatising both words using the NLTK toolkit (Bird & Loper, 2004).
Acknowledgments

This article follows from the MSc. thesis by Veneta Haralampieva, co-supervised by Lucia Specia and Ozan Caglayan. Lucia Specia and Ozan Caglayan were supported by the MultiMT project (H2020 ERC Starting Grant No. 678017). Lucia Specia also received support from the Air Force Office of Scientific Research (under award number FA8655-20-1-7006).
Prediction improves simultaneous neural machine translation. A Alinejad, M Siahbani, A Sarkar, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsAlinejad, A., Siahbani, M., & Sarkar, A. (2018). Prediction improves simultaneous neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3022-3027, Brussels, Belgium. Association for Computational Linguistics.
Bottom-up and top-down attention for image captioning and visual question answering. P Anderson, X He, C Buehler, D Teney, M Johnson, S Gould, L Zhang, CVPR. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2018). Bottom-up and top-down attention for image captioning and visual question answer- ing. In CVPR.
Monotonic infinite lookback attention for simultaneous machine translation. N Arivazhagan, C Cherry, W Macherey, C.-C Chiu, S Yavuz, R Pang, W Li, C Raffel, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsArivazhagan, N., Cherry, C., Macherey, W., Chiu, C.-C., Yavuz, S., Pang, R., Li, W., & Raffel, C. (2019). Monotonic infinite lookback attention for simultaneous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pp. 1313-1323.
Re-translation versus streaming for simultaneous translation. N Arivazhagan, C Cherry, W Macherey, G Foster, Proceedings of the 17th International Conference on Spoken Language Translation. the 17th International Conference on Spoken Language TranslationArivazhagan, N., Cherry, C., Macherey, W., & Foster, G. (2020). Re-translation versus streaming for simultaneous translation. In Proceedings of the 17th International Con- ference on Spoken Language Translation, pp. 220-227, Online. Association for Com- putational Linguistics.
Learning coupled policies for simultaneous machine translation using imitation learning. P Arthur, T Cohn, G Haffari, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeOnline. Association for Computational LinguisticsArthur, P., Cohn, T., & Haffari, G. (2021). Learning coupled policies for simultaneous machine translation using imitation learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pp. 2709-2719, Online. Association for Computational Linguistics.
. J L Ba, J R Kiros, G E Hinton, arXiv:1607.064501Layer normalization. arXiv preprintBa, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450, 1 (1).
Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, Proceedings of the 3rd International Conference on Learning Representations. the 3rd International Conference on Learning RepresentationsBahdanau, D., Cho, K., & Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations.
Real-time incremental speech-to-speech translation of dialogs. S Bangalore, V K Sridhar, P Kolan, L Golipour, A Jimenez, Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMontréal, CanadaAssociation for Computational LinguisticsBangalore, S., Rangarajan Sridhar, V. K., Kolan, P., Golipour, L., & Jimenez, A. (2012). Real-time incremental speech-to-speech translation of dialogs. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pp. 437-445, Montréal, Canada. Association for Computational Linguistics.
Findings of the third shared task on multimodal machine translation. L Barrault, F Bougares, L Specia, C Lala, D Elliott, S Frank, Proceedings of the Third Conference on Machine Translation. the Third Conference on Machine TranslationBelgium, BrusselsAssociation for Computational Linguistics2Barrault, L., Bougares, F., Specia, L., Lala, C., Elliott, D., & Frank, S. (2018). Findings of the third shared task on multimodal machine translation. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pp. 308-327, Belgium, Brussels. Association for Computational Linguistics.
NLTK: The natural language toolkit. S Bird, E Loper, Proceedings of the ACL Interactive Poster and Demonstration Sessions. the ACL Interactive Poster and Demonstration SessionsBarcelona, SpainAssociation for Computational LinguisticsBird, S., & Loper, E. (2004). NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pp. 214-217, Barcelona, Spain. Association for Computational Linguistics.
Verbmobil: The combination of deep and shallow processing for spontaneous speech translation. T Bub, W Wahlster, A Waibel, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE1Bub, T., Wahlster, W., & Waibel, A. (1997). Verbmobil: The combination of deep and shallow processing for spontaneous speech translation. In 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. 71-74. IEEE.
Multimodal Machine Translation. Theses. O Caglayan, Université du MaineCaglayan, O. (2019). Multimodal Machine Translation. Theses, Université du Maine.
LIUM-CVC Submissions for WMT17 Multimodal Translation Task. O Caglayan, W Aransa, A Bardet, M García-Martínez, F Bougares, L Barrault, M Masana, L Herranz, J Van De Weijer, Proceedings of the Second Conference on Machine Translation. the Second Conference on Machine TranslationCaglayan, O., Aransa, W., Bardet, A., García-Martínez, M., Bougares, F., Barrault, L., Masana, M., Herranz, L., & van de Weijer, J. (2017). LIUM-CVC Submissions for WMT17 Multimodal Translation Task. In Proceedings of the Second Conference on Machine Translation, pp. 432-439.
Does multimodality help human and machine for translation and image captioning. O Caglayan, W Aransa, Y Wang, M Masana, M García-Martínez, F Bougares, L Barrault, J Van De Weijer, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational Linguistics2Shared Task PapersCaglayan, O., Aransa, W., Wang, Y., Masana, M., García-Martínez, M., Bougares, F., Barrault, L., & van de Weijer, J. (2016). Does multimodality help human and machine for translation and image captioning?. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pp. 627-633, Berlin, Germany. Association for Computational Linguistics.
Simultaneous machine translation with visual context. O Caglayan, J Ive, V Haralampieva, P Madhyastha, L Barrault, L Specia, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsCaglayan, O., Ive, J., Haralampieva, V., Madhyastha, P., Barrault, L., & Specia, L. (2020a). Simultaneous machine translation with visual context. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2350-2361, Online. Association for Computational Linguistics.
Curious case of language generation evaluation metrics: A cautionary tale. O Caglayan, P Madhyastha, L Specia, Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, SpainInternational Committee on Computational LinguisticsCaglayan, O., Madhyastha, P., & Specia, L. (2020b). Curious case of language genera- tion evaluation metrics: A cautionary tale. In Proceedings of the 28th International Conference on Computational Linguistics, pp. 2322-2328, Barcelona, Spain (Online). International Committee on Computational Linguistics.
DCU-UvA multimodal MT system report. I Calixto, D Elliott, S Frank, Proceedings of the First Conference on Machine Translation. the First Conference on Machine Translation2Shared Task PapersCalixto, I., Elliott, D., & Frank, S. (2016). DCU-UvA multimodal MT system report. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pp. 634-638.
Incorporating global visual features into attention-based neural machine translation. I Calixto, Q Liu, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCalixto, I., & Liu, Q. (2017). Incorporating global visual features into attention-based neural machine translation.. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 992-1003.
Doubly-attentive decoder for multi-modal neural machine translation. I Calixto, Q Liu, N Campbell, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Calixto, I., Liu, Q., & Campbell, N. (2017). Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1913-1924.
X Chen, H Fang, T.-Y Lin, R Vedantam, S Gupta, P Dollár, C L Zitnick, arXiv:1504.00325Microsoft COCO captions: Data collection and evaluation server. 1arXiv preprintChen, X., Fang, H., Lin, T.-Y., Vedantam, R., Gupta, S., Dollár, P., & Zitnick, C. L. (2015). Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 1.
Can neural machine translation do simultaneous translation. K Cho, M Esipova, arXiv:1606.020121arXiv preprintCho, K., & Esipova, M. (2016). Can neural machine translation do simultaneous transla- tion?. arXiv preprint arXiv:1606.02012, 1.
Incremental decoding and training methods for simultaneous translation in neural machine translation. F Dalvi, N Durrani, H Sajjad, S Vogel, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics2Short PapersDalvi, F., Durrani, N., Sajjad, H., & Vogel, S. (2018). Incremental decoding and training methods for simultaneous translation in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 493-499, New Orleans, Louisiana. Association for Computational Linguistics.
Modulating and attending the source image during encoding improves multimodal translation. J.-B Delbrouck, S Dupont, arXiv:1712.034491arXiv preprintDelbrouck, J.-B., & Dupont, S. (2017). Modulating and attending the source image during encoding improves multimodal translation. arXiv preprint arXiv:1712.03449, 1 (1).
Imagenet: A largescale hierarchical image database. J Deng, W Dong, R Socher, L.-J Li, K Li, L Fei-Fei, 2009 IEEE conference on computer vision and pattern recognition. IeeeDeng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large- scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. Ieee.
Meteor universal: Language specific translation evaluation for any target language. M Denkowski, A Lavie, Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationAssociation for Computational LinguisticsDenkowski, M., & Lavie, A. (2014). Meteor universal: Language specific translation eval- uation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 376-380. Association for Computational Linguistics.
Efficient Wait-k Models for Simultaneous Machine Translation. M Elbayad, L Besacier, J Verbeek, Proc. Interspeech 2020. Interspeech 2020Elbayad, M., Besacier, L., & Verbeek, J. (2020). Efficient Wait-k Models for Simultaneous Machine Translation. In Proc. Interspeech 2020, pp. 1461-1465.
Findings of the second shared task on multimodal machine translation and multilingual image description. D Elliott, S Frank, L Barrault, F Bougares, L Specia, Proceedings of the Second Conference on Machine Translation. the Second Conference on Machine TranslationCopenhagen, DenmarkAssociation for Computational Linguistics2Elliott, D., Frank, S., Barrault, L., Bougares, F., & Specia, L. (2017). Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, pp. 215-233, Copenhagen, Denmark. Association for Computational Linguistics.
Multi30K: Multilingual English-German image descriptions. D Elliott, S Frank, K Sima'an, L Specia, Proceedings of the 5th Workshop on Vision and Language. the 5th Workshop on Vision and LanguageBerlin, GermanyAssociation for Computational LinguisticsElliott, D., Frank, S., Sima'an, K., & Specia, L. (2016). Multi30K: Multilingual English- German image descriptions. In Proceedings of the 5th Workshop on Vision and Lan- guage, pp. 70-74, Berlin, Germany. Association for Computational Linguistics.
Asian Federation of Natural Language Processing. D Elliott, Á Kádár, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingTaipei, TaiwanLong Papers1Imagination improves multimodal translationElliott, D., & Kádár,Á. (2017). Imagination improves multimodal translation. In Proceed- ings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 130-141, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Jointly learning to align and translate with transformer models. S Garg, S Peitz, U Nallasamy, M Paulik, Conference on Empirical Methods in Natural Language Processing. Hong KongGarg, S., Peitz, S., Nallasamy, U., & Paulik, M. (2019). Jointly learning to align and translate with transformer models. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Hong Kong.
Learning to translate in real-time with neural machine translation. J Gu, G Neubig, K Cho, V O Li, Proceedings of the 15th Conference of the European Chapter. the 15th Conference of the European ChapterValencia, SpainAssociation for Computational Linguistics1Gu, J., Neubig, G., Cho, K., & Li, V. O. (2017). Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp. 1053-1062, Valencia, Spain. Association for Computational Linguistics.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionHe, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778.
Towards multimodal simultaneous neural machine translation. A Imankulova, M Kaneko, T Hirasawa, M Komachi, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationOnline. Association for Computational LinguisticsImankulova, A., Kaneko, M., Hirasawa, T., & Komachi, M. (2020). Towards multimodal simultaneous neural machine translation. In Proceedings of the Fifth Conference on Machine Translation, pp. 594-603, Online. Association for Computational Linguistics.
Exploiting multimodal reinforcement learning for simultaneous machine translation. J Ive, A M Li, Y Miao, O Caglayan, P Madhyastha, L Specia, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeIve, J., Li, A. M., Miao, Y., Caglayan, O., Madhyastha, P., & Specia, L. (2021). Exploiting multimodal reinforcement learning for simultaneous machine translation. In Proceed- ings of the 16th Conference of the European Chapter of the Association for Compu- tational Linguistics: Main Volume, pp. 3222-3233, Online. Association for Computa- tional Linguistics.
Adam: A method for stochastic optimization. D P Kingma, J Ba, arXiv:1412.69801arXiv preprintKingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 1.
Moses: Open source toolkit for statistical machine translation. P Koehn, H Hoang, A Birch, C Callison-Burch, M Federico, N Bertoldi, B Cowan, W Shen, C Moran, R Zens, C Dyer, O Bojar, A Constantin, E Herbst, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume the Demo and Poster SessionsPrague, Czech RepublicAssociation for Computational LinguisticsKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., & Herbst, E. (2007). Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Com- panion Volume Proceedings of the Demo and Poster Sessions, pp. 177-180, Prague, Czech Republic. Association for Computational Linguistics.
Visual genome: Connecting language and vision using crowdsourced dense image annotations. R Krishna, Y Zhu, O Groth, J Johnson, K Hata, J Kravitz, S Chen, Y Kalantidis, L.-J Li, D A Shamma, International journal of computer vision. 1231Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., et al. (2017). Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123 (1), 32-73.
Attention strategies for multi-source sequence-to-sequence learning. J Libovickỳ, J Helcl, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsShort Papers2Libovickỳ, J., & Helcl, J. (2017). Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pp. 196-202.
Input combination strategies for multi-source transformer decoder. J Libovickỳ, J Helcl, D Mareček, Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersLibovickỳ, J., Helcl, J., & Mareček, D. (2018). Input combination strategies for multi-source transformer decoder. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 253-260.
Neural machine translation with supervised attention. L Liu, M Utiyama, A Finch, E Sumita, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanThe COLING 2016 Organizing CommitteeLiu, L., Utiyama, M., Finch, A., & Sumita, E. (2016). Neural machine translation with supervised attention. In Proceedings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Technical Papers, pp. 3093-3102, Osaka, Japan. The COLING 2016 Organizing Committee.
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. J Lu, D Batra, D Parikh, S Lee, Advances in Neural Information Processing Systems. Lu, J., Batra, D., Parikh, D., & Lee, S. (2019). Vilbert: Pretraining task-agnostic visiolin- guistic representations for vision-and-language tasks. In Advances in Neural Informa- tion Processing Systems, pp. 13-23.
Hierarchical question-image co-attention for visual question answering. J Lu, J Yang, D Batra, D Parikh, Advances in neural information processing systems. Lu, J., Yang, J., Batra, D., & Parikh, D. (2016). Hierarchical question-image co-attention for visual question answering. In Advances in neural information processing systems, pp. 289-297.
STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework. M Ma, L Huang, H Xiong, R Zheng, K Liu, B Zheng, C Zhang, Z He, H Liu, X Li, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsMa, M., Huang, L., Xiong, H., Zheng, R., Liu, K., Zheng, B., Zhang, C., He, Z., Liu, H., Li, X., et al. (2019). STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3025-3036.
Monotonic multihead attention. X Ma, J M Pino, J Cross, L Puzon, J Gu, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020Ma, X., Pino, J. M., Cross, J., Puzon, L., & Gu, J. (2020). Monotonic multihead attention. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
Supervised attentions for neural machine translation. H Mi, Z Wang, A Ittycheriah, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsMi, H., Wang, Z., & Ittycheriah, A. (2016). Supervised attentions for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2283-2288, Austin, Texas. Association for Computational Linguistics.
Low-latency neural speech translation. J Niehues, N.-Q Pham, T.-L Ha, M Sperber, A Waibel, Proc. Interspeech. InterspeechNiehues, J., Pham, N.-Q., Ha, T.-L., Sperber, M., & Waibel, A. (2018). Low-latency neural speech translation. In Proc. Interspeech 2018, pp. 1293-1297.
Supervised visual attention for multimodal neural machine translation. T Nishihara, A Tamura, T Ninomiya, Y Omote, H Nakayama, Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, SpainInternational Committee on Computational LinguisticsNishihara, T., Tamura, A., Ninomiya, T., Omote, Y., & Nakayama, H. (2020). Supervised visual attention for multimodal neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pp. 4304-4314, Barcelona, Spain (Online). International Committee on Computational Linguistics.
BLEU: a method for automatic evaluation of machine translation. K Papineni, S Roukos, T Ward, W.-J Zhu, Proceedings of the 40th annual meeting on association for computational linguistics. the 40th annual meeting on association for computational linguisticsAssociation for Computational LinguisticsPapineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311-318. Association for Computational Linguistics.
R Pascanu, C Gulcehre, K Cho, Y Bengio, How to construct deep recurrent neural networks: Proceedings of the second international conference on learning representations (iclr. 2nd International Conference on Learning RepresentationsPascanu, R., Gulcehre, C., Cho, K., & Bengio, Y. (2014). How to construct deep recurrent neural networks: Proceedings of the second international conference on learning repre- sentations (iclr 2014). In 2nd International Conference on Learning Representations, ICLR 2014.
GloVe: Global vectors for word representation. J Pennington, R Socher, C Manning, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsPennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word repre- sentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, Doha, Qatar. Association for Com- putational Linguistics.
Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models. B A Plummer, L Wang, C M Cervantes, J C Caicedo, J Hockenmaier, S Lazebnik, 2015 IEEE International Conference on Computer Vision (ICCV). Plummer, B. A., Wang, L., Cervantes, C. M., Caicedo, J. C., Hockenmaier, J., & Lazebnik, S. (2015). Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2641-2649.
Using the output embedding to improve language models. O Press, L Wolf, Proceedings of the 15th Conference of the European Chapter. the 15th Conference of the European Chapter2Short PapersPress, O., & Wolf, L. (2017). Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 157-163.
Faster r-cnn: Towards real-time object detection with region proposal networks. S Ren, K He, R Girshick, J Sun, Advances in neural information processing systems. Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91-99.
Grounding of textual phrases in images by reconstruction. A Rohrbach, M Rohrbach, R Hu, T Darrell, B Schiele, European Conference on Computer Vision. SpringerRohrbach, A., Rohrbach, M., Hu, R., Darrell, T., & Schiele, B. (2016). Grounding of textual phrases in images by reconstruction. In European Conference on Computer Vision, pp. 817-834. Springer.
Simultaneous English-Japanese spoken language translation based on incremental dependency parsing and transfer. K Ryu, S Matsubara, Y Inagaki, Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. the COLING/ACL 2006 Main Conference Poster SessionsSydney, AustraliaAssociation for Computational LinguisticsRyu, K., Matsubara, S., & Inagaki, Y. (2006). Simultaneous English-Japanese spoken lan- guage translation based on incremental dependency parsing and transfer. In Pro- ceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pp. 683-690, Sydney, Australia. Association for Computational Linguistics.
Simultaneous machine translation using deep reinforcement learning. H Satija, J Pineau, ICML 2016 Workshop on Abstraction in Reinforcement Learning. Satija, H., & Pineau, J. (2016). Simultaneous machine translation using deep reinforcement learning. In ICML 2016 Workshop on Abstraction in Reinforcement Learning.
A shared task on multimodal machine translation and crosslingual image description. L Specia, S Frank, K Sima'an, D Elliott, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational LinguisticsSpecia, L., Frank, S., Sima'an, K., & Elliott, D. (2016). A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation, pp. 543-553, Berlin, Germany. Association for Computational Linguistics.
Read, spot and translate. Machine Translation. L Specia, J Wang, Jae Lee, S Ostapenko, A Madhyastha, P , 35Specia, L., Wang, J., Jae Lee, S., Ostapenko, A., & Madhyastha, P. (2021). Read, spot and translate. Machine Translation, 35 (1), 145-165.
Multimodal machine translation through visuals and speech. Machine Translation. U Sulubacak, O Caglayan, S.-A Grönroos, A Rouhe, D Elliott, L Specia, J Tiedemann, 34Sulubacak, U., Caglayan, O., Grönroos, S.-A., Rouhe, A., Elliott, D., Specia, L., & Tiede- mann, J. (2020). Multimodal machine translation through visuals and speech. Ma- chine Translation, 34 (2), 97-147.
Sequence to sequence learning with neural networks. I Sutskever, O Vinyals, Q V Le, Advances in neural information processing systems. Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112.
Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, J Shlens, Z Wojna, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionSzegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826.
LXMERT: Learning cross-modality encoder representations from transformers. H Tan, M Bansal, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsTan, H., & Bansal, M. (2019). LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5100-5111, Hong Kong, China. Associ- ation for Computational Linguistics.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, Advances in neural information processing systems. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008.
Phrase localization without paired training examples. J Wang, L Specia, Proceedings of the IEEE/CVF Internaitonal Conference on Computer Vision (ICCV). the IEEE/CVF Internaitonal Conference on Computer Vision (ICCV)Seoul, South KoreaIEEEWang, J., & Specia, L. (2019). Phrase localization without paired training examples. In Proceedings of the IEEE/CVF Internaitonal Conference on Computer Vision (ICCV), Seoul, South Korea. IEEE.
Learning deep transformer models for machine translation. Q Wang, B Li, T Xiao, J Zhu, C Li, D F Wong, L S Chao, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsWang, Q., Li, B., Xiao, T., Zhu, J., Li, C., Wong, D. F., & Chao, L. S. (2019). Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1810-1822, Florence, Italy. Association for Computational Linguistics.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. R J Williams, Machine learning. 83-4Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8 (3-4), 229-256.
From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. P Young, A Lai, M Hodosh, J Hockenmaier, Transactions of the Association for Computational Linguistics. 2Young, P., Lai, A., Hodosh, M., & Hockenmaier, J. (2014). From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2, 67-78.
Simpler and faster learning of adaptive policies for simultaneous translation. B Zheng, R Zheng, M Ma, L Huang, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingZheng, B., Zheng, R., Ma, M., & Huang, L. (2019). Simpler and faster learning of adap- tive policies for simultaneous translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1349-1354.
A visual attention grounding neural model for multimodal machine translation. M Zhou, R Cheng, Y J Lee, Z Yu, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsZhou, M., Cheng, R., Lee, Y. J., & Yu, Z. (2018). A visual attention grounding neural model for multimodal machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3643-3653, Brussels, Belgium. Association for Computational Linguistics.
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
Krishan Rajaratnam krajaratnam@uchicago.edu
Department of Computer Science
The College University of Chicago Chicago
USA
Jugal Kalita jkalita@uccs.edu
University of Colorado Colorado Springs
USA
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
Index Terms-adversarial example detection, speech recognition, deep learning
Neural models enjoy widespread use across a variety of tasks and have grown to become crucial components of many industrial systems. Despite their effectiveness and extensive popularity, they are not without their exploitable flaws. Initially applied to computer vision systems, the generation of adversarial examples is a process in which seemingly imperceptible perturbations are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. Due to recent trends in speech processing, this has become a noticeable issue in speech recognition models. In late 2017, an attack was shown to be quite effective against the Speech Commands classification model. Limited-vocabulary speech classifiers, such as the Speech Commands model, are used quite frequently in a variety of applications, particularly in managing automated attendants in telephony contexts. As such, adversarial examples produced by this attack could have real-world consequences. While previous work in defending against these adversarial examples has investigated using audio preprocessing to reduce or distort adversarial noise, this work explores the idea of flooding particular frequency bands of an audio signal with random noise in order to detect adversarial examples. This technique of flooding, which does not require retraining or modifying the model, is inspired by work done in computer vision and builds on the idea that speech classifiers are relatively robust to natural noise. A combined defense incorporating 5 different frequency bands for flooding the signal with noise outperformed other existing defenses in the audio space, detecting adversarial examples with 91.8% precision and 93.5% recall.
I. INTRODUCTION
The growing use of deep learning models necessitates that those models be accurate, robust, and secure. However, these models are not without abusable defects. Initially applied to computer vision systems [1], the generation of adversarial examples (loosely depicted in Fig. 1) is a process in which seemingly imperceptible changes are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. The effectiveness of such attacks is quite high, often resulting in misclassification rates of above 90% in image classifiers [2]. Due to the exploitative nature of these attacks, it can be difficult to defend against adversarial examples while maintaining general accuracy.
This work is supported by the National Science Foundation under Grant No. 1659788.
Fig. 1. A graphic depicting a targeted adversarial attack from "yes" (the source) to "no" (the target). A malicious attacker can add a small amount of adversarial perturbation to a signal such that it is classified by a model as some target class while a human still primarily hears the source class.
The generation of adversarial examples is not just limited to image recognition. Although speech recognition traditionally relied heavily on hidden Markov models and various signal processing techniques, the gradual growth of computer hardware capabilities and available data has enabled end-to-end neural models to become more popular and even state of the art. As such, speech recognizers that rely heavily on deep learning models are susceptible to adversarial attacks. Recent work has been done on the generation of targeted adversarial examples against a convolutional neural network trained on the widely used Speech Commands dataset [3] and against Mozilla's implementation of the DeepSpeech end-to-end model [4], in both cases generating highly potent and effective adversarial examples that were able to achieve up to a 100% misclassification rate. Due to this trend, the reliability of deep learning models for automatic speech recognition is compromised; there is an urgent need for adequate defense against adversarial examples.
II. RELATED WORK
The attack against Speech Commands described by Alzantot et al. [3] is particularly relevant within the realm of telephony, as it could be adapted to fool limited-vocabulary speech classifiers used for automated attendants. This attack produces adversarial examples using a gradient-free genetic algorithm, allowing the attack to penetrate the non-differentiable layers of preprocessing typically used in automatic speech recognition.
A. Audio Preprocessing Defenses
As adversarial examples are generated by adding adversarial noise to a natural input, certain methods of preprocessing can serve to remove or distort the adversarial noise to mitigate the attack.
Recent work in computer vision has shown that some preprocessing, such as JPEG and JPEG2000 image compression [5] or cropping and resizing [6], can be employed with a certain degree of success in defending against adversarial attacks. In a similar vein, preprocessing defenses have also been used for defending against adversarial attacks on speech recognition. Yang et al. [7] were able to achieve some success using local smoothing, down-sampling, and quantization for disrupting adversarial examples produced by the attack of Alzantot et al. While quantizing with q = 256, Yang et al. were able to achieve their best result of correctly recovering the original label of 63.8% of the adversarial examples, with a low cost to general model accuracy. As quantization causes the amplitudes of sampled data to be rounded to the closest integer multiple of q, adversarial perturbations with small amplitudes can be disrupted.
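To make the quantization idea above concrete, the following Python sketch (not the implementation used by Yang et al.; it simply assumes 16-bit PCM samples held in a NumPy array and a quantization level q) rounds every sample to the nearest integer multiple of q, so perturbations smaller than roughly q/2 are absorbed by the rounding.

import numpy as np

def quantize(samples, q=256):
    # Round each audio sample to the nearest integer multiple of q;
    # adversarial perturbations with amplitude below ~q/2 are removed.
    return (np.round(samples.astype(np.int64) / q) * q).astype(samples.dtype)

# Illustrative usage on a synthetic 1-second, 16 kHz signal.
clean = np.random.randint(-2000, 2000, size=16000, dtype=np.int16)
perturbed = clean + np.random.randint(-50, 50, size=16000).astype(np.int16)
recovered = quantize(perturbed)   # most small perturbations are rounded away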
Work has also been done in employing audio compression, band-pass filtering, audio panning, and speech coding to detect the examples of Alzantot et al. Rajaratnam et al. [8] explored using these forms of preprocessing as a part of both isolated and ensemble methods for detecting adversarial examples. The discussed isolated preprocessing methods are quite simple; they merely check to see if the prediction yielded by the model is changed by applying preprocessing to the input. Despite this simplicity, Rajaratnam et al. achieved their best result of detecting adversarial examples with 93.5% precision and 91.2% recall using a "Learned Threshold Voting" (LTV) ensemble: a discrete voting ensemble composed of all of the isolated preprocessing methods that learns an optimal threshold for the number of votes needed to declare an audio sample as adversarial. They achieved a higher F1 score for detecting adversarial examples using this voting ensemble when compared to more sophisticated techniques for combining the methods of preprocessing into an ensemble.
B. Pixel Deflection
While the aforementioned defenses focus on removing or distorting adversarial noise, one could also defend against an adversarial example by adding noise to the signal. Artificial neural network (ANN) classifiers are relatively robust to natural noise, whereas adversarial examples are less so. Prakash et al. [9] used this observation and proposed a procedure for defending against adversarial images that involves corrupting localized regions of the image through the redistribution of pixel values. This procedure, which they refer to as "pixel deflection," was shown to be very effective for retrieving the true class of an adversarial attack. The strategy of defense proposed by Prakash et al. is more sophisticated than merely corrupting images by indiscriminately redistributing pixels; they target specific pixels of the image to deflect and also perform a subsequent wavelet-based denoising procedure for softening the corruption's impact on benign inputs. Regardless of the many aspects of the pixel deflection defense that seem to only be directly applicable to defenses within computer vision, the fundamental motivating idea behind this strategy-that ANN classifiers are robust to natural noise on benign inputs relative to adversarial inputs-is an observation that should also hold true for audio classification.
III. METHODS AND EVALUATION
Based on the observation of model robustness to natural noise, it should generally take less noise to change the model's prediction class of an adversarial example than it would to change that of a benign example. One could detect adversarial examples by observing how much noise needs to be added to the signal before the prediction that the model yields changes. Additionally, adversarial noise in audio is not localized to any particular frequency band, whereas much of the information associated with human speech is concentrated along the lower frequencies. As such, flooding particular frequency bands with random noise can be useful for detecting adversarial examples.
The aim of this research can be divided into two parts: testing the effectiveness of simple noise flooding (i.e. flooding the signal with randomly generated noise distributed along a particular band of frequency) for detecting audio adversarial examples, and combining multiple simple noise flooders that target different frequency bands together into an ensemble defense. The adversarial examples are produced using the gradient-free attack of Alzantot et al., against the same pre-trained Speech Commands model [3].
A. Speech Commands Dataset and Model
The Speech Commands dataset was first released in 2017 and contains 105,829 labeled utterances of 32 words from 2,618 speakers [10]. This audio is stored in the Waveform Audio File Format (WAV) and was recorded with a sample rate of 16 kHz. The Speech Commands model is a lightweight model based on a keyword spotting convolutional neural network (CNN) [11] that achieves a 90% classification accuracy on this dataset. For the purposes of this research, a subset of only 30,799 labeled utterances of 10 words is used, for consistency with previous work regarding the adversarial examples of Alzantot et al. From this subset, 20 adversarial examples are generated for each nontrivial source-target word pair, for a total of 1,800 examples. Each example is generated by implementing the attack with a maximum of 500 iterations through the genetic algorithm. Of these 1,800 generated examples, 128 are classified correctly (i.e. with the original source class) by the model. As such, only the remaining 1,672 examples (that are successful in fooling the model on some level) are used in this research.
B. Simple Noise Flooding
This method for detecting adversarial examples involves calculating a score (that we term "flooding score") from an audio signal that represents how much random noise needs to "flood" the signal in order to change the model's prediction class of the audio signal. By calculating the flooding scores of the adversarial and benign examples in the training dataset, an ideal threshold score of maximum information gain can be found; test examples that have a flooding score less than the threshold are declared adversarial.
1) Flooding Score Calculation: Every audio signal can be represented as an array of n audio samples along with a sample rate. A straightforward method of noising an audio signal with a noise limit ε is by generating an array of n random integers between −ε and ε and adding this array to the original array of n audio samples. The simple noise flooding defense noises audio in a similar manner, except the n random integers are passed through a band-pass filter before being added to the original array so that the added noise will be concentrated along a particular frequency band. The smallest ε found that induces a model prediction change between the noised signal and the original audio signal is used as a flooding score for determining whether the original signal is an adversarial example. The procedure for calculating the flooding score of an audio signal is detailed in Algorithm 1.

Algorithm 1: Noise flooding score
1: Input: Audio signal x, model m, step size s, maximum noise level ε_max, frequency band b
2: Output: Noise flooding score ε
3: n ← number of samples in x
4: pred_orig ← classification of x using m
5: pred ← pred_orig
6: ε ← 0
7: while pred = pred_orig and ε < ε_max do
8:   ε ← ε + s
9:   noise ← array of n random integers between −ε and ε
10:  noise ← noise passed through a band-pass filter for band b
11:  pred ← classification of x + noise using m
12: return ε

This procedure will make no more than ε_max/s calls to the model when calculating the flooding score of an audio signal. As such, there is an inherent trade-off that comes with the choice of the step size parameter s; a large step size would generally cause the algorithm to terminate quickly with a less precise score, whereas a small step size would result in a more precise score but at a higher computational cost. In this research, a step size of 50 was used, though in practice this parameter could be tuned to suit particular scenarios. A similar trade-off is implicit with the choice of the ε_max parameter.
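A minimal Python sketch of the flooding-score procedure follows. It is not the authors' released code: the classify callable stands in for the pre-trained Speech Commands model, the Butterworth band-pass filter (via SciPy) and its order are our own assumptions, the default eps_max is an arbitrary placeholder, and band edges must lie strictly between 0 Hz and the Nyquist frequency for the filter design to be valid.

import numpy as np
from scipy.signal import butter, sosfilt

def flooding_score(x, classify, step=50, eps_max=2000, band=None, fs=16000):
    # Returns the smallest noise level eps that flips the model's prediction.
    # classify: callable mapping an audio array to a predicted class label.
    # band: (low_hz, high_hz) tuple, or None for unfiltered noise flooding.
    n = len(x)
    pred_orig = classify(x)
    pred, eps = pred_orig, 0
    while pred == pred_orig and eps < eps_max:
        eps += step
        noise = np.random.randint(-eps, eps + 1, size=n).astype(float)
        if band is not None:
            # Concentrate the noise in one frequency band (Algorithm 1, line 10).
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            noise = sosfilt(sos, noise)
        pred = classify(x + noise.astype(x.dtype))
    return eps   # a low score suggests an adversarial example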
2) Frequency Bands for Testing: The simple noise flooding procedure can be tested using various bands of frequency for concentrating noise. Considering that the sample rate of files within the Speech Commands dataset is 16 kHz, the Nyquist frequency [12] of this system is 8000 Hz. Considering that the 0-8000 Hz frequency range can be divided into 4 bands of equal width, we are left with the following 5 variations of simple noise flooding for testing:
• Unfiltered Noise Flooding,
• 0-2000 Hz Noise Flooding,
• 2000-4000 Hz Noise Flooding,
• 4000-6000 Hz Noise Flooding, and
• 6000-8000 Hz Noise Flooding.
It is worth noting that for unfiltered noise flooding, the noise array is not passed through any band-pass filter. As such, the frequency band parameter b (and, along with it, line 10 of Algorithm 1) is unused for calculating an unfiltered noise flooding score.
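Reusing the flooding_score sketch above (with x and classify assumed to be the audio signal and model wrapper), the five variations can be expressed as a mapping from band names to filter edges; the exact edge values, e.g. 20 Hz instead of 0 Hz and 7900 Hz instead of 8000 Hz, are our own adjustments so the band-pass design stays strictly inside (0, Nyquist).

bands = {
    "unfiltered": None,
    "0-2000 Hz": (20, 2000),
    "2000-4000 Hz": (2000, 4000),
    "4000-6000 Hz": (4000, 6000),
    "6000-8000 Hz": (6000, 7900),
}
scores = {name: flooding_score(x, classify, band=b) for name, b in bands.items()}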
C. Ensemble Defense
While the above variations of the simple noise flooding defense may be somewhat effective for detecting adversarial examples in isolation, a more robust defense would be to combine the variations into an ensemble. As flooding scores calculated for each band may contain unique information that could be valuable for detecting adversarial examples, a defense that incorporates different varieties of flooding scores should be more effective. The flooding scores can be combined in a variety of configurations.
1) Majority Voting: A somewhat naive, yet direct, approach for combining the simple noise flooding variations is to use a discrete voting ensemble: for every audio signal passed, perform each of the 5 variations of simple noise flooding and tally up the adversarial "votes" each of the methods yield. If there are 3 or more (i.e. a majority) adversarial votes, the signal is declared adversarial.
2) Learned Threshold Voting: This ensemble technique is identical to the homonymous method described in [8]. Although the majority voting technique requires 3 adversarial votes (i.e. a majority) for an adversarial declaration, this voting threshold is arbitrary. The learned threshold voting technique assesses the performance of voting ensembles using all possible voting thresholds on a training dataset, and chooses the threshold that yielded the best performance. For quantifying performance, F1 scores are used, though one could adjust this F-measure to accommodate one's outlook on the relative importances of recall and precision.
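One possible reading of the learned-threshold step is sketched below in Python using scikit-learn's f1_score; the votes matrix is assumed to hold one 0/1 adversarial vote per flooding variant per training example, and labels marks which examples are actually adversarial.

import numpy as np
from sklearn.metrics import f1_score

def learn_vote_threshold(votes, labels):
    # votes:  (n_examples, n_voters) binary adversarial votes.
    # labels: (n_examples,) with 1 = adversarial, 0 = benign.
    counts = votes.sum(axis=1)
    best_t, best_f1 = 1, -1.0
    for t in range(1, votes.shape[1] + 1):
        f1 = f1_score(labels, (counts >= t).astype(int))
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t   # vote-count threshold with the best training F1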
3) Tree-Based Classification Algorithms: The previous ensemble techniques do not discriminate between voters in the ensemble; every vote is considered equal. Considering that human speech information is not distributed evenly among the frequency bands used in the noise flooding ensemble (most human speech information would be distributed along the 0-2000 Hz band), it may be somewhat callow to treat each member of the ensemble equally.
Decision tree-based classification algorithms generally perform well in classifying vectors of features into discrete classes. To avoid discarding information, one could calculate the simple flooding score yielded by each member of the ensemble, concatenate these scores into a 5-dimensional flooding score vector, and train a tree-based classification algorithm to detect adversarial examples from its flooding score vector. In this work, 3 tree-based classification algorithms will be used, due to their high performance on a variety of discrete classification tasks (a brief sketch follows the list below):
• Adaptive Boosting (AdaBoost) [13], • Random Forest Classification [14], and • Extreme Gradient Boosting (XGBoost) [15].
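As a rough illustration (with randomly generated placeholder data rather than the real flooding scores), a scikit-learn random forest can be fit on the 5-dimensional flooding score vectors; an AdaBoostClassifier or an XGBoost model could be substituted in the same way.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: each row is a 5-dimensional flooding score
# vector (one score per flooding variant); 1 = adversarial, 0 = benign.
X_train = np.random.uniform(0, 2000, size=(200, 5))
y_train = np.random.randint(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
label = clf.predict([[150, 100, 450, 800, 1200]])   # 1 would flag an adversarial input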
D. Evaluation
All of the previously mentioned detection methods are evaluated based on their precisions and recalls in detecting adversarial examples from a test set of 856 adversarial examples and 900 benign examples; the remaining 816 adversarial examples and an additional 900 benign examples are used to calculate flooding scores for training. When applying defenses against adversarial examples, an implied tradeoff between the general usability and security of the model seems to arise. From a security standpoint, it is extremely important to have a high recall in detecting adversarial examples, whereas for the sake of general usability, there should be a high precision when declaring a potentially benign input as adversarial. This research takes the stance that both general usability and security are equally important. As such, F1 scores are used when evaluating the defenses in order to equally balance precision and recall.
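For reference, the F-measure trade-off described above can be computed directly; beta is the usual knob (beta > 1 weights recall, i.e. security, more heavily), and the example numbers are simply the XGBoost figures reported in Table II.

def f_beta(precision, recall, beta=1.0):
    # F-measure; beta > 1 favours recall (security), beta < 1 favours precision.
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.918, 0.935), 3))   # ~0.926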
IV. RESULTS
The precisions, recalls, and F1 scores are evaluated for each of the simple noise flooding defenses in addition to the two best isolated preprocessing defenses from [8] (i.e. the two isolated defenses with the highest F1 scores) and are shown in Table I. From the results, one can see that the simple noise flooding defenses are all able to achieve higher recalls than the isolated preprocessing detection methods proposed by Rajaratnam et al. in [8].
Additionally, the simple noise flooding methods that target lower frequency bands performed better than those that targeted higher frequency bands. This frequency-based disparity in performance follows reasonably from the fact that human speech information is concentrated in the lower frequencies. While the unfiltered noise flooding method achieved the highest F1 score, the 0-2000 Hz Noise Flooding defense achieved a higher recall in detecting adversarial examples.
The results of the ensemble noise flooding defenses in addition to the two best ensemble preprocessing defenses from [8] are summarized in Table II. Most of the ensemble techniques achieve higher F1 scores than any of the individual simple flooding defenses. Understandably, the somewhat naive noise flooding majority voting ensemble yielded the lowest F1 score of all the ensemble techniques. The noise flooding learned threshold voting ensemble improves on the majority voting ensemble by learning a new voting threshold of 4 (as opposed to 3, which is used in the majority voting ensemble). This higher threshold results in a lower recall in detecting adversarial examples, but results in a markedly higher precision in order to achieve an overall higher F1 score.
As expected, the tree-based classification algorithms were the most effective for combining the simple noise flooding methods together, as they were able to learn an optimal method for discriminating between the members of the ensemble while the voting ensembles implicitly treated each voter equally. The adaptive boosting ensemble achieved a higher recall than any of the other ensemble noise flooding defenses, whereas the extreme gradient boosting ensemble achieved the highest F1 score of any detection method. The recall measurements for detecting adversarial examples using the noise flooding extreme gradient boosting ensemble are detailed in Fig. 2.
V. CONCLUSION AND FUTURE WORK
Although the results suggest that an ensemble noise flooding defense is effective in defending against adversarial examples produced by the unmodified algorithm of Alzantot et al., it does not necessarily show that this defense is secure against more complex attacks. While an ensemble defense may provide marginal security over the simple noise flooding methods in isolation, recent work has shown adaptive attacks on image classifiers are able to bypass ensembles of weak defenses [16]; this work could be applied to attack speech recognition models. Future work can be done to adapt noise flooding into a stronger defense that can withstand these types of adaptive adversarial examples, or at least cause the attacks to become more perceptible.
Additionally, this paper only discusses flooding signals with random noise that is effectively sampled from a uniform distribution. Future work can be done in exploring other techniques for producing the noise, perhaps by sampling from a more sophisticated probability distribution or deflecting individual samples.
While the noise flooding techniques were able to yield high recalls and overall F1 scores for detecting adversarial examples, many of the preprocessing-based defenses described in [8] yielded higher precisions. This suggests that a defense that combines aspects of those defenses with noise flooding may be quite effective in detecting adversarial examples.
Prakash et al. [9] softened the effect that their pixel deflection defense had on benign inputs by applying a denoising technique after locally corrupting the images. Perhaps a denoising technique could be applied after noise flooding to produce a more sophisticated defense that would yield a higher precision.
Future work could also be done in adapting noise flooding into a defense that can restore the original label of adversarial examples, rather than simply detecting adversarial examples. This paper proposed the idea of noise flooding for defending against audio adversarial examples and showed that fairly simple flooding defenses are quite effective in detecting the single-word targeted adversarial examples of Alzantot et al. This paper also showed that simple noise flooding defenses can be effectively combined together into an ensemble for a stronger defense. While these defenses may not be extremely secure against more adaptive attacks, this research aimed ultimately to further discussion of defenses against adversarial examples within the audio domain: a field in desperate need of more literature.
Fig. 2. A heat map depicting recall values (as percentages) for detecting audio adversarial examples using the noise flooding extreme gradient boosting ensemble. The diagonal of zeroes corresponds to trivial source-target pairs for which there were no adversarial examples generated.
TABLE I. PERFORMANCE OF SIMPLE NOISE FLOODING DEFENSES

Detection Method                     Precision   Recall   F1 Score
Unfiltered Noise Flooding            89.8%       93.1%    0.914
0-2000 Hz Noise Flooding             88.3%       94.5%    0.913
2000-4000 Hz Noise Flooding          88.3%       92.5%    0.905
4000-6000 Hz Noise Flooding          86.3%       92.5%    0.893
6000-8000 Hz Noise Flooding          82.0%       91.5%    0.865
Isolated Speex Compression (a)       93.7%       88.5%    0.910
Isolated Panning & Lengthening (a)   95.8%       82.4%    0.886

(a) Taken from Rajaratnam et al.
TABLE II. PERFORMANCE OF ENSEMBLE DEFENSES

Detection Method                     Precision   Recall   F1 Score
Noise Flooding Majority Voting       88.0%       93.6%    0.907
Noise Flooding LTV (a)               90.8%       92.2%    0.915
Noise Flooding Random Forest         90.9%       93.1%    0.920
Noise Flooding AdaBoost              90.3%       94.2%    0.922
Noise Flooding XGBoost               91.8%       93.5%    0.926
Preprocessing Majority Voting (b)    96.1%       88.1%    0.919
Preprocessing LTV (a)(b)             93.5%       91.2%    0.924

(a) LTV is short for the discrete Learned Threshold Voting ensemble.
(b) Taken from Rajaratnam et al.
The training and test datasets of adversarial and benign examples used in this research are available at http://github.com/LincLabUCCS/Noise-Flooding, along with the code used for implementing and testing the noise flooding defense.
ACKNOWLEDGMENTS
We are thankful to the reviewers for helpful criticism, and the UCCS LINC and VAST labs for general support. We also acknowledge the assistance of Viji Rajaratnam in creating Fig. 1.
REFERENCES
[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," in International Conference on Learning Representations, 2014.
[2] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in International Conference on Learning Representations, 2015.
[3] M. Alzantot, B. Balaji, and M. Srivastava, "Did you hear that? Adversarial examples against automatic speech recognition," in 31st Conference on Neural Information Processing Systems (NIPS), 2017.
[4] N. Carlini and D. Wagner, "Audio adversarial examples: Targeted attacks on speech-to-text," in 1st IEEE Workshop on Deep Learning and Security, 2018.
[5] A. E. Aydemir, A. Temizel, and T. T. Temizel, "The effects of JPEG and JPEG2000 compression on attacks using adversarial examples," arXiv preprint, no. 1803.10418, 2018.
[6] A. Graese, A. Rozsa, and T. E. Boult, "Assessing threat of adversarial examples on deep neural networks," in 15th IEEE International Conference on Machine Learning and Applications (ICMLA), 2016.
[7] Z. Yang, B. Li, P.-Y. Chen, and D. Song, "Towards mitigating audio adversarial perturbations," 2018. [Online]. Available: https://openreview.net/forum?id=SyZ2nKJDz
[8] K. Rajaratnam, K. Shah, and J. Kalita, "Isolated and ensemble audio preprocessing methods for detecting adversarial examples against automatic speech recognition," in 30th Conference on Computational Linguistics and Speech Processing (ROCLING), 2018, available at arXiv:1809.04397.
[9] A. Prakash, N. Moran, S. Garber, A. DiLillo, and J. Storer, "Deflecting adversarial attacks with pixel deflection," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[10] P. Warden, "Speech commands: A dataset for limited-vocabulary speech recognition," arXiv preprint, no. 1804.03209, 2018.
[11] T. N. Sainath and C. Parada, "Convolutional neural networks for small-footprint keyword spotting," in INTERSPEECH, 2015.
[12] J. W. Leis, Digital Signal Processing Using MATLAB for Students and Researchers. John Wiley & Sons, 2011.
[13] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997.
[14] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[15] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785-794.
[16] W. He, J. Wei, X. Chen, N. Carlini, and D. Song, "Adversarial example defense: Ensembles of weak defenses are not strong," in 11th USENIX Workshop on Offensive Technologies (WOOT), 2017.
| [
"http://github.com/LincLabUCCS/Noise-Flooding,"
] |
[
"Automatic Text Summarization Methods: A Comprehensive Review",
"Automatic Text Summarization Methods: A Comprehensive Review"
] | [
"Divakar Yadav ayadav@nith.ac.in ",
"Jalpa Desai ",
"Arun Kumar ",
"Yadav "
] | [] | [] | One of the most pressing issues that have arisen due to the rapid growth of the Internet is known as information overloading. Simplifying the relevant information in the form of a summary will assist many people because the material on any topic is plentiful on the Internet. Manually summarising massive amounts of text is quite challenging for humans. So, it has increased the need for more complex and powerful summarizers. Researchers have been trying to improve approaches for creating summaries since the 1950s, such that the machine-generated summary matches the human-created summary. This study provides a detailed state-of-theart analysis of text summarization concepts such as summarization approaches, techniques used, standard datasets, evaluation metrics and future scopes for research. The most commonly accepted approaches are extractive and abstractive, studied in detail in this work. Evaluating the summary and increasing the development of reusable resources and infrastructure aids in comparing and replicating findings, adding competition to improve the outcomes. Different evaluation methods of generated summaries are also discussed in this study. Finally, at the end of this study, several challenges and research opportunities related to text summarization research are mentioned that may be useful for potential researchers working in this area. | 10.48550/arxiv.2204.01849 | [
"https://arxiv.org/pdf/2204.01849v1.pdf"
] | 247,958,026 | 2204.01849 | b7564f9d6c8ba0b6a2fa52218b0be0e9f1ee4a7f |
Automatic Text Summarization Methods: A Comprehensive Review
Divakar Yadav ayadav@nith.ac.in
Jalpa Desai
Arun Kumar
Yadav
Automatic Text Summarization Methods: A Comprehensive Review
Keywords: Automatic text summarization, Natural Language Processing, Categorization of text summarization system, Abstractive text summarization, Extractive text summarization, Hybrid text summarization, Evaluation of text summarization system
One of the most pressing issues that have arisen due to the rapid growth of the Internet is known as information overloading. Simplifying the relevant information in the form of a summary will assist many people because the material on any topic is plentiful on the Internet. Manually summarising massive amounts of text is quite challenging for humans. This has increased the need for more complex and powerful summarizers. Researchers have been trying to improve approaches for creating summaries since the 1950s, such that the machine-generated summary matches the human-created summary. This study provides a detailed state-of-the-art analysis of text summarization concepts such as summarization approaches, techniques used, standard datasets, evaluation metrics and future scopes for research. The most commonly accepted approaches are extractive and abstractive, studied in detail in this work. Evaluating the summary and increasing the development of reusable resources and infrastructure aids in comparing and replicating findings, adding competition to improve the outcomes. Different evaluation methods of generated summaries are also discussed in this study. Finally, at the end of this study, several challenges and research opportunities related to text summarization research are mentioned that may be useful for potential researchers working in this area.
Fig. 3. Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025 (Arne von See, 2021)
There are several valid reasons in favour of the automatic summarization of documents. Here are listed just a few (Ab & Sunitha, 2013):
i. Summaries save reading time.
ii. Summaries help in the selection of documents when conducting research.
iii. Indexing is more successful when automatic summarization is used.
iv. When compared to human summarizers, automatic summary systems are less biased.
v. Because they provide personalized information, personalized summaries are important in question-answering systems.
vi. Commercial abstract services can improve the number of texts they can process by using automatic or semi-automatic summarizing techniques.
Automatic Text Summarization (ATS) is a relatively new learning problem that has received a lot of interest. As research advances, we hope to see a breakthrough that will help with this by providing a timely technique for summarising large texts. We present an overview of text summarization techniques in this work to highlight their usefulness in dealing with enormous data and to assist researchers in using them to address challenges. Fig. 4 shows the number of research papers published in the domain of text summarization in particular time intervals starting from 1958.
Main Contribution of study
This work provides a concise, current and comprehensible view of the field of text summarization. The major contributions of this study are as under:
a. Starting from ground level, make the reader comfortable with the ATS system and why we need an ATS system. Provided examples of ATS systems presented in the literature for each application and illustrated ATS systems' classifications.
b. Provided a detailed analysis of the three ATS approaches: extractive, abstractive, and hybrid. Furthermore, a review table is built on factors like dataset, approach, performance, advantages, and disadvantages.
c. Provided an overview of the standard datasets and complete details about evaluation methods available for the ATS system.
d. Detailed analysis of challenges and future scopes for text summarization.
This article is arranged into six sections. Section 1 introduces the automatic text summarization system along with its requirements and applications. The many categories into which automatic text summarization is divided are discussed in detail in section 2. Next, section 3 focuses on extractive, abstractive and hybrid text summarization. The evaluation methods for summaries generated by the system are discussed in section 4. After that, frequently used datasets for the summarization task are listed in section 5. Lastly, the conclusion is given in section 6.
Categorization of ATS
There are different classifications for an automatic text summarization (ATS) system based on its input, output, purpose, length, algorithms, domain, and language. There are many other factors that can be considered while discussing the classification of summarization. Different researchers have considered different factors. As per our survey, the detailed categorization of an ATS system is given in fig. 5. A detailed explanation of each category is given in the following subsections:
Based on no. of Input documents
Based upon the size of the input source documents that are used to generate a summary, summarization can be divided into two types:
• Single Document: Single-document text summarization is the automatic summarization of information from a single document (Garner, 1982).
• Multiple Document: Multi-document text summarization is the automatic summarization of information from multiple documents (Ferreira et al., 2014).
Multi-document summarization is important where we must put different types of opinions together, and each idea is written with multiple perspectives within a single document. Single-document text summarization is easy to implement, but multi-document summarization is a complex task. Redundancy is one of the biggest problems in summarizing multiple documents. Carbonell & Goldstein (1998) proposed the MMR (Maximal Marginal Relevance) approach, which helps to reduce redundancy. Another main problem for multi-document summarization is heterogeneity within a large set of documents. It is very complex to summarize multiple documents with extractive methods when there are so many conflicts and biases in the real world; here, abstractive summarization performs far better for multiple documents. However, multi-document summarization also brings issues like redundancy in the output summary while working with a huge number of documents. Single-document text summarization is used in limited settings, such as reading a given passage and producing an appropriate title or summary. In contrast, multi-document text summarization can be used for news summarization from different sites, customers' product reviews from different vendors, Q&A systems and many more.
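The greedy selection rule behind MMR can be sketched in a few lines of Python; similarity here is any user-supplied pairwise similarity function (for example cosine similarity over TF-IDF vectors) and lam is the relevance/redundancy trade-off, so this is only an illustration of the idea rather than the original implementation.

def mmr_select(sentences, query, similarity, k=3, lam=0.7):
    # Greedy Maximal Marginal Relevance: pick sentences that are relevant to
    # the query but not redundant with sentences already selected.
    selected = []
    candidates = list(sentences)
    while candidates and len(selected) < k:
        def mmr(s):
            redundancy = max((similarity(s, t) for t in selected), default=0.0)
            return lam * similarity(s, query) - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected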
SummCoder (Joshi et al., 2019) is a new methodology for generic extractive single-document text summarization. The method creates a summary based on three criteria they developed: sentence content relevance, sentence novelty, and sentence position relevance. The novelty metric is produced by utilizing the similarity among sentences represented as embeddings in a distributed semantic space, and the sentence content relevance is assessed using a deep auto-encoder network. The sentence position relevance metric is a custom feature that gives the initial few phrases more weight thanks to a dynamic weight calculation method controlled by the document length. In the extractive multi-document text summarization field, Sanchez-Gomez et al. (2021) show that all feasible combinations of the most prevalent term-weighting schemes and similarity metrics have been implemented, compared, and assessed. Experiments with DUC datasets were conducted, and the model's performance was evaluated using eight ROUGE indicators and the execution time. The TF-IDF weighting scheme and cosine similarity give the best result as a combination, with an 87.5% ROUGE score.
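The TF-IDF plus cosine-similarity combination reported as the best performer can be reproduced in outline with scikit-learn; the toy sentences below are ours, and a real system would build the vectors from the sentences of the documents being summarized.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The committee approved the new budget on Monday.",
    "On Monday, the new budget was approved by the committee.",
    "Rainfall this spring broke a ten-year record.",
]
vectors = TfidfVectorizer().fit_transform(sentences)   # TF-IDF weighting scheme
sim = cosine_similarity(vectors)                       # pairwise cosine similarity
# sim[0, 1] is high (near-duplicate content); sim[0, 2] is close to zero.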
Based on Summarization Methods
Based on how summaries are produced, i.e., just picking up sentences from the source text, generating new sentences after reading the source text, or a combination of both, summarization can be divided into three types:
• Extractive Automatic Text Summarization: Extractive text summarization is the strategy of selecting and concatenating sentences extracted from a given corpus (Rau et al., 1989).
• Abstractive Automatic Text Summarization: Abstractive text summarization involves paraphrasing the given corpus and generating new sentences (Zhang et al., 2019).
• Hybrid Automatic Text Summarization: It combines both extractive and abstractive methods. It means extracting some sentences and generating new ones from a given corpus (Binwahlan et al., 2010).
The plus point of the extractive text summarization model is that the sentences in the summaries must adhere to the syntactic structure's constraints. However, that model's shortcoming is that the summaries' sentences may not be semantically meaningful. This disadvantage arises because adjacent sentences in the summaries are not always contiguous in the original text. Because ATS models learn the collocation between words and construct a sequence of keywords based on the collocation between words after training, they have the advantage of inclusive semantics. The downside of ATS models is that it is challenging to meet the criterion of syntactic structure with this sequence of keywords. Rare words are another major flaw in traditional ATS models. The number of occurrences of a rare word and its collocation will define its importance, but humans will use other elements to assess whether a word is essential. As a result, in some instances, some words that appear infrequently might be deemed unimportant, although a portion of these words is critical for summary construction from a human perspective (Song et al., 2019b).
Based on Output Summary Nature:
Based on the output summary's characteristics the ATS system can be divided into two types:
• Generic: Generic text summarizers fetch important information from one or more documents to provide a concise meaning of given document(s) (Aone et al., 1997).
• Query-Based: A query-based summarizer is built to handle multi-documents and gives a solution as per the user's query (Van Lierde & Chow, 2019). The score of sentences in each document is based on the frequency counts of words or phrases in query-based text summarization. Sentences containing query phrases receive higher marks than sentences containing single query words. The sentences with the highest scores and their structural context are extracted for the output summary (Kiyani & Tas, 2017). A query-based sentence extraction algorithm is given below (Pembe & Güngör, 2007):
i. Rank all the sentences according to their score.
ii. Add the main title of the document to the summary.
iii. Add the first level-1 heading to the summary.
iv. While (summary size limit not exceeded)
v. Add the next highest scored sentence.
vi. Add the structural context of the sentence (if any and not already included in the summary):
vii. Add the highest-level heading above the extracted text (call this heading h).
viii. Add the heading before h at the same level.
ix. Add the heading after h at the same level.
x. Repeat steps vii, viii and ix for the subsequent highest-level headings.
xi. End while
A query is not used in generic summaries. Because they do not comprehensively assess the original document, query-based summaries are biased. They are not ideal for content overview because they solely deal with user queries. Generic summaries are necessary to specify the document's category and to describe the document's essential points. The key subjects of the documents are considered in the best general summary, which strives to minimize redundancy as much as possible (Kiyani & Tas, 2017).
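A bare-bones sketch of the query-term scoring used in step i of the algorithm above is given below; the phrase_weight bonus for full query phrases is our own simplification of the idea that query phrases score higher than single query words.

def query_score(sentence, query, phrase_weight=2.0):
    # Count single query-word occurrences, plus an extra bonus whenever the
    # whole query phrase appears in the sentence.
    s, q = sentence.lower(), query.lower()
    words = q.split()
    word_hits = sum(s.count(w) for w in words)
    phrase_hits = s.count(q) if len(words) > 1 else 0
    return word_hits + phrase_weight * phrase_hits

ranked = sorted(
    ["Query terms guide query based summaries.",
     "Neural summarizers compress long documents.",
     "The weather was pleasant yesterday."],
    key=lambda s: query_score(s, "query based"),
    reverse=True,
)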
Based on Summary Language
Based on the language of input and output of the ATS system, it can be divided into the following 3 categories:
• Monolingual: In a Monolingual text summarizer, the language of the input document and output summary is the same (Kutlu et al., 2010).
• Multilingual: In a Multilingual text summarizer, input is written in many languages (Hindi, English, and Gujarati), and output summary is generated likewise in these languages (Hovy & Lin, 1996). • Cross-Lingual: In a Cross-lingual text summarizer, the input document is in one language (say English), and the output summary is in another language (say Hindi) (Linhares Pontes et al., 2020).
Most of the research papers studied in this article are based on monolingual text summarization. Compared to monolingual summarization, multilingual and cross-lingual summarization are more challenging to implement. It takes more effort to train a machine on more than one language structure. SUMMARIST (Hovy & Lin, 1996) is a multilingual text summarization system based on an extraction strategy that generates summaries from English, Indonesian, Spanish, German, Japanese, Korean, and French sources. Cross-Language Text Summarization (CLTS) (Linhares Pontes et al., 2020) generates a summary in a target language from source language materials. It entails a combination of text summarising and machine translation methods. Unfortunately, this combination leads to mistakes, which lowers the quality of summaries. CLTS systems may extract relevant information from both source and destination languages through joint analysis, which improves the development of extractive cross-lingual summaries. Recent methods for CLTS have offered compressive and abstractive approaches; however, these methods rely on frameworks or tools that are only available in a few languages, restricting their applicability to other languages.
Based on Summarization Algorithms
Based on the actual algorithm that is used to generate the summaries, the ATS system is divided into two types as given below:
• Supervised: The supervised summarizer needs to train the sample data by labelling the input text document with the help of human efforts (Neto et al., 2002). • Unsupervised: In the Unsupervised summarizer training phase is not needed (Alami et al., 2019).
In order to select important content from documents in a supervised system, training data is required. Training data refers to the large volume of labelled or annotated data required by the learning techniques. These systems are approached as a two-class classification issue at the sentence level, with positive samples being sentences that belong to the summary and negative samples being sentences that do not belong to the summary. On the other hand, unsupervised systems do not require any training data. They create the summary by just analysing the documents to be summarized. As a result, they can be used with any newly observed data without needing extra adjustments. These systems use heuristic methods to extract relevant sentences and construct a summary. Clustering is used in unsupervised systems (Gambhir & Gupta, 2017).
A single document supervised machine learning-based approach for the Hindi language is given by Nikita (2016). The sentences are divided into four categories: most influential, important, less important, and insignificant. The summarizer is then trained using the SVM supervised machine learning algorithm to extract important sentences based on the feature vector. Sentences are then included in the final summary based on the required compression ratio. The experiment was carried out on news stories from various categories such as Bollywood, politics, and sports, and the results showed 72 percent accuracy at a compression ratio of 50 percent and 60 percent at a compression ratio of 25 percent. Recently, an unsupervised neural network approach has been studied by Meknassi et al. (2021) for Arabic language text summarization. A new approach using documents clustering, topic modelling, and unsupervised neural networks have been proposed to build an efficient document representation model to overcome problems raised with Arabic text documents. The proposed approach is evaluated on Essex Arabic Summaries Corpus and compared against other Arabic text summarization approaches using ROUGE measure.
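As a rough illustration of the supervised formulation (not the exact feature set of Nikita (2016)), an SVM can be trained on per-sentence feature vectors labelled by importance; the feature values and labels below are placeholders.

import numpy as np
from sklearn.svm import SVC

# Placeholder features per sentence: [position score, title overlap, TF-IDF sum].
X_train = np.array([[1.0, 0.6, 3.2],
                    [0.3, 0.1, 1.1],
                    [0.8, 0.5, 2.7],
                    [0.1, 0.0, 0.4]])
y_train = np.array([1, 0, 1, 0])     # 1 = include sentence in the summary

clf = SVC(kernel="rbf").fit(X_train, y_train)
keep = clf.predict(np.array([[0.9, 0.4, 2.9]]))   # 1 keeps the new sentence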
Based on Summary Content
Based on the type of the content of the output summaries, the system is categorised into three types as:
• Indicative: An indicative summary contains only a generic idea about the source document (Bhat et al., 2018).
• Informative: An informative summary contains all the main topics of the original document (Bhat et al., 2018).
• Evaluative or Critical: It captures the author's point of view on a given topic (Jezek & Steinberger, 2008).
Indicative summarization is used to indicate what the document is all about, and it aims to give a user an idea of whether to read the original document or not. The length of this summary is approximately 5% of the original content. The informative summarization system summarises the primary text concisely. The informative summary is around 20% of the whole text length (Kiyani & Tas, 2017). A typical example of evaluative summaries is reviews, but they are largely out of the scope of today's summarizers. It should be emphasized that the three groupings indicated above are not mutually exclusive; summaries that have both an informative and an indicative role are common. Informative summarizers are frequently used as a subset of indicative summarizers (Jezek & Steinberger, 2008).
Based on Summary types
Based on the length of the generated summaries, the ATS system is divided into four types as given below:
• Headline: A headline generated from a source document is usually shorter than a sentence (Barzilay & Mckeown, 2005).
• Sentence level: The sentence-level summarizer produces a single sentence from the original text (Y. H. Hu et al., 2017).
• Highlight: Highlights are produced in a compressed form of the original text written in bullet points (Tomek, 1998).
• Full summary: A full summary is generated as per the user's compression rate or requirements (Koupaee & Wang, 2018).
Headline, highlight, and sentence-level summaries are generally used for news databases, opinion mining, or social media datasets, whereas a full summary is commonly used across all domains.
Based on Summarization Domain:
Based on the domain of the input and output of the ATS system, it is divided into following 3 categories:
• Genre Specific: It accepts only special type of input text format (Hovy & Lin, 1996) .
• Domain dependent: Domain dependent summarization is specific to one domain (Farzindar & Lapalme, 2004).
• Domain independent: Domain independent summarization system is independent of source documents' domain.
In genre-specific summarization, there is a restriction on the text template. Newspaper articles, scientific papers, stories, instructions, and other types of templates are available. The summary is generated by the system using the structure of these templates. On the other hand, independent systems have no predefined limitations and can take a variety of text kinds. Furthermore, some techniques only summarise texts whose subject can be characterized in the system's domain; these systems are domain-dependent. These systems impose some restrictions on the topic matter of documents. Such systems know everything there is to know about a specific subject and use that knowledge to summarise. Generally, graph-based techniques are adopted for domain-dependent summarisation as they have sound potential. The authors of (Moradi et al., 2020) have given an efficient solution to deal with the challenges in graph-based methods. To capture the linguistic, semantic, and contextual relationships between the sentences, they trained the model with continuous word representation models, i.e., Word2vec's Skip-gram and Continuous Bag of Words (CBOW) models (Mikolov et al., 2013) and Global Vectors for Word Representation (GloVe) (Mutlu et al., 2020)(Hanson Er, 1971), on a large corpus of biomedical text. To solve the challenge of ranking the most important nodes in a graph, they adopted undirected and weighted graph ranking techniques like the PageRank algorithm (Brin & Page, 2012). Newspaper stories and scientific text have distinct qualities compared to legal text. In a comprehensive document of the news genre, for example, there is little or no structure. The presence of the same term at different levels of the hierarchy will have distinct effects. The relevance of the words in a judgement is determined by the source of the ruling (whether it is from a District Court, State Court, Supreme Court, or Federal Court). We can generally ignore references/citations when summarizing content; however, this may not be possible in legal writings (Kanapala et al., 2019).
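As an illustration of the graph-based ranking idea (not the exact pipeline of Moradi et al.), sentences can be treated as nodes, word overlap as edge weights, and NetworkX's PageRank used to rank them; the similarity function and example sentences below are placeholders.

import networkx as nx

def overlap(a, b):
    # Jaccard word-overlap similarity between two sentences.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

sentences = [
    "Graph based summarizers rank sentences with PageRank.",
    "PageRank scores each sentence node by its weighted connections.",
    "An unrelated sentence about the weather today.",
]
g = nx.Graph()
g.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        w = overlap(sentences[i], sentences[j])
        if w > 0:
            g.add_edge(i, j, weight=w)
scores = nx.pagerank(g, weight="weight")
summary = [sentences[i] for i in sorted(scores, key=scores.get, reverse=True)[:1]]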
Based on Processing Level
Based on the processing level of the input document, the system is divided into two types:
• Surface-level approaches: In this scenario, data is represented by shallow feature ideas and their combinations. Statistically salient terms, positionally salient terms, cue phrases, domain-specific or a user's query terms are examples of shallow features. The outcomes are in the form of extracts (Ježek et al., 2007).
• Deeper-level approaches: Extracts or abstracts may be produced through deeper-level techniques. In the latter scenario, synthesis is used to generate natural language. It requires some semantic analysis. For example, entity techniques can construct a representation of text entities (text units) and their relationships to identify salient areas. Entity relationships include thesaural, syntactic, and semantic relationships, among others. They can also use discourse methodologies to represent the text structure, such as hypertext mark-up or rhetorical structure (Ježek et al., 2007).
Details about ETS, ABS and HTS
Among all the automatic text summarization system classifications, the most commonly accepted and used categories are extractive, abstractive and hybrid summarization. Thus, this article focuses mainly on these approaches. Now let us see a detailed survey of these approaches:
Extractive text summarization
Since the era when automatic text summarization first came into the picture (Luhn, 1958), the text processing task has been performed mainly by using features based on IR (Information Retrieval) measures, i.e., term frequency (TF) and inverse term frequency (TF-IDF). Table-1 shows a detailed survey on extractive text summarization with research papers, the dataset used, the system's accuracy, and its pros and cons. Earlier, the efficiency of the summary was measured by the proportion of the number of judged-important points to the total number of words in the summary (Garner, 1982). The immediate summarization result and relationship to detailed comprehension and recall results were analyzed in that study. The lack of linguistic knowledge is a weak point for extracting helpful information from a large amount of data. To overcome these two limitations, (i) a mechanism that deals with unknown words and gaps in linguistic information and (ii) a way to extract linguistic information from text automatically, SCISOR (System for Conceptual Information Summarization, Organization and Retrieval) was developed by Rau et al. (1989). Experiments performed on summarization until 1990 were focused on just extracting (reproducing) the summaries from the original text rather than abstracting (newly generating). The SUMMARIST system (Hovy & Lin, 1996) was developed with the help of NLP techniques, in which one can create a multi-lingual summarizer by modifying some part of the structure.
The challenges with traditional frequency-based, knowledge-based and discourse-based summarization led to addressing these challenges with robust NLP techniques like corpus-based statistical NLP (Aone et al., 1997). The summarization system named DimSum consists of a summarization server and a summarization client. The features produced from these powerful NLP algorithms were also used to give the user numerous summary views in an innovative way. Salton et al. (1997) evaluated the summaries of humans and systems by four parameters (optimistic evaluation, pessimistic evaluation, intersection evaluation, and union evaluation) and showed that the summaries generated by two humans for the same article are dissimilar, while automatic methods are favourable here. A robust summarization was practically implemented on the online news 'New York Times' by Tomek (1998), which gives summaries very quickly while including a significantly smaller portion of the original lengthy text. The study of the effects of headings on text summarization (Lorch et al., 2001) proved that readers depend heavily on organizational signals to construct a topic structure. The machine learning approach (Neto et al., 2002) considers automatic text summarization as a two-class classification problem, where a sentence is considered 'correct' if it appears in the extractive reference summary or otherwise as 'incorrect'. Here they used two famous ML classification approaches, Naïve Bayes and C4.5. The lexicon is a salient part of any textual data; Silber & McCoy (2002) focus on an algorithm that efficiently builds lexical chains in linear time as a feasible intermediate representation for text summarization.
Earlier studies were largely supervised, with human-made summaries used to fit the parameters or feature weights of summarization algorithms. In contrast, unsupervised methods with a diversity functionality (Nomoto & Matsumoto, 2003) define relevant features without any help from human-made summaries. Yeh et al. (2005b) proposed a trainable summarizer that generates summaries based on numerous factors such as location, positive keywords, negative keywords, centrality, and resemblance to the title. It uses a genetic algorithm (GA) to train the score function and find a good combination of feature weights, and then employs latent semantic analysis (LSA) to derive a semantic matrix of the document or corpus and a semantic sentence representation used to build a semantic text relationship map.
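As an illustration of the feature-combination idea (location, title resemblance, keyword evidence), here is a minimal sketch. The feature set and the fixed weights are assumptions of this example; the cited work instead tunes the weights with a genetic algorithm.

```python
# Minimal sketch of feature-weighted sentence scoring for a trainable extractive summarizer.
def score_sentence(sentence, position, n_sentences, title_words, keyword_counts, weights):
    words = set(sentence.lower().split())
    f_position = 1.0 - position / max(n_sentences - 1, 1)            # earlier sentences score higher
    f_title = len(words & title_words) / max(len(title_words), 1)    # resemblance to the title
    f_keywords = sum(keyword_counts.get(w, 0) for w in words)        # positive keyword evidence
    return (weights["position"] * f_position
            + weights["title"] * f_title
            + weights["keywords"] * f_keywords)

# Example call with hand-picked (illustrative) weights:
score = score_sentence("Dutch elm disease spreads quickly.", 0, 10,
                       {"dutch", "elm", "disease"}, {"disease": 3}, 
                       {"position": 0.3, "title": 0.5, "keywords": 0.2})
```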
Constructing methods for measuring the effectiveness of SAS (systems of automatic summarization) is an important area in the theory and practice of automatic summarization. Based on a model vocabulary supplied by subjects, four automatic text summarization systems (ESSENCE (ESS), Subject Search Summarizer (SSS), COPERNIC (COP) and Open Text Summarizer (OTS)) were evaluated by Yatsko & Vishnyakov (2007); the distribution of vocabulary terms in the source text is compared with the distribution of vocabulary terms in summaries of various lengths generated by the systems. Ye et al. (2007b) contend that the quality of a summary can be judged by how many concepts from the source documents are retained after summarization, so summary generation can be viewed as an optimization task of selecting a set of sentences with the least answer loss; the proposed document concept lattice (DCL) is a document model that indexes sentences based on their coverage of overlapping concepts. The method suggested by Ko & Seo (2008) merges two consecutive sentences into a bi-gram pseudo sentence, allowing statistical sentence-extraction tools to use contextual information: salient bi-gram pseudo sentences are chosen first, each selected pseudo sentence is then split back into two single sentences, and a second sentence-extraction pass over the separated sentences produces the final summary.
CN-Summ (Complex Networks-based Summarization) was proposed by Antiqueira et al. (2009). In the graph or network representing a piece of text, nodes correspond to sentences, while edges connect sentences that share common significant nouns. CN-Summ consists of four steps: 1) preprocessing (lemmatization); 2) mapping the resulting text to a network representation according to adjacency and weight matrices of order n*n (n is the number of sentences/nodes); 3) computing different network measurements; 4) selecting the first m sentences as summary sentences depending on the compression rate. Alguliev & Aliguliyev (2009) gave a new approach to unsupervised text summarization based on sentence clustering, where clustering detects interesting distributions and patterns within multidimensional data by establishing natural groupings based on some similarity metric. The researchers proposed a new similarity measure, the Normalized Google Distance (NGD), and a discrete differential evolution algorithm, MDDE (Modified Discrete Differential Evolution), to optimize the criterion functions.
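In the same family of graph-based extractive methods, the sketch below ranks sentences by centrality in a word-overlap graph. It is a generic TextRank-style illustration rather than the exact CN-Summ procedure, and the networkx dependency is an assumption of the example.

```python
# Generic graph-based extractive summarizer sketch: sentences are nodes,
# edge weights are shared-word counts, and PageRank centrality ranks them.
import numpy as np
import networkx as nx

def graph_summarize(sentences, m=2):
    words = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = len(words[i] & words[j])   # shared significant words
    graph = nx.from_numpy_array(sim)                           # weighted sentence graph
    ranks = nx.pagerank(graph, weight="weight")                # centrality of each sentence
    top = sorted(sorted(ranks, key=ranks.get, reverse=True)[:m])
    return [sentences[i] for i in top]
```

CN-Summ itself uses lemmatization, noun-based edges and several network measurements beyond centrality; this sketch only captures the shared structure of such methods.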
Swarm Intelligence (SI) is the collective intelligence that emerges from the collective behaviour of (unsophisticated) individuals interacting locally with each other and with their environment, producing coherent functional global patterns. Its primary computational components are Particle Swarm Optimization (PSO), inspired by the social behaviour of bird flocking or fish schooling, and Ant Colony Optimization (ACO), inspired by the behaviour of ants. Binwahlan et al. (2009a) suggested a PSO-based model whose primary goal is to score sentences while treating text elements according to their value. Combining three approaches, a diversity-based method, fuzzy logic and swarm-based methods (Binwahlan et al., 2010), can generate good summaries: the diversity-based component identifies similar sentences, selects the most diverse sentence and concentrates on reducing redundancy; the swarm-based component distinguishes more important from less important sentences; and fuzzy logic is then used to tolerate redundancy, imprecise values and uncertainty, with the combination concentrating on sentence-scoring techniques. In their comparison, the swarm-fuzzy method performs better than the diversity-based method.
In Mashechkin et al. (2011), the researchers used LSA (Latent Semantic Analysis) for text summarization. The original text is represented as a matrix of terms and sentences: sentences are vectors in the term space, and each matrix column represents one sentence. Latent semantic analysis is then applied to construct a representation of the sentences in a topic space, which is done by applying a matrix factorization (singular value decomposition, SVD) to the text matrix. Alguliev et al. (2011b) treat text summarization as an integer linear programming problem, assuming that summarization is the task of finding a subset of sentences from the original text that represents its important details; that study focused on three characteristics (relevance, redundancy and length) and optimized them with a particle swarm optimization (PSO) algorithm and a branch-and-bound optimization algorithm. In extractive text summarization, sentence scoring is the most commonly used technique, and Ferreira et al. (2013) evaluated 15 available sentence-scoring algorithms through quantitative and qualitative assessments. Sankarasubramaniam et al. (2014a) combine Wikipedia with a graph-based ranking summarizer and introduce an incremental summarization property, where both single- and multi-document settings can provide additional content in real time: users first see an initial summary and can request further content if desired.
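A minimal LSA-style sketch of the term-by-sentence SVD idea is given below. It assumes only numpy, uses a toy bag-of-words matrix, and is an illustration of the general procedure rather than the exact algorithm of the cited works.

```python
# LSA-style sentence selection: build a term x sentence matrix, apply SVD,
# and pick the sentence with the largest weight along each top singular vector (topic).
import numpy as np

def lsa_summarize(sentences, n_topics=2):
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(sentences)))               # terms x sentences
    for j, s in enumerate(sentences):
        for w in s.lower().split():
            A[index[w], j] += 1.0
    U, S, Vt = np.linalg.svd(A, full_matrices=False)         # A = U S V^T
    chosen = {int(np.argmax(np.abs(Vt[k]))) for k in range(min(n_topics, Vt.shape[0]))}
    return [sentences[j] for j in sorted(chosen)]
```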
Using a deep auto-encoder (AE) to compute a feature space from the term-frequency (tf) input, Yousefi-Azar & Hamey (2017b) offer approaches to extractive query-oriented single-document summarization. Both local and global vocabularies are considered in their experiments, which also show the effect of adding slight random noise to the local TF used as the AE's input representation; the authors propose the Ensemble Noisy Auto-Encoder (ENAE) as a collection of such noisy AEs. Although there is much work on domain-based summarization in English and other languages, there is little in Arabic owing to a lack of knowledge bases. A hybrid single-document text summarization approach (ASDKGA) is proposed by Al-Radaideh & Bataineh (2018a); the method uses domain expertise, statistical traits and genetic algorithms to extract essential points from Arabic political documents. For domain- or genre-specific summarization (such as medical reports or specific news articles), feature-engineering-based models have proven far more successful, as classifiers can be taught to recognize particular forms of information, but these algorithms produce poor results for general text summarization. To overcome this issue, an entirely data-driven approach to automatic text summarization is given by Sinha et al. (2018). The most challenging difficulties are covering a wide range of topics and providing diversity in the summary, and recent research based on clustering, optimization and evolutionary algorithms has yielded promising results for text summarization. A two-stage sentence-selection model based on clustering and optimization techniques, called COSUM, was proposed by Alguliyev et al. (2019b): the sentence set is first clustered with the k-means algorithm to discover all the subjects in a text, and an optimization approach then selects significant sentences from the clusters. A crucial reason for the lack of domain-shift approaches could be the differing definitions of 'domain' in text summarization; later work extended the traditional definition of domain from categories to data sources and used a multi-domain summarization dataset to study how the distance between domains affects neural summarization performance. Traditional approaches also have a major flaw: they use high-dimensional, sparse data, making it difficult to gather relevant information. Word embedding is a neural network technique that produces a considerably smaller word representation than the classic bag-of-words (BOW) method. Alami et al. (2019) created a text summarization system based on word embeddings and showed that the Word2Vec representation outperforms the classic BOW representation. Another summarization approach using word embeddings was given by Mohd et al. (2020), which also used Word2Vec as a distributional semantic model that captures semantics.
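The following sketch shows how Word2Vec vectors can replace the sparse BOW representation for sentence scoring. It assumes the gensim library, averages word vectors into sentence vectors, and ranks sentences by similarity to the document centroid; this centroid-ranking step is an illustrative choice, not the pipeline of the cited works.

```python
# Word-embedding-based sentence ranking sketch (gensim assumed installed).
import numpy as np
from gensim.models import Word2Vec

def embed_sentences(sentences):
    tokenized = [s.lower().split() for s in sentences]
    model = Word2Vec(tokenized, vector_size=50, min_count=1, epochs=50)  # toy-corpus settings
    # A sentence vector is the average of its word vectors.
    return np.array([np.mean([model.wv[w] for w in toks], axis=0) for toks in tokenized])

def centroid_rank(sentences, k=2):
    vectors = embed_sentences(sentences)
    centroid = vectors.mean(axis=0)                           # document centroid
    sims = vectors @ centroid / (
        np.linalg.norm(vectors, axis=1) * np.linalg.norm(centroid) + 1e-9)
    top = sorted(np.argsort(-sims)[:k])
    return [sentences[i] for i in top]
```

In practice the embeddings would be trained on a large corpus (or pretrained vectors would be used) rather than on the document being summarized.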
Current state-of-the-art systems produce generic summaries that are unrelated to the preferences and expectations of their users. CTRLsum (He, Kryscinski, et al., 2020), a framework for controlled summarization, is presented to address that limitation: it lets users interact with the summarization system through textual input, in the form of a set of key phrases or descriptive prompts, to influence several attributes of the generated summaries. Most recent neural summarization algorithms are either selection-based extraction or generation-based abstraction. Xu & Durrett (2020) introduced a neural model based on joint extraction and syntactic compression for single-document summarization: the model selects phrases from the document, identifies plausible compressions based on constituent parses, and rates those compressions with a neural model to construct the final summary. Four algorithms were proposed by El-Kassas et al. (2020): the first builds a new text graph model representation from the input document; the second and third search the built text graph for sentences to include in the candidate summary; and the fourth selects the most important sentences when the resulting candidate summary exceeds a user-specified limit. Automatic text summarization remains an arduous, unsolved problem for under-resourced languages such as Hindi, which also lack corpora and adequate processing tools. For Hindi novels and stories, Rani & Lobiyal (2021) developed an extractive, lexical-knowledge-rich, topic-modelling-based text summarization approach. Most graph-based text summarization techniques assign edge weights using standard word-based similarity measures; Belwal et al. (2021) offered a new graph-based summarization technique that considers the similarity between individual words, between sentences, and with the entire input text.
Abstractive text summarization
The DUC-2003 and DUC-2004 competitions standardized the task of abstractive text summarization, using news articles from various fields with multiple human-generated reference summaries per article as datasets. The TOPIARY system (Zajic et al., 2004) was the best-performing technique, and notable work was also submitted by Banko et al. (2000) using phrase-table-based machine translation techniques and by Woodsend et al. (2010) using quasi-synchronous grammar techniques. Table 2 shows a detailed survey of abstractive text summarization, listing for each research paper the dataset used, the system's accuracy, and its pros and cons. Later, deep learning was introduced as a viable alternative for many NLP problems. Text is a sequence of words, so sequence-to-sequence models can handle both input and output sequences. Given the apparent similarities, the machine translation (MT) problem can be mapped to text summarization, although abstractive summarization differs from it in important ways: MT is lossless while summarization is lossy, and MT involves a roughly one-to-one word-level mapping between source and target, whereas that mapping is much weaker in summarization.
Rush et al. (2015) used convolutional models to encode the input together with a context-sensitive feed-forward network with an attention mechanism, and showed better results on the Gigaword and DUC datasets. Chen (2015) produced a sizeable Chinese dataset for short-text summarization (LCSTS) and obtained good results on it using RNN architectures on both the encoder and decoder sides. Going beyond RNNs at both the encoder and decoder, Nallapati et al. (2016) captured keywords, modelled unseen or rare words, and captured the document's hierarchy using a hierarchical attention mechanism; the authors also analysed the quality of the output summaries, where the models perform well in some cases and poorly in others compared to alternatives. Human summaries are naturally more abstractive because humans exploit inherent structure while writing them. The deterministic transformation in the discriminative model (RNN) used by Nallapati et al. (2016) limits the representation of latent structure information. Miao & Blunsom (2016) then gave a generative model to capture the latent structure, but did not consider recurrent dependencies in their generative model. The authors of Li et al. (2017) tried to identify common structures such as "what", "what happened" and "who actioned what" in the source, and proposed a deep recurrent generative model for modelling latent structure.
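As a point of reference for how such encoder-decoder summarizers are used in practice, here is a minimal, hedged sketch that runs a pretrained sequence-to-sequence model through the Hugging Face transformers library. The library is an assumption of this illustration, not a system from the cited papers, and the default model chosen by the pipeline may change between library versions.

```python
# Minimal abstractive summarization sketch with a pretrained seq2seq model.
from transformers import pipeline

summarizer = pipeline("summarization")        # downloads a default encoder-decoder model
article = "Long input document text goes here ..."
result = summarizer(article, max_length=60, min_length=10)
print(result[0]["summary_text"])              # generated (abstractive) summary
```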
AMR (Abstract Meaning Representation) was first introduced by Banarescu et al. (2013). AMR aims to capture the meaning of the text, essentially "who is doing what to whom", by giving the source text a special meaning representation. The work of Liu et al. (2015) includes AMR, but does not use it at the abstraction level, so it is limited to extractive summarization; moreover, the approach aims to generate a summary from a story, and producing a single graph assumes that all the important sentences can be extracted from a single subgraph, which becomes difficult when information is spread throughout the document. Dohare et al. (2017a) therefore worked with multiple summary graphs and explored problems with existing evaluation methods and datasets for abstractive summarization.
Combining the advantages of extractive and abstractive summarization while curing their disadvantages, Song et al. (2019b) implemented a model named ATSDL (ATS using DL). The model uses a phrase extraction method called MOSP to extract key phrases from the original text and then learns the collocation of phrases; after training, it generates a phrase sequence that satisfies syntactic structure criteria. It also leverages phrase location information to overcome the problem of rare terms, which practically all abstractive models face. Among sequence-to-sequence models, RNNs suffer from low efficiency because each training step depends on the previous one and the hidden state of the whole past must be preserved, so they cannot be parallelized. To overcome these problems, Zhang et al. (2019) proposed a sequence-to-sequence model based on CNNs to build the representation of the source text: a traditional CNN can only encode fixed-size input contexts, but stacking CNN layers enlarges the effective context, the length of the sequence under consideration can be readily regulated, and every position in the sequence can be computed in parallel. A further common problem of abstractive summarization is that the generated summaries are frequently semantically incompatible with the source content; Wei et al. (2018) offer a regularization strategy for the sequence-to-sequence model, using what the model has learnt to regularize the learning objective and mitigate this problem's influence.
Until now, the models discussed do not consider whether the summaries are factually consistent with the source documents. Kryściński et al. (2019a) present a weakly supervised, model-based technique for evaluating factual consistency and detecting conflicts between source documents and the output summary. The steps of the model are to:
• Determine whether sentences are factually consistent after being transformed,
• Find a span in the source documents to validate the consistency prediction, and
• Find an inconsistent span in the summary sentence if one exists.
Another notable contribution within the sequence-to-sequence encoder-decoder approach is by Kryściński et al. (2020). That study makes two main contributions: first, it separates extraction and generation in the decoder, with a standalone contextual network for extraction and a language model that generates paraphrases; second, it optimizes n-gram overlap with the ground-truth summaries while encouraging abstraction. Wang et al. (2020) propose a Generative Adversarial Network (GAN) for abstractive text summarization with a multi-task constraint (PGAN-ATSMT). Through adversarial learning, the model simultaneously trains a generator G and a discriminator D: the sequence-to-sequence architecture is the backbone of the generative model G, which takes the source document as input and generates the summary, while D is implemented as a language model rather than a binary classifier, and its output is used as the reward that steers the generator. A minimax two-player game is used to optimize G and D. Extended work on GANs is done by Yang et al. (2021), who present a Hierarchical Human-like deep neural network for ATS (HH-ATS), inspired by how humans read articles and produce summaries. HH-ATS comprises three main components (a knowledge-aware hierarchical attention module, a multi-task learning module, and a dual-discriminator generative adversarial network) that mirror the three phases of human reading cognition (rough reading, active reading, and post-editing).
Hybrid text summarization
The two principal approaches to automatic text summarization, extractive and abstractive, each come with pros and cons. Extractive summarization is comparatively easier to implement than abstractive summarization, but its summaries are less satisfactory from the user's perspective. Combining the two methods, strengthening their pros and weakening their cons, leads to hybrid methods for text summarization.
As noted earlier, experiments on summarization until 1990 focused on extracting (reproducing) summaries from the original text rather than abstracting (generating new text). The SUMMARIST system (Hovy & Lin, 1996), developed with the help of NLP techniques, allows a multi-lingual summarizer to be built by modifying parts of its structure.
Semantic and statistical features can be combined to couple extraction and abstraction. The authors of Bhat et al. (2018) used the emotions of the text as a semantic feature: emotions play a significant role in defining the writer's emotional affinity, so lines with implicit emotional content are crucial and should be included in the summary. The extracted summary is then fed into a novel language generator, a hybrid summarizer that combines WordNet, the Lesk algorithm and POS tagging to transform the extractive summary into an abstractive one. Table 3 shows a detailed survey of hybrid text summarization, listing for each research paper the dataset used, the system's accuracy, and its pros and cons.
ATS System Evaluation and Evaluation Programs
There have been many efforts to solve summary evaluation issues in the past two decades; NIST (National Institute of Standards and Technology) leads the effort by organizing the DUC and TAC challenges. Huang et al. (2010) identified four pillars that should be considered when generating summaries:
• Information Coverage
• Information Significance
• Information Redundancy
• Text Coherence
In the discipline of automatic text summarization, evaluating the summary is a critical task. Evaluating summaries and developing reusable resources and infrastructure aids in comparing and replicating findings and adds competition that improves results. However, carefully evaluating many texts by hand to acquire an unbiased perspective is impossible, so accurate automatic evaluation measures are required for quick and consistent evaluation. It is difficult even for people to recognize what information should be included in a summary, which makes evaluation hard: the relevant information changes depending on the summary's purpose, and capturing it mechanically is a challenging undertaking (Gambhir & Gupta, 2017). The evaluation techniques for ATS systems are shown in Fig. 7.
a) Extrinsic Evaluation: An extrinsic evaluation looks at how the summary influences the accomplishment of another task (text classification, information retrieval, question answering); i.e., a summary is termed good if it helps other tasks. Extrinsic evaluations have examined how summarization affects tasks such as relevance assessment and reading comprehension.
-Relevance evaluation: Various methods are used to analyse a topic's relevance in the summary or the original material.
-Reading comprehension: After reading the summary, it assesses whether it is possible to answer multiple-choice assessments.
b) Intrinsic Evaluation: An Intrinsic evaluation looks at the summarization system on its own. The coherence and informativeness of summaries have been the focus of intrinsic evaluations. Evaluations based on comparisons with the model summary/summaries and evaluations based on comparisons with the source document are the two types of intrinsic techniques (Steinberger & Ježek, 2009).
Comparison with a model summary assesses the quality of a summary by comparing the coverage of a machine-generated summary with that of a human-generated one. The two most significant aspects when judging a summary are its quality and its informativeness. A summary's informativeness is usually assessed by comparing it to a human-made reference summary. There is also the faithfulness-to-the-source paradigm, which examines whether the summary contains the same or similar material as the original document; this method has a flaw, namely how to determine which concepts in the document are relevant and which are not.
Content Evaluation
• Co-selection: Only identical sentences can be used in co-selection measures. It ignores the reality that even though two sentences are written differently, they can contain the same information. In addition, summaries provided by two separate authors rarely contain similar sentences. Co-selection can be calculated by precision, recall and F-measure.
a. Precision: Precision is the fraction of relevant instances among the retrieved instances.
b. Recall: Recall is the fraction of relevant instances that were retrieved.
c. F-measure: It is computed by combining recall and precision.
• Content-based: The drawbacks of co-selection methods are handled by content-based methods.
a. Cosine Similarity: Cosine similarity can be measured as
cos(X, Y) = Σ_i x_i y_i / ( √(Σ_i x_i²) · √(Σ_i y_i²) ) (6)
where X and Y are representations of a system summary and its reference document based on the vector space model.
b. Unit Overlap: Unit overlap can be calculated as
overlap(X, Y) = ‖X ∩ Y‖ / (‖X‖ + ‖Y‖ - ‖X ∩ Y‖) (7)
where X and Y are representations based on sets of words or lemmas, and ‖X‖ is the size of set X.
c. Longest Common Subsequence (LCS): the LCS-based measure is defined as shown in equation (8),
LCS(X, Y) = (length(X) + length(Y) - editdist(X, Y)) / 2 (8)
where X and Y are representations based on sequences of words or lemmas, LCS(X, Y) is the length of the longest common subsequence between X and Y, length(X) is the length of the string X, and editdist(X, Y) is the edit distance of X and Y.
d. ROUGE (Recall-Oriented Understudy for Gisting Evaluation): It was first introduced by C. Lin & Rey (2001). It contains measures for automatically determining the quality of a summary by comparing it to other (ideal) summaries generated by people; the measures count the number of overlapping units such as n-grams, word sequences and word pairs between the computer-generated summary to be evaluated and the ideal summaries written by humans. ROUGE includes five measures: ROUGE-N, ROUGE-L, ROUGE-W, ROUGE-S and ROUGE-SU (a minimal implementation sketch of these overlap measures is given after this list).
• ROUGE-N counts the number of N-gram units shared by a given summary and a group of reference summaries, where N is the length of the N-gram, i.e., ROUGE-1 for unigrams and ROUGE-2 for bi-grams.
• ROUGE-L calculates the LCS (Longest Common Subsequence) statistic. LCS is the maximum size of a common subsequence of two given sequences X and Y; ROUGE-L estimates the ratio of the length of the LCS of the two summaries to the length of the reference summary.
• ROUGE-W is the weighted longest common subsequence metric, a step forward from the basic LCS strategy: it prefers LCSs with consecutive common units and can be computed efficiently with dynamic programming.
• ROUGE-S (skip-bigram co-occurrence statistics) calculates the percentage of skip-bigrams shared between a single summary and a group of reference summaries, where skip-bigrams are word pairs in sentence order with arbitrary gaps.
• ROUGE-SU is a weighted average of ROUGE-S and ROUGE-1 that extends ROUGE-S with a unigram counting unit, a step forward from ROUGE-S.
e. LSA-based method: This method was developed by Steinberger & Ježek (2009). If there are m terms and n sentences in the document, we obtain an m*n matrix A. The next step is to apply Singular Value Decomposition (SVD) to matrix A; the SVD of an m*n matrix A is defined as given in equation (9):
A = U Σ V^T (9)
In terms of NLP, SVD (Singular Value Decomposition) is used to generate the document's latent semantic structure, represented by matrix A: that is, a breakdown of the original document into r linearly-independent basis vectors that express the document's primary 'Topics'. SVD can record interrelationships between terms, allowing concepts and sentences to be clustered on a 'semantic' rather than a 'word' basis.
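To make the content-based measures above concrete, here is a minimal sketch in plain Python (simplistic whitespace tokenisation assumed) of the unit overlap of equation (7), the longest common subsequence used in equation (8), and a ROUGE-N-style n-gram recall.

```python
# Illustrative implementations of the overlap measures discussed above.
def unit_overlap(x_tokens, y_tokens):
    X, Y = set(x_tokens), set(y_tokens)
    inter = len(X & Y)
    return inter / (len(X) + len(Y) - inter) if (X or Y) else 0.0   # equation (7)

def lcs_length(x, y):
    # classic dynamic-programming longest common subsequence (used by equation 8 and ROUGE-L)
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x, 1):
        for j, yj in enumerate(y, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if xi == yj else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(candidate, reference, n=1):
    # ROUGE-N-style recall: fraction of reference n-grams that appear in the candidate
    ref = ngrams(reference, n)
    if not ref:
        return 0.0
    cand = set(ngrams(candidate, n))
    return sum(1 for g in ref if g in cand) / len(ref)
```

Official ROUGE implementations also handle stemming, multiple references and confidence intervals; the functions above only illustrate the core counting.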
Text Coherence or Quality Evaluation:
a. Grammaticality: The text should not contain non-textual items (i.e., markers), punctuation errors or incorrect words.
b. Non-redundancy: The text should not contain redundant information.
c. Reference clarity: Nouns and pronouns should be resolvable within the summary; for example, the pronoun "he" has to refer to somebody in the context of the summary.
d. Coherence and structure: The summary should have good structure, and the sentences should be coherent.
Text quality evaluation considers the linguistic characteristics of the summary. Non-redundancy, focus, grammaticality, referential clarity, and structure and coherence are the five linguistic-quality questions used at the DUC (Document Understanding Conference) and TAC (Text Analysis Conference) evaluations; they do not require comparison against a reference summary. Expert human assessors manually score the summary based on its quality according to a five-point scale (Gambhir & Gupta, 2017).
The text quality of a summary can also be checked by examining several readability variables: text quality is analysed using criteria such as vocabulary, syntax and discourse, and a correlation is estimated between these characteristics and previously acquired human readability ratings. Unigrams represent vocabulary, while the average number of verbs or nouns represents syntax.
Automatic Text Summarization Evaluation Programs
SUMMAC (TIPSTER Text Summarization Evaluation), held at the end of the 1990s, was the first conference at which automatic summarization systems were reviewed; text summaries were assessed using both extrinsic and intrinsic criteria. DUC (Document Understanding Conferences), which took place every year from 2001 to 2007, is another notable venue for text summarization. Initially, DUC 2001 and DUC 2002 featured generic summarization of single and multiple documents, later expanded to query-based summarization of multiple documents in DUC 2003. Topic-based single- and multi-document cross-lingual summaries were assessed in DUC 2004; multi-document, query-based summaries were examined in DUC 2005 and DUC 2006, while multi-document, update, query-based summaries were evaluated in DUC 2007. After 2007, DUC conferences were no longer held because they were absorbed into the Text Analysis Conference (TAC), which features summarization tracks. TAC is a series of evaluation workshops designed to promote research in Natural Language Processing and related fields; the TAC QA program arose from the TREC QA program, and the summarization track aids the development of methods for producing concise, coherent text summaries. TAC workshops have been held every year since 2008.
Frequently used Dataset for ATS
Applications of ATS systems are widespread worldwide, so it is important to know what data is available globally. Data is the essential ingredient of the text summarization task, but not all data can be fed directly into a system; it requires preprocessing and other treatment. Machine-learning-based approaches need a large training dataset with ideal summaries to train the model, and an ideal or sample dataset is also needed to evaluate a particular ATS system; such sample data is manually generated by human researchers. The list of datasets available for the ATS task is very long; a few are given below:
• DUC: The National Institute of Standards and Technology (NIST) provides these datasets, the most prevalent and widely used in text summarization research. The DUC corpora were distributed as part of the DUC conference's summarization shared task; the most recent DUC challenge took place in 2007, and datasets for DUC 2001 through DUC 2007 are available on the DUC website.
• Text Analysis Conference (TAC) datasets: DUC was added to TAC as a summarization track in 2008. To gain access to the TAC datasets, you must first fill out the application forms available on the TAC website.
• Gigaword: Created by Rush et al. (2015) for headline generation, a corpus of article pairs from Gigaword consisting of around 4 million articles in the English language.
• LCSTS: Created by Chen (2015), the LCSTS dataset was constructed from the Chinese microblogging website Sina Weibo. It consists of over 2 million real Chinese short texts with short summaries given by the author of each text; access requires an application, and the data is in Chinese.
• wikiHow: Created by Koupaee & Wang (2018), the wikiHow dataset contains article and summary pairs extracted and constructed from an online knowledge base written by different human authors in English. It has two features: text (the wikiHow answer texts) and headline (the bold lines used as the summary).
• CNN/DailyMail: The non-anonymized CNN/DailyMail summarization dataset is an English-language dataset containing just over 300k unique news articles written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering (a loading sketch is given below).
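The following is a small sketch of loading the CNN/DailyMail corpus with the Hugging Face datasets library; the library and the dataset identifier on the Hub are assumptions of this example and may change over time.

```python
# Loading a slice of CNN/DailyMail for summarization experiments.
from datasets import load_dataset

cnn_dm = load_dataset("cnn_dailymail", "3.0.0", split="validation[:100]")
print(cnn_dm[0]["article"][:300])   # source news article
print(cnn_dm[0]["highlights"])      # reference (abstractive) summary
```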
Application, Challenges and future scope
Applications of ATS
There are numerous uses for automatic text summarization. As we have seen, text summarization divides into many categories, and these categories lead to a wealth of applications for ATS. This subsection covers some applications of ATS systems. Table 4 presents a research survey on applications of ATS systems; the table lists the article name, the method and dataset used, the performance of the proposed system, and the advantages and disadvantages of each article.
Here are a few examples:
• Improving the performance of classic IR and IE systems, e.g., using a summarization system in conjunction with a Question-Answering (QA) system (De Tre et al., 2014) (S. Liu et al., 2012) (Perea-Ortega et al., 2013)
• News summarization and newswire generation (Tomek, 1998) (Bouras & Tsogkas, 2010)
• Rich Site Summary (RSS) feed summarization (Zhan et al., 2009)
• Blog summarization (Y. H. Hu et al., 2017)
• Tweet summarization (Chakraborty et al., 2019)
• Web page summarization (Shen et al., 2007)
• Email and email thread summarization (Muresan et al., 2001)
• Report summarization for business people, politicians, researchers, etc.
• Summarization in the medical field (Feblowitz et al., 2011) (Ramesh et al., 2015)
• Meeting summarization
• Biographical extracts
• Legal document summarization (Farzindar & Lapalme, 2004)
• Book summarization (Mihalcea & Ceylan, 2007)
Challenges and future scope
Generating an automatic text summary involves many challenges. The first is defining what constitutes a good summary, or more precisely, how a summary should be constructed; our requirements for a summary provide good clues about what it should be (extractive or abstractive, generic or query-driven, etc.). Even if we understood how humans normally summarize, putting it into practice would be difficult, and creating a powerful automatic text summarizer requires many resources, both tools and corpora. Another challenge is summary informativeness: how can a machine emulate human beings when it comes to summarizing? The coherence of the summary has been a long-standing issue. The shortage of resources is another major challenge in ATS; compared to the past, many powerful tools for stemming, parsing and other tasks are now available, yet determining which ones are appropriate for a particular summarization problem remains difficult, and the lack of annotated corpora for ATS is a further challenge. The evaluation process is also a significant difficulty. Both intrinsic and extrinsic evaluation approaches were explored in this work; current intrinsic evaluation methods generally focus on the language shared by the machine-generated and reference summaries, while intuitive evaluation can create new ways to assess the summary based on the information it includes and how it is presented. The evaluation process is highly subjective: a reasonable criterion must first be defined to establish what is important and what is not, and it is unclear whether this procedure can be fully automated.
Text summarization has been around for more than fifty years and the academic community remains very interested in it, so researchers continue to improve existing text summarization methods or invent new ones to provide higher-quality summaries. However, text summarization performance is still only average, and the summaries created are not perfect. By merging a summarization system with other systems, it can be made more intelligent, allowing the combined system to perform better.
Conclusion
Automatic text summarization reduces the size of a source text while maintaining its information value and overall meaning. It has become a powerful technique for analysing textual information because of the large amount of information available and the growth of Internet technologies, and it is a well-known task in natural language processing (NLP) with a wealth of applications. This paper aims to help readers understand automatic text summarization from the ground up and to familiarize them with all the types of ATS systems, which are distinguished in detail in this study. The summarization task is mainly divided into extractive and abstractive approaches: the literature offers numerous extractive techniques, but the summaries they generate are far from human-made summaries, while abstractive summarizers are closer to human summaries but have not yet been implemented with consistently high performance in practice. The combination of extractive and abstractive is hybrid text summarization. This paper includes research surveys of extractive, abstractive and hybrid text summarization, tries to cover the major application areas of ATS systems with a detailed survey, covers the many methods for evaluating summarization systems and generated summaries, and gives a brief idea of frequently used datasets and of the conferences and programs held every year for automatic text summarization.
Future work includes building a robust, domain- and language-independent extractive text summarizer that works well with multiple documents. Similarly, because the quality evaluation of a summary is done manually by experienced assessors, it is highly subjective: there are specific quality assessment criteria, such as grammaticality and coherence, but the results differ when two experts evaluate the same summary.
Conflicts of Interest:
On behalf of all authors, I state that there is no conflict of interest.
Fig. 4: Number of research articles published in the domain of ATS in different time intervals
Fig. 5: Detailed categorization of automatic text summarization systems
Fig. 6: Extractive text summarizer and abstractive text summarizer
Fig. 7: The evaluation techniques for automatic text summarization
Table 1: Research survey on extractive text summarization methods

Citation | Article | Model/method/technique applied | Dataset used | Performance | Advantages/Pros | Disadvantages/Cons
(Garner, 1982) | Efficient Text Summarization: Costs and Benefits | Extractive text summarization | Dutch elm disease passage (167 words) | Efficiency: proportion of the number of judged-important ideas to the total number of words in the summary; proportion = 0.2 to 0.12 | - | -
(Rau et al., 1989) | Information extraction and text summarization using linguistic knowledge acquisition | Extractive text summarization | Dataset named "Group Offers to Sweeten Warnaco Bid" | - | SCISOR is robust and reliable at extracting information | Size of the lexicon (10,000 words) and the system are limited
(Bloom et al., 1994) | Automatic Analysis, Theme Generation, and Summarization of Machine-Readable Texts | Extractive text summarization | 175-page article entitled "United States of America" | - | Robust and generally applicable to a wide variety of texts in many different environments | In the absence of deep linguistic knowledge, it is not possible to build intellectually satisfactory text summaries
(Reimer et al., 1997) | Formal Model of Text Summarization Based on Condensation Operators of a Terminological Logic | Extractive text summarization | Text | - | An attempt was made to properly integrate the text summarization process with the formal reasoning mechanisms of the underlying knowledge representation language | Currently the summarization process considers only activity and connectivity patterns in the text knowledge base
Table 2: Research survey on abstractive text summarization methods

Citation | Article | Model/method/technique applied | Dataset used | Performance | Advantages/Pros | Disadvantages/Cons
(Nallapati et al., 2016) | Abstractive text summarization using sequence-to-sequence RNNs and beyond | Abstractive (sequence-to-sequence encoder-decoder with RNN) | Gigaword corpus, DUC, CNN/DailyMail corpus | ROUGE-1: 35.46; ROUGE-2: 13.30; ROUGE-L: 32.65; qualitative evaluation of a few high-quality and poor-quality outputs | Performs somewhat better than the state of the art; proposes a new dataset for multi-sentence summaries | Works for single-sentence output summaries; capturing the meaning of complex sentences is a weakness of this model
(Li et al., 2017) | Deep recurrent generative decoder for abstractive text summarization | Abstractive (sequence-to-sequence encoder-decoder with deep recurrent generative decoder (DRGN)) | Gigaword corpus, DUC, LCSTS | ROUGE-1: 36.71; ROUGE-2: 24.00; ROUGE-L: 34.10 | Implements a deep recurrent generative model to capture latent structure information | Works for single-sentence output summaries
(Dohare et al., 2017b) | Text Summarization using Abstract Meaning Representation | Abstractive summarization with AMR | CNN, DailyMail | ROUGE-1: 39.53; ROUGE-2: 17.28; ROUGE-L: 36.38 | Suggests a full pipeline for summarization with AMR; shows that ROUGE cannot be used to evaluate abstractive summaries; a novel approach for extracting multiple summary graphs | Not much work has been done on extracting AMR graphs for summaries
(Song et al., 2019a) | Abstractive text summarization using LSTM-CNN based deep learning | Abstractive (CNN-LSTM) | CNN, DailyMail dataset | ROUGE-1: 34.9; ROUGE-2: 17.8 | Implements the ATSDL (ATS using deep learning) system combining CNN and LSTM for better performance; solves the problem of rare words | Training of deep learning models is very time-consuming; ROUGE cannot evaluate summary quality effectively
(Zhang et al., 2019) | Abstract text summarization with a convolutional seq2seq model | Abstractive (CNN) | Gigaword corpus, DUC, CNN/DailyMail corpus | ROUGE-1: 42.04; ROUGE-2: 19.77; ROUGE-L: 39.42 | Equips the CNN model with GLU and residual connections; hierarchical attention mechanism to generate keywords and key sentences simultaneously; a copying mechanism to extract out-of-vocabulary words from the source text | In practice, adding more sentences has a negative impact on performance, which the authors attribute to the latter sentences being unrelated to the summary
(Kryściński et al., 2019b) | Evaluating the Factual Consistency of Abstractive Text Summarization | Abstractive | CNN/DailyMail dataset | Accuracy: 74.15; F1-score: 0.5106 | Implements a factual consistency checking model (FactCC) | Common-sense mistakes made by summarization models; errors stemming from dependencies between different sentences within the summary
(Wei et al., 2018) | Regularizing output distribution of abstractive Chinese social media text summarization for improved semantic consistency | Abstractive | Large-Scale Chinese Short Text Summarization dataset (LCSTS) | ROUGE-1: 36.2; ROUGE-2: 24.3; ROUGE-L: 33.8; accuracy of the human evaluation approach: 53.6% | Suggests a method for regularizing the output word distribution so that semantic inconsistency in the training data, such as terms not linked to the source content, is under-represented in the model; proposes a simple human evaluation approach for determining the generated summary's semantic compatibility with the original information | Language dependent (Chinese dataset)
Table 3: Research survey on hybrid text summarization methods
Table 4: Research survey on applications of ATS systems

Citation | Article | Model/method applied | Dataset used | Performance | Advantages/Pros | Disadvantages/Cons
(Tomek, 1998) | A Robust Practical Text Summarization | Multi-document extractive text summarization | New York Times online news | Summaries are only 5% to 10% of the original text, so they can be quickly read and understood | The algorithm is very robust, efficiently processes a large range of documents, is domain independent and can easily be adapted to most European languages; works in two modes (generic and topical); uses Discourse Macro Structures (DMS) to overcome the shortcomings of sentence-based summarization by working at the paragraph level | Quality of summaries is not that good; it can be improved by an additional paragraph-scoring function
(Muresan et al., 2001) | Combining linguistic and machine learning techniques for email summarization | Extractive text summarization with machine learning | Emails | Precision: 83%; Recall: 85.7% | Linguistic knowledge enhances machine learning | Deep linguistic knowledge is required
(Mckeown et al., 2002) | Tracking and summarizing news on a daily basis with Columbia's Newsblaster | Multi-document summarization with an extractive approach | News sites | - | The research achievements are incorporated into "Newsblaster" | Personalization of Newsblaster and restricting it to user-preferred topics or questions remains to be done
(Farzindar & Lapalme, 2004) | Legal Text Summarization by Exploration of the Thematic Structure and Argumentative Roles | Extractive text summarization | Corpus of 3,500 judgements of the Federal Court of Canada | Preliminary results are very promising; F-measure 0.935 on average over all stages | Summary presented in a table style; covers many categories of judgements, e.g., copyright, air law, human rights | The system is not properly adapted for many other remaining categories of judgements
(Mihalcea & Ceylan, 2007) | Explorations in automatic book summarization | Extractive text summarization with the TextRank approach | "Gold standard" data set of 50 books | F-measure: 0.404 | Introduced a new summarization benchmark specifically targeting the evaluation of systems for book summarization | Exhaustive method when the book is short
(Ling et al., 2007) | Generating gene summaries from biomedical literature: A study of semi-structured summarization | Multi-document extractive text summarization | Test set with 20 genes | ROUGE-N performs better than existing methods | Semi-structured summaries are generated, consisting of sentences regarding specific semantic aspects of a gene | High-quality data required (data dependent); redundant information in the generated summary
(Shen et al., 2007) | Noise reduction through summarization for Web-page classification | Extractive text summarization | 2 million Web pages crawled from the LookSmart Web directory | F-measures for the hybrid (supervised and unsupervised) approach: Naïve Bayes 72.0 ± 0.3, SVM 72.9 ± 0.3 | Removes noise from web pages while preserving the most relevant features to increase the accuracy of web classification | Focuses only on isolated web pages; does not include hyperlinks
(Zhan et al., 2009) | Gather customer concerns from online product reviews - A text summarization approach | Extractive text summarization | Five datasets from Hu's corpus (M. Hu & Liu, 2004) and 3 sets from Amazon | Average responsiveness score: 4.3 | Finds and extracts important topics from a set of online reviews and then ranks these retrieved topics | Review style differs across online sites; this difference is not handled efficiently
(Bouras & Tsogkas, 2010) | Noun retrieval effect on text summarization and delivery of personalized news articles to the user's desktop | Multi-document extractive text summarization | Numerous news portals around the internet | Precision is boosted by the noun retrieval effect; with the new personalization scheme, around 17% increase in precision and 14% in recall | Enhances the personalization algorithm with features extracted from the user's profile and history of viewed articles; a stable system for day-to-day use named PeRSSonal | Language dependent
(Feblowitz et al., 2011) | Summarization of clinical information: A conceptual model | Extractive text summarization | Day-to-day clinical data | - | Both computer-supported and computer-independent clinical tasks are analysed | Clinical summaries are not standardised and optimized
(Lloret et al., 2011) | Text Summarization Contribution to Semantic Question Answering: New Approaches for Finding Answers on the Web | Extractive text summarization | The 20 most relevant documents from the Google search engine for a particular question | Query-focused summaries give a 58% improvement in accuracy over generic summaries; shorter summaries give a 6.3% improvement in accuracy over long summaries | Efficiently combines text summarization with semantic question answering while focusing on query-based rather than generic summaries | The summary for a particular question's answer was built from only 20 documents retrieved from the Google search engine
(S. Liu et al., 2012) | TIARA: Interactive, topic-based visual text summarization and analysis | Multi-document extractive text summarization | IBM employees' email | Improvement over TheMail (Viégas et al., 2006) regarding usefulness of and satisfaction with the summary | An interactive visual text analysis tool, TIARA, which automatically produces a visual summary of text analytics results, enhanced with a significant, time-sensitive, topic-based summarization method | Application-specific features are not added to the tool
(Y. H. Hu et al., 2017) | - | Extractive summarization of online reviews | TripAdvisor.com hotel reviews | High F-score compared to TextRank and LexRank | Fetches the top-k informative sentences from online reviews; focuses on critical factors such as the usefulness of a review, the credibility of authors, review time and conflicts in reviews | TripAdvisor.com contains reviews in many more languages but the approach works only for English; small sample of participants with the same backgrounds; experiments done for only 2 hotels
(Lovinger et al., 2017) | Gist: general integrated summarization of text and reviews | Extractive text summarization with an optimization-based approach | Movie reviews, news articles | Average F-measure: 0.276; average running time: 0.067235 | - | High computational time and cost
(Chakraborty et al., 2019) | Tweet summarization of news articles: An objective ordering-based perspective | Extractive text summarization with the LexRank approach | News article dataset and tweet dataset | ROUGE-1 F1-score: 0.664; ROUGE-2 F1-score: 0.548 | Capturing the diverse opinions helps in better identification of the relevant tweet set | Classification of opinion from tweets is not implemented
(Kumar & Reddy, 2019) | Factual instance tweet summarization and opinion analysis of sport competition | Extractive text summarization | IPL 2017 challenger 1, eliminator, challenger 2 and final, each with 10,000 tweets | Sentiment classifier time (logistic regression): train 0.27 s, test 0.025 s | Classification of sub-events from tweets | Opinion classification is not derived

Other systems surveyed include COMPENDIUM, which produces summaries for biomedical research papers with both extractive and abstractive approaches (the abstractive ones being far better from the user's perspective); a system generating two types of summaries (generic and geographical) with improved recall and compression rates for both single and multiple documents; an approach that builds a bipartite sentence-concept graph and applies an iterative ranking algorithm for selecting summary phrases; and the unsupervised FigSum+ system, which automatically fetches associated texts, removes repetitions and generates a text summary for a particular figure.
N. D. (2016). Automatic Text Summarization Using Supervised Machine Learning Technique for Hindi Language. International Journal of Research in Engineering and Technology, 05(06), 361-367. https://doi.org/10.15623/ijret.2016.0506065
Ab, A., & Sunitha, C. (n.d.). An Overview on Document Summarization Techniques. 113-118.
Al-Radaideh, Q. A., & Bataineh, D. Q. (2018a). A Hybrid Approach for Arabic Text Summarization Using Domain Knowledge and Genetic Algorithms. Cognitive Computation, 10(4), 651-669. https://doi.org/10.1007/s12559-018-9547-z
Al-Radaideh, Q. A., & Bataineh, D. Q. (2018b). A Hybrid Approach for Arabic Text Summarization Using Domain Knowledge and Genetic Algorithms. Cognitive Computation, 10(4), 651-669. https://doi.org/10.1007/s12559-018-9547-z
Alami, N., Mallahi, M. El, Amakdouf, H., & Qjidaa, H. (2021). Hybrid method for text summarization based on statistical and semantic treatment. Multimedia Tools and Applications. https://doi.org/10.1007/s11042-021-10613-9
Alami, N., Meknassi, M., & En-nahnahi, N. (2019). Enhancing unsupervised neural networks based text summarization with word embedding and ensemble learning. Expert Systems with Applications, 123, 195-211. https://doi.org/10.1016/j.eswa.2019.01.037
Alami, N., Meknassi, M., En-nahnahi, N., El Adlouni, Y., & Ammor, O. (2021). Unsupervised neural networks for automatic Arabic text summarization using document clustering and topic modeling. Expert Systems with Applications, 172. https://doi.org/10.1016/j.eswa.2021.114652
Alguliev, R., & Aliguliyev, R. (2009). Evolutionary Algorithm for Extractive Text Summarization. Intelligent Information Management, 01(02), 128-138. https://doi.org/10.4236/iim.2009.12019
Alguliev, R. M., Aliguliyev, R. M., Hajirahimova, M. S., & Mehdiyev, C. A. (2011a). MCMR: Maximum coverage and minimum redundant text summarization model. Expert Systems with Applications, 38(12), 14514-14522. https://doi.org/10.1016/j.eswa.2011.05.033
Alguliev, R. M., Aliguliyev, R. M., Hajirahimova, M. S., & Mehdiyev, C. A. (2011b). MCMR: Maximum coverage and minimum redundant text summarization model. Expert Systems with Applications, 38(12), 14514-14522. https://doi.org/10.1016/j.eswa.2011.05.033
Alguliyev, R. M., Aliguliyev, R. M., Isazade, N. R., Abdi, A., & Idris, N. (2019a). COSUM: Text summarization based on clustering and optimization. Expert Systems, 36(1), 1-17. https://doi.org/10.1111/exsy.12340
Alguliyev, R. M., Aliguliyev, R. M., Isazade, N. R., Abdi, A., & Idris, N. (2019b). COSUM: Text summarization based on clustering and optimization. Expert Systems, 36(1). https://doi.org/10.1111/exsy.12340
Antiqueira, L., Oliveira, O. N., Costa, L. da F., & Nunes, M. das G. V. (2009). A complex network approach to text summarization. Information Sciences, 179(5), 584-599. https://doi.org/10.1016/j.ins.2008.10.032
Aone, C., Okurowski, M. E., Gorlinsky, J., & Larsen, B. (1997). A Scalable Summarization System Using Robust NLP. Proceedings of the Intelligent Scalable Text Summarization Workshop, 66-73.
Banko, M., Mittal, V. O., & Witbrock, M. J. (2000). Headline generation based on statistical translation. 318-325. https://doi.org/10.3115/1075218.1075259
Barzilay, R., & Mckeown, K. R. (2005). Sentence Fusion for Multidocument News Summarization.
Belwal, R. C., Rai, S., & Gupta, A. (2021). A new graph-based extractive text summarization using keywords or topic modeling. Journal of Ambient Intelligence and Humanized Computing, 12(10), 8975-8990. https://doi.org/10.1007/s12652-020-02591-x
Bhat, I. K., Mohd, M., & Hashmy, R. (2018). SumItUp: A Hybrid Single-Document Text Summarizer. Advances in Intelligent Systems and Computing, 583, 619-634. https://doi.org/10.1007/978-981-10-5687-1_56
Binwahlan, M. S., Salim, N., & Suanmali, L. (2009a). Swarm Based Text Summarization. 2009 International Association of Computer Science and Information Technology - Spring Conference, IACSIT-SC 2009, 145-150. https://doi.org/10.1109/IACSIT-SC.2009.61
Binwahlan, M. S., Salim, N., & Suanmali, L. (2009b). Swarm Based Text Summarization. 2009 International Association of Computer Science and Information Technology - Spring Conference, IACSIT-SC 2009, 145-150. https://doi.org/10.1109/IACSIT-SC.2009.61
Binwahlan, M. S., Salim, N., & Suanmali, L. (2010). Fuzzy swarm diversity hybrid model for text summarization. Information Processing and Management, 46(5), 571-588. https://doi.org/10.1016/j.ipm.2010.03.004
Bloom, et al. (n.d.). Automatic Analysis, Theme Generation, and Summarization of Machine-Readable Texts. In Interferon: Principles and Medical Applications (Vol. 13).
Bouras, C., & Tsogkas, V. (2010). Noun retrieval effect on text summarization and delivery of personalized news articles to the user's desktop. Data and Knowledge Engineering, 69(7), 664-677. https://doi.org/10.1016/j.datak.2010.02.005
Reprint of: The anatomy of a large-scale hypertextual web search engine. S Brin, L Page, 10.1016/j.comnet.2012.10.007Computer Networks. 5618Brin, S., & Page, L. (2012). Reprint of: The anatomy of a large-scale hypertextual web search engine. Computer Networks, 56(18), 3825-3833. https://doi.org/10.1016/j.comnet.2012.10.007
The use of MMR, diversity-based reranking for reordering documents and producing summaries. J Carbonell, J Goldstein, Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval. the 21st annual international ACM SIGIR conference on Research and development in information retrievalCarbonell, J., & Goldstein, J. (1998). The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, 335-336.
Tweet summarization of news articles: An objective ordering-based perspective. R Chakraborty, M Bhavsar, S K Dandapat, J Chandra, 10.1109/TCSS.2019.2926144IEEE Transactions on Computational Social Systems. 64Chakraborty, R., Bhavsar, M., Dandapat, S. K., & Chandra, J. (2019). Tweet summarization of news articles: An objective ordering-based perspective. IEEE Transactions on Computational Social Systems, 6(4), 761-777. https://doi.org/10.1109/TCSS.2019.2926144
LCSTS: A Large Scale Chinese Short Text Summarization Dataset. Q Chen, Chen, Q. (2015). LCSTS: A Large Scale Chinese Short Text Summarization Dataset.
Performance optimization of object comparison. G De Tre, A Hallez, A Bronselaer, 10.1002/intInternational Journal of Intelligent Systems. 292De Tre, G., Hallez, A., & Bronselaer, A. (2014). Performance optimization of object comparison. International Journal of Intelligent Systems, 29(2), 495-524. https://doi.org/10.1002/int
Text Summarization using Abstract Meaning Representation. S Dohare, H Karnick, V Gupta, ArXivDohare, S., Karnick, H., & Gupta, V. (2017a). Text Summarization using Abstract Meaning Representation. ArXiv.
Text Summarization using Abstract Meaning Representation. S Dohare, H Karnick, V Gupta, Dohare, S., Karnick, H., & Gupta, V. (2017b). Text Summarization using Abstract Meaning Representation. http://arxiv.org/abs/1706.01678
EdgeSumm: Graph-based framework for automatic text summarization. W S El-Kassas, C R Salama, A A Rafea, H K Mohamed, 10.1016/j.ipm.2020.102264Information Processing and Management. 576102264El-Kassas, W. S., Salama, C. R., Rafea, A. A., & Mohamed, H. K. (2020). EdgeSumm: Graph-based framework for automatic text summarization. Information Processing and Management, 57(6), 102264. https://doi.org/10.1016/j.ipm.2020.102264
Legal Text Summarization by Exploration of the Thematic Structure and Argumentative Roles. A Farzindar, G Lapalme, Text Summarization Branches Out Conference Held in Conjunction with ACL 2004. Farzindar, A., & Lapalme, G. (2004). Legal Text Summarization by Exploration of the Thematic Structure and Argumentative Roles. In Text Summarization Branches Out Conference Held in Conjunction with ACL 2004, 27-38.
Summarization of clinical information: A conceptual model. J C Feblowitz, A Wright, H Singh, L Samal, D F Sittig, 10.1016/j.jbi.2011.03.008Journal of Biomedical Informatics. 444Feblowitz, J. C., Wright, A., Singh, H., Samal, L., & Sittig, D. F. (2011). Summarization of clinical information: A conceptual model. Journal of Biomedical Informatics, 44(4), 688-699. https://doi.org/10.1016/j.jbi.2011.03.008
A multi-document summarization system based on statistics and linguistic treatment. R Ferreira, L De Souza Cabral, F Freitas, R D Lins, De França, G Silva, S J Simske, L Favaro, 10.1016/j.eswa.2014.03.023Expert Systems with Applications. 4113Ferreira, R., De Souza Cabral, L., Freitas, F., Lins, R. D., De França Silva, G., Simske, S. J., & Favaro, L. (2014). A multi-document summarization system based on statistics and linguistic treatment. Expert Systems with Applications, 41(13), 5780-5787. https://doi.org/10.1016/j.eswa.2014.03.023
Assessing sentence scoring techniques for extractive text summarization. R Ferreira, L De Souza Cabral, R D Lins, G Pereira E Silva, F Freitas, G D C Cavalcanti, R Lima, S J Simske, L Favaro, 10.1016/j.eswa.2013.04.023Expert Systems with Applications. 40Ferreira, R., De Souza Cabral, L., Lins, R. D., Pereira E Silva, G., Freitas, F., Cavalcanti, G. D. C., Lima, R., Simske, S. J., & Favaro, L. (2013). Assessing sentence scoring techniques for extractive text summarization. In Expert Systems with Applications (Vol. 40, Issue 14, pp. 5755-5764). https://doi.org/10.1016/j.eswa.2013.04.023
Recent automatic text summarization techniques: a survey. M Gambhir, V Gupta, 10.1007/s10462-016-9475-9Artificial Intelligence Review. 147Gambhir, M., & Gupta, V. (2017). Recent automatic text summarization techniques: a survey. Artificial Intelligence Review, 47(1). https://doi.org/10.1007/s10462- 016-9475-9
Efficient text summarization: Costs and benefits. R Garner, 10.1080/00220671.1982.10885394Journal of Educational Research. 755Garner, R. (1982). Efficient text summarization: Costs and benefits. Journal of Educational Research, 75(5), 275-279. https://doi.org/10.1080/00220671.1982.10885394
A Novel Hybrid Text Summarization System for Punjabi Text. V Gupta, N Kaur, 10.1007/s12559-015-9359-3Cognitive Computation. 82Gupta, V., & Kaur, N. (2016). A Novel Hybrid Text Summarization System for Punjabi Text. Cognitive Computation, 8(2), 261-277. https://doi.org/10.1007/s12559-015-9359-3
Musicassette Interchangeability. the Facts Behind the Facts. Hanson Er, AES: Journal of the Audio Engineering Society. 195HANSON ER. (1971). Musicassette Interchangeability. the Facts Behind the Facts. AES: Journal of the Audio Engineering Society, 19(5), 417-425.
J He, W Kryscinski, B Mccann, N Rajani, C Xiong, CTRLsum: Towards generic controllable text summarization. ArXiv. He, J., Kryscinski, W., McCann, B., Rajani, N., & Xiong, C. (2020). CTRLsum: Towards generic controllable text summarization. ArXiv, 1-35.
J He, W Kryściński, B Mccann, N Rajani, C Xiong, CTRLsum: Towards Generic Controllable Text Summarization. He, J., Kryściński, W., McCann, B., Rajani, N., & Xiong, C. (2020). CTRLsum: Towards Generic Controllable Text Summarization. http://arxiv.org/abs/2012.04281
E Hovy, C.-Y Lin, AUTOMATED TEXT SUMMARIZATION AND THE SUMMARIST SYSTEM. Hovy, E., & Lin, C.-Y. (n.d.). AUTOMATED TEXT SUMMARIZATION AND THE SUMMARIST SYSTEM.
Automated text summarization and the SUMMARIST system. E Hovy, C.-Y Lin, 10.3115/1119089.1119121197Hovy, E., & Lin, C.-Y. (1996). Automated text summarization and the SUMMARIST system. 197. https://doi.org/10.3115/1119089.1119121
Mining opinion features in customer reviews. M Hu, B Liu, Proceedings of the National Conference on Artificial Intelligence. the National Conference on Artificial IntelligenceHu, M., & Liu, B. (2004). Mining opinion features in customer reviews. Proceedings of the National Conference on Artificial Intelligence, 755-760.
Opinion mining from online hotel reviews -A text summarization approach. Information Processing and Management. Y H Hu, Y L Chen, H L Chou, 10.1016/j.ipm.2016.12.00253Hu, Y. H., Chen, Y. L., & Chou, H. L. (2017). Opinion mining from online hotel reviews -A text summarization approach. Information Processing and Management, 53(2), 436-449. https://doi.org/10.1016/j.ipm.2016.12.002
Modeling Document Summarization as Multi-objective Optimization. 2-6. L Huang, Y He, F Wei, W Li, 10.1109/IITSI.2010.80Huang, L., He, Y., Wei, F., & Li, W. (2010). Modeling Document Summarization as Multi-objective Optimization. 2-6. https://doi.org/10.1109/IITSI.2010.80
Automatic Text Summarization (The state of the art 2007 and new challenges). K Ježek, J S Katedra, J Steinberger, Ježek, K., Katedra, J. S., & Steinberger, J. (2007). Automatic Text Summarization (The state of the art 2007 and new challenges).
Automatic summarizing: (The state of the art 2007 and new challenges). K Jezek, J Steinberger, Proceedings of Znalosti. ZnalostiJezek, K., & Steinberger, J. (2008). Automatic summarizing: (The state of the art 2007 and new challenges). Proceedings of Znalosti, 1-12.
SummCoder: An unsupervised framework for extractive text summarization based on deep autoencoders. A Joshi, E Fidalgo, E Alegre, L Fernández-Robles, 10.1016/j.eswa.2019.03.045Expert Systems with Applications. 129Joshi, A., Fidalgo, E., Alegre, E., & Fernández-Robles, L. (2019). SummCoder: An unsupervised framework for extractive text summarization based on deep auto- encoders. Expert Systems with Applications, 129, 200-215. https://doi.org/10.1016/j.eswa.2019.03.045
Text summarization from legal documents: a survey. A Kanapala, S Pal, R Pamula, 10.1007/s10462-017-9566-2Artificial Intelligence Review. 513Kanapala, A., Pal, S., & Pamula, R. (2019). Text summarization from legal documents: a survey. Artificial Intelligence Review, 51(3), 371-402. https://doi.org/10.1007/s10462-017-9566-2
An Automatic Legal Document Summarization and Search Using Hybrid System An Automatic Legal Document Summarization. S D Kavila, V Puli, G S V P Raju, R Bandaru, 10.1007/978-3-642-35314-7Kavila, S. D., Puli, V., Raju, G. S. V. P., & Bandaru, R. (2020). An Automatic Legal Document Summarization and Search Using Hybrid System An Automatic Legal Document Summarization. January 2013. https://doi.org/10.1007/978-3-642-35314-7
A survey automatic text summarization. F Kiyani, O Tas, 10.17261/pressacademia.2017.591Pressacademia. 51Kiyani, F., & Tas, O. (2017). A survey automatic text summarization. Pressacademia, 5(1), 205-213. https://doi.org/10.17261/pressacademia.2017.591
An effective sentence-extraction technique using contextual information and statistical approaches for text summarization. Y Ko, J Seo, 10.1016/j.patrec.2008.02.008Pattern Recognition Letters. 299Ko, Y., & Seo, J. (2008). An effective sentence-extraction technique using contextual information and statistical approaches for text summarization. Pattern Recognition Letters, 29(9), 1366-1371. https://doi.org/10.1016/j.patrec.2008.02.008
M Koupaee, W Y Wang, WikiHow: A Large Scale Text Summarization Dataset. Koupaee, M., & Wang, W. Y. (2018). WikiHow: A Large Scale Text Summarization Dataset. http://arxiv.org/abs/1810.09305
Evaluating the factual consistency of abstractive text summarization. W Kryściński, B Mccann, C Xiong, R Socher, 10.18653/v1/2020.emnlp-main.750Kryściński, W., McCann, B., Xiong, C., & Socher, R. (2019a). Evaluating the factual consistency of abstractive text summarization. ArXiv. https://doi.org/10.18653/v1/2020.emnlp-main.750
Evaluating the Factual Consistency of Abstractive Text Summarization. W Kryściński, B Mccann, C Xiong, R Socher, Kryściński, W., McCann, B., Xiong, C., & Socher, R. (2019b). Evaluating the Factual Consistency of Abstractive Text Summarization. http://arxiv.org/abs/1910.12840
Improving Abstraction in Text Summarization. W Kryściński, R Paulus, C Xiong, R Socher, Kryściński, W., Paulus, R., Xiong, C., & Socher, R. (2018). Improving Abstraction in Text Summarization. http://arxiv.org/abs/1808.07913
Improving abstraction in text summarization. W Kryściński, R Paulus, C Xiong, R Socher, 10.18653/v1/d18-1207Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language Processing2018Kryściński, W., Paulus, R., Xiong, C., & Socher, R. (2020). Improving abstraction in text summarization. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, 1808-1817. https://doi.org/10.18653/v1/d18-1207
Factual Instance Tweet Summarization and Opinion Analysis of Sport Competition. N V Kumar, n.dM J Reddy, n.d10.1007/978-981-13-3393-4SpringerSingaporeKumar, N. V., & Reddy, M. J. (n.d.). Factual Instance Tweet Summarization and Opinion Analysis of Sport Competition. Springer Singapore. https://doi.org/10.1007/978-981-13-3393-4
Generic text summarization for Turkish. M Kutlu, C Ciǧir, I Cicekli, 10.1093/comjnl/bxp124Computer Journal. 538Kutlu, M., Ciǧir, C., & Cicekli, I. (2010). Generic text summarization for Turkish. Computer Journal, 53(8), 1315-1323. https://doi.org/10.1093/comjnl/bxp124
Deep Recurrent Generative Decoder for Abstractive Text Summarization. P Li, W Lam, L Bing, Z Wang, Li, P., Lam, W., Bing, L., & Wang, Z. (2017). Deep Recurrent Generative Decoder for Abstractive Text Summarization. http://arxiv.org/abs/1708.00625
R OUGE : A Package for Automatic Evaluation of Summaries. C Lin, M Rey, Lin, C., & Rey, M. (2001). R OUGE : A Package for Automatic Evaluation of Summaries.
Training a selection function for extraction. C Y Lin, 10.1145/319950.319957International Conference on Information and Knowledge Management, Proceedings. Lin, C. Y. (1999). Training a selection function for extraction. International Conference on Information and Knowledge Management, Proceedings, 55-62. https://doi.org/10.1145/319950.319957
Generating gene summaries from biomedical literature: A study of semi-structured summarization. Information Processing and Management. X Ling, J Jiang, X He, Q Mei, C Zhai, B Schatz, 10.1016/j.ipm.2007.01.01843Ling, X., Jiang, J., He, X., Mei, Q., Zhai, C., & Schatz, B. (2007). Generating gene summaries from biomedical literature: A study of semi-structured summarization. Information Processing and Management, 43(6), 1777-1791. https://doi.org/10.1016/j.ipm.2007.01.018
Compressive approaches for cross-language multi-document summarization. Data and Knowledge Engineering. E Linhares Pontes, S Huet, J M Torres-Moreno, A C Linhares, 10.1016/j.datak.2019.101763125Linhares Pontes, E., Huet, S., Torres-Moreno, J. M., & Linhares, A. C. (2020). Compressive approaches for cross-language multi-document summarization. Data and Knowledge Engineering, 125. https://doi.org/10.1016/j.datak.2019.101763
Toward abstractive summarization using semantic representations. F Liu, J Flanigan, S Thomson, N Sadeh, N A Smith, 10.3115/v1/n15-1114NAACL HLT 2015 -2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference. Liu, F., Flanigan, J., Thomson, S., Sadeh, N., & Smith, N. A. (2015). Toward abstractive summarization using semantic representations. NAACL HLT 2015 -2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 1077-1086. https://doi.org/10.3115/v1/n15-1114
TIARA: Interactive, topic-based visual text summarization and analysis. S Liu, M X Zhou, S Pan, Y Song, W Qian, W Cai, X Lian, 10.1145/2089094.2089101ACM Transactions on Intelligent Systems and Technology. 32Liu, S., Zhou, M. X., Pan, S., Song, Y., Qian, W., Cai, W., & Lian, X. (2012). TIARA: Interactive, topic-based visual text summarization and analysis. ACM Transactions on Intelligent Systems and Technology, 3(2). https://doi.org/10.1145/2089094.2089101
Text summarization contribution to semantic question answering: New approaches for finding answers on the web. E Lloret, H Llorens, P Moreda, E Saquete, M Palomar, 10.1002/int.20502International Journal of Intelligent Systems. 2612Lloret, E., Llorens, H., Moreda, P., Saquete, E., & Palomar, M. (2011). Text summarization contribution to semantic question answering: New approaches for finding answers on the web. International Journal of Intelligent Systems, 26(12), 1125-1152. https://doi.org/10.1002/int.20502
COMPENDIUM: A text summarization system for generating abstracts of research papers. E Lloret, M T Romá-Ferri, M Palomar, 10.1016/j.datak.2013.08.005Data and Knowledge Engineering. 88Lloret, E., Romá-Ferri, M. T., & Palomar, M. (2013). COMPENDIUM: A text summarization system for generating abstracts of research papers. Data and Knowledge Engineering, 88, 164-175. https://doi.org/10.1016/j.datak.2013.08.005
Effects of Headings on Text Summarization. R F Lorch, E P Lorch, K Ritchey, L Mcgovern, D Coleman, 10.1006/ceps.1999.1037Contemporary Educational Psychology. 262Lorch, R. F., Lorch, E. P., Ritchey, K., McGovern, L., & Coleman, D. (2001). Effects of Headings on Text Summarization. Contemporary Educational Psychology, 26(2), 171-191. https://doi.org/10.1006/ceps.1999.1037
Gist : general integrated summarization of text and reviews. Soft Computing. J Lovinger, I Valova, C Clough, 10.1007/s00500-017-2882-2Lovinger, J., Valova, I., & Clough, C. (2017). Gist : general integrated summarization of text and reviews. Soft Computing. https://doi.org/10.1007/s00500-017- 2882-2
Automatic text summarization using latent semantic analysis. Programming and Computer Software. I V Mashechkin, M I Petrovskiy, D S Popov, D V Tsarev, 10.1134/S036176881106004137Mashechkin, I. V., Petrovskiy, M. I., Popov, D. S., & Tsarev, D. V. (2011). Automatic text summarization using latent semantic analysis. Programming and Computer Software, 37(6), 299-305. https://doi.org/10.1134/S0361768811060041
Tracking and Summarizing News on a Daily Basis with Columbia ' s Newsblaster. K R Mckeown, J L Klavans, B Schiffman, Mckeown, K. R., Klavans, J. L., & Schiffman, B. (n.d.). Tracking and Summarizing News on a Daily Basis with Columbia ' s Newsblaster.
Language as a latent variable: Discrete generative models for sentence compression. Y Miao, P Blunsom, 10.18653/v1/d16-1031EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing, Proceedings. Miao, Y., & Blunsom, P. (2016). Language as a latent variable: Discrete generative models for sentence compression. EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing, Proceedings, 319-328. https://doi.org/10.18653/v1/d16-1031
Explorations in automatic book summarization. R Mihalcea, H Ceylan, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language LearningMihalcea, R., & Ceylan, H. (2007). Explorations in automatic book summarization. EMNLP-CoNLL 2007 -Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, June, 380-389.
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, 1st International Conference on Learning Representations, ICLR 2013 -Workshop Track Proceedings. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. 1st International Conference on Learning Representations, ICLR 2013 -Workshop Track Proceedings, 1-12.
Text document summarization using word embedding. Expert Systems with Applications. M Mohd, R Jan, M Shah, 10.1016/j.eswa.2019.112958143Mohd, M., Jan, R., & Shah, M. (2020). Text document summarization using word embedding. Expert Systems with Applications, 143. https://doi.org/10.1016/j.eswa.2019.112958
Summarization of biomedical articles using domain-specific word embeddings and graph ranking. M Moradi, M Dashti, M Samwald, 10.1016/j.jbi.2020.103452Journal of Biomedical Informatics. 107103452Moradi, M., Dashti, M., & Samwald, M. (2020). Summarization of biomedical articles using domain-specific word embeddings and graph ranking. Journal of Biomedical Informatics, 107(May), 103452. https://doi.org/10.1016/j.jbi.2020.103452
Combining linguistic and machine learning techniques for email summarization. S Muresan, E Tzoukermann, J L Klavans, 10.3115/1117822.1117837Muresan, S., Tzoukermann, E., & Klavans, J. L. (2001). Combining linguistic and machine learning techniques for email summarization. 1-8. https://doi.org/10.3115/1117822.1117837
Candidate sentence selection for extractive text summarization. B Mutlu, E A Sezer, M A Akcayol, 10.1016/j.ipm.2020.102359Information Processing and Management. 657Mutlu, B., Sezer, E. A., & Akcayol, M. A. (2020). Candidate sentence selection for extractive text summarization. Information Processing and Management, 57(6). https://doi.org/10.1016/j.ipm.2020.102359
Abstractive text summarization using sequence-to-sequence RNNs and beyond. R Nallapati, B Zhou, C Santos, Ç Gulçehre, B Xiang, 10.18653/v1/k16-1028CoNLL 2016 -20th SIGNLL Conference on Computational Natural Language Learning, Proceedings. Nallapati, R., Zhou, B., dos Santos, C., Gulçehre, Ç., & Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond. CoNLL 2016 -20th SIGNLL Conference on Computational Natural Language Learning, Proceedings, 280-290. https://doi.org/10.18653/v1/k16-1028
Automatic text summarization using a machine learning approach. J L Neto, A A Freitas, C A A Kaestner, 10.1007/3-540-36127-8_20Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics. iNeto, J. L., Freitas, A. A., & Kaestner, C. A. A. (2002). Automatic text summarization using a machine learning approach. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2507(i), 205-215. https://doi.org/10.1007/3-540-36127-8_20
The diversity-based approach to open-domain text summarization. T Nomoto, Y Matsumoto, 10.1016/S0306-4573(02Information Processing and Management. 393Nomoto, T., & Matsumoto, Y. (2003). The diversity-based approach to open-domain text summarization. Information Processing and Management, 39(3), 363- 389. https://doi.org/10.1016/S0306-4573(02)00096-1
Automated Querybiased and Structure-preserving Text Summarization on Web Documents. … on Innovations in Intelligent Systems and …. F Pembe, T Güngör, Pembe, F., & Güngör, T. (2007). Automated Querybiased and Structure-preserving Text Summarization on Web Documents. … on Innovations in Intelligent Systems and ….
Application of text summarization techniques to the geographical information retrieval task. J M Perea-Ortega, E Lloret, Alfonso Ureña-López, L Palomar, M , 10.1016/j.eswa.2012.12.012Expert Systems with Applications. 408Perea-Ortega, J. M., Lloret, E., Alfonso Ureña-López, L., & Palomar, M. (2013). Application of text summarization techniques to the geographical information retrieval task. Expert Systems with Applications, 40(8), 2966-2974. https://doi.org/10.1016/j.eswa.2012.12.012
Proceedings of the 2009 2nd International Conference on Computer Science and Its Applications : CSA 2009. the 2009 2nd International Conference on Computer Science and Its Applications : CSA 2009IEEEProceedings of the 2009 2nd International Conference on Computer Science and Its Applications : CSA 2009. (2009). IEEE.
Centroid-based summarization of multiple documents. Information Processing and Management. D R Radev, H Jing, M Styś, D Tam, 10.1016/j.ipm.2003.10.00640Radev, D. R., Jing, H., Styś, M., & Tam, D. (2004). Centroid-based summarization of multiple documents. Information Processing and Management, 40(6), 919- 938. https://doi.org/10.1016/j.ipm.2003.10.006
Figure-associated text summarization and evaluation. B P Ramesh, R J Sethi, H Yu, 10.1371/journal.pone.0115671PLoS ONE. 210Ramesh, B. P., Sethi, R. J., & Yu, H. (2015). Figure-associated text summarization and evaluation. PLoS ONE, 10(2). https://doi.org/10.1371/journal.pone.0115671
An extractive text summarization approach using tagged-LDA based topic modeling. R Rani, D K Lobiyal, 10.1007/s11042-020-09549-3Multimedia Tools and Applications. 803Rani, R., & Lobiyal, D. K. (2021). An extractive text summarization approach using tagged-LDA based topic modeling. Multimedia Tools and Applications, 80(3), 3275-3305. https://doi.org/10.1007/s11042-020-09549-3
INFORMATION EXTRACTION AND TEXT SUMMARIZATION USING LINGUISTIC KNOWLEDGE ACQUISITION*. L F Rau, P S Jacobs, U Zernik, Informarron Processrng & Managemenr. 25Rau, L. F., Jacobs, P. S., & Zernik, U. (1989). INFORMATION EXTRACTION AND TEXT SUMMARIZATION USING LINGUISTIC KNOWLEDGE ACQUISITION*. In Informarron Processrng & Managemenr (Vol. 25, Issue 4).
A Formal Model of Text Summarization Based on Condensation Operators of a Terminological Logic. U Reimer, U Hahn, S Life, F Unlverslty, Reimer, U., Hahn, U., Life, S., & Unlverslty, F. (n.d.). A Formal Model of Text Summarization Based on Condensation Operators of a Terminological Logic.
A neural attention model for sentence summarization. A M Rush, S Chopra, J Weston, Conference Proceedings -EMNLP 2015: Conference on Empirical Methods in Natural Language Processing. Rush, A. M., Chopra, S., & Weston, J. (2015). A neural attention model for sentence summarization. Conference Proceedings -EMNLP 2015: Conference on Empirical Methods in Natural Language Processing, 379-389.
Automatic text structuring and summarization. Information Processing and Management. G Salton, A Singhal, M Mitra, C Buckley, 10.1016/S0306-4573(96)00062-333Salton, G., Singhal, A., Mitra, M., & Buckley, C. (1997). Automatic text structuring and summarization. Information Processing and Management, 33(2), 193- 207. https://doi.org/10.1016/S0306-4573(96)00062-3
The impact of term-weighting schemes and similarity measures on extractive multi-document text summarization. J M Sanchez-Gomez, M A Vega-Rodríguez, C J Pérez, 10.1016/j.eswa.2020.114510Expert Systems with Applications. 169Sanchez-Gomez, J. M., Vega-Rodríguez, M. A., & Pérez, C. J. (2021). The impact of term-weighting schemes and similarity measures on extractive multi-document text summarization. Expert Systems with Applications, 169(December 2020). https://doi.org/10.1016/j.eswa.2020.114510
Text summarization using Wikipedia. Y Sankarasubramaniam, K Ramanathan, S Ghosh, 10.1016/j.ipm.2014.02.001Information Processing and Management. 503Sankarasubramaniam, Y., Ramanathan, K., & Ghosh, S. (2014a). Text summarization using Wikipedia. Information Processing and Management, 50(3), 443-461. https://doi.org/10.1016/j.ipm.2014.02.001
Text summarization using Wikipedia. Y Sankarasubramaniam, K Ramanathan, S Ghosh, 10.1016/j.ipm.2014.02.001Information Processing and Management. 503Sankarasubramaniam, Y., Ramanathan, K., & Ghosh, S. (2014b). Text summarization using Wikipedia. Information Processing and Management, 50(3), 443-461. https://doi.org/10.1016/j.ipm.2014.02.001
Volume of data/information created, captured, copied, and consumed worldwide from. A V See, See, A. V., (2021).Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025. https://www.statista.com/statistics/871513/worldwide-data-created/, accessed on 27 Jan 2022.
Noise reduction through summarization for Web-page classification. Information Processing and Management. D Shen, Q Yang, Z Chen, 10.1016/j.ipm.2007.01.01343Shen, D., Yang, Q., & Chen, Z. (2007). Noise reduction through summarization for Web-page classification. Information Processing and Management, 43(6), 1735-1747. https://doi.org/10.1016/j.ipm.2007.01.013
Efficiently computed lexical chains as an intermediate representation for automatic text summarization. H G Silber, K F Mccoy, 10.1162/089120102762671954Computational Linguistics. 284Silber, H. G., & McCoy, K. F. (2002). Efficiently computed lexical chains as an intermediate representation for automatic text summarization. Computational Linguistics, 28(4), 486-496. https://doi.org/10.1162/089120102762671954
A Sinha, n.dA Yadav, n.dA Gahlot, n.dExtractive Text Summarization using Neural Networks. Sinha, A., Yadav, A., & Gahlot, A. (n.d.). Extractive Text Summarization using Neural Networks.
S Song, H Huang, T Ruan, 10.1007/s11042-018-5749-3Abstractive text summarization using LSTM-CNN based deep learning. Multimedia Tools and Applications. 78Song, S., Huang, H., & Ruan, T. (2019a). Abstractive text summarization using LSTM-CNN based deep learning. Multimedia Tools and Applications, 78(1), 857- 875. https://doi.org/10.1007/s11042-018-5749-3
S Song, H Huang, T Ruan, 10.1007/s11042-018-5749-3Abstractive text summarization using LSTM-CNN based deep learning. Multimedia Tools and Applications. 78Song, S., Huang, H., & Ruan, T. (2019b). Abstractive text summarization using LSTM-CNN based deep learning. Multimedia Tools and Applications, 78(1), 857- 875. https://doi.org/10.1007/s11042-018-5749-3
Evaluation measures for text summarization. J Steinberger, K Ježek, Computing and Informatics. 282Steinberger, J., & Ježek, K. (2009). Evaluation measures for text summarization. Computing and Informatics, 28(2), 251-275.
Query-oriented text summarization based on hypergraph transversals. Information Processing and Management. H Van Lierde, T W S Chow, 10.1016/j.ipm.2019.03.00356Van Lierde, H., & Chow, T. W. S. (2019). Query-oriented text summarization based on hypergraph transversals. Information Processing and Management, 56(4), 1317-1338. https://doi.org/10.1016/j.ipm.2019.03.003
Visualizing email content: Portraying relationships from conversational histories. F B Viégas, S Golder, J Donath, Conference on Human Factors in Computing Systems -Proceedings. 2Viégas, F. B., Golder, S., & Donath, J. (2006). Visualizing email content: Portraying relationships from conversational histories. Conference on Human Factors in Computing Systems -Proceedings, 2, 979-988.
Exploring domain shift in extractive text summarization. D Wang, P Liu, M Zhong, J Fu, X Qiu, X Huang, ArXivWang, D., Liu, P., Zhong, M., Fu, J., Qiu, X., & Huang, X. (2019). Exploring domain shift in extractive text summarization. ArXiv.
Regularizing output distribution of abstractive chinese social media text summarization for improved semantic consistency. B Wei, X Ren, X Sun, Y Zhang, X Cai, Q Su, ArXiv. 5WEI, B., REN, X., SUN, X., ZHANG, Y., CAI, X., & SU, Q. (2018). Regularizing output distribution of abstractive chinese social media text summarization for improved semantic consistency. ArXiv, 5, 1-14.
Title generation with quasi-synchronous grammar. K Woodsend, Y Feng, M Lapata, EMNLP 2010 -Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference. Woodsend, K., Feng, Y., & Lapata, M. (2010). Title generation with quasi-synchronous grammar. EMNLP 2010 -Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, October, 513-523.
Neural extractive text summarization with syntactic compression. J Xu, G Durrett, 10.18653/v1/d19-1324EMNLP-IJCNLP 2019 -2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference. Xu, J., & Durrett, G. (2020). Neural extractive text summarization with syntactic compression. EMNLP-IJCNLP 2019 -2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, 3292-3303. https://doi.org/10.18653/v1/d19-1324
Hierarchical Human-Like Deep Neural Networks for Abstractive Text Summarization. M Yang, C Li, Y Shen, Q Wu, Z Zhao, X Chen, 10.1109/tnnls.2020.3008037IEEE Transactions on Neural Networks and Learning Systems. Yang, M., Li, C., Shen, Y., Wu, Q., Zhao, Z., & Chen, X. (2020). Hierarchical Human-Like Deep Neural Networks for Abstractive Text Summarization. IEEE Transactions on Neural Networks and Learning Systems, 1-14. https://doi.org/10.1109/tnnls.2020.3008037
Hierarchical Human-Like Deep Neural Networks for Abstractive Text Summarization. M Yang, C Li, Y Shen, Q Wu, Z Zhao, X Chen, 10.1109/TNNLS.2020.3008037IEEE Transactions on Neural Networks and Learning Systems. 326Yang, M., Li, C., Shen, Y., Wu, Q., Zhao, Z., & Chen, X. (2021). Hierarchical Human-Like Deep Neural Networks for Abstractive Text Summarization. IEEE Transactions on Neural Networks and Learning Systems, 32(6), 2744-2757. https://doi.org/10.1109/TNNLS.2020.3008037
Plausibility-promoting generative adversarial network for abstractive text summarization with multitask constraint. M Yang, X Wang, Y Lu, J Lv, Y Shen, C Li, 10.1016/j.ins.2020.02.040Information Sciences. 521Yang, M., Wang, X., Lu, Y., Lv, J., Shen, Y., & Li, C. (2020). Plausibility-promoting generative adversarial network for abstractive text summarization with multi- task constraint. Information Sciences, 521, 46-61. https://doi.org/10.1016/j.ins.2020.02.040
Document concept lattice for text understanding and summarization. Information Processing and Management. S Ye, T S Chua, M Y Kan, L Qiu, 10.1016/j.ipm.2007.03.01043Ye, S., Chua, T. S., Kan, M. Y., & Qiu, L. (2007a). Document concept lattice for text understanding and summarization. Information Processing and Management, 43(6), 1643-1662. https://doi.org/10.1016/j.ipm.2007.03.010
Document concept lattice for text understanding and summarization. Information Processing and Management. S Ye, T S Chua, M Y Kan, L Qiu, 10.1016/j.ipm.2007.03.01043Ye, S., Chua, T. S., Kan, M. Y., & Qiu, L. (2007b). Document concept lattice for text understanding and summarization. Information Processing and Management, 43(6), 1643-1662. https://doi.org/10.1016/j.ipm.2007.03.010
Text summarization using a trainable summarizer and latent semantic analysis. J Y Yeh, H R Ke, W P Yang, I H Meng, 10.1016/j.ipm.2004.04.00341Yeh, J. Y., Ke, H. R., Yang, W. P., & Meng, I. H. (2005a). Text summarization using a trainable summarizer and latent semantic analysis. 41(1), 75-95. https://doi.org/10.1016/j.ipm.2004.04.003
Text summarization using a trainable summarizer and latent semantic analysis. Information Processing and Management. J Y Yeh, H R Ke, W P Yang, I H Meng, 10.1016/j.ipm.2004.04.00341Yeh, J. Y., Ke, H. R., Yang, W. P., & Meng, I. H. (2005b). Text summarization using a trainable summarizer and latent semantic analysis. Information Processing and Management, 41(1), 75-95. https://doi.org/10.1016/j.ipm.2004.04.003
Text summarization using unsupervised deep learning. M Yousefi-Azar, L Hamey, 10.1016/j.eswa.2016.10.017Expert Systems with Applications. 68Yousefi-Azar, M., & Hamey, L. (2017a). Text summarization using unsupervised deep learning. Expert Systems with Applications, 68, 93-105. https://doi.org/10.1016/j.eswa.2016.10.017
Text summarization using unsupervised deep learning. M Yousefi-Azar, L Hamey, 10.1016/j.eswa.2016.10.017Expert Systems with Applications. 68Yousefi-Azar, M., & Hamey, L. (2017b). Text summarization using unsupervised deep learning. Expert Systems with Applications, 68, 93-105. https://doi.org/10.1016/j.eswa.2016.10.017
D Zajic, B Dorr, R Schwartz, BBN / UMD at DUC-2004 : Topiary. Proceedings of the HLT-NAACL 2004 Document Understanding Workshop. BostonZajic, D., Dorr, B., & Schwartz, R. (2004). BBN / UMD at DUC-2004 : Topiary. Proceedings of the HLT-NAACL 2004 Document Understanding Workshop, Boston, 112--119.
Gather customer concerns from online product reviews -A text summarization approach. J Zhan, H T Loh, Y Liu, 10.1016/j.eswa.2007.12.039Expert Systems with Applications. 362 PART 1Zhan, J., Loh, H. T., & Liu, Y. (2009). Gather customer concerns from online product reviews -A text summarization approach. Expert Systems with Applications, 36(2 PART 1), 2107-2115. https://doi.org/10.1016/j.eswa.2007.12.039
Abstract text summarization with a convolutional seq2seq model. Y Zhang, D Li, Y Wang, Y Fang, W Xiao, 10.3390/app9081665Applied Sciences (Switzerland). 89Zhang, Y., Li, D., Wang, Y., Fang, Y., & Xiao, W. (2019). Abstract text summarization with a convolutional seq2seq model. Applied Sciences (Switzerland), 9(8). https://doi.org/10.3390/app9081665
| [] |
[
"EMPLOYING WIKIPEDIA'S NATURAL INTELLIGENCE FOR CROSS LANGUAGE INFORMATION RETRIEVAL",
"EMPLOYING WIKIPEDIA'S NATURAL INTELLIGENCE FOR CROSS LANGUAGE INFORMATION RETRIEVAL"
] | [
"Mike)Mikhail Basilyan \nDepartment of Computer Science\nJohns Hopkins University Baltimore\nMD\n"
] | [
"Department of Computer Science\nJohns Hopkins University Baltimore\nMD"
] | [] | In this paper we present a novel method for retrieving information in languages other than that of the query. We use this technique in combination with existing traditional Cross Language Information Retrieval (CLIR) techniques to improve their results. This method has a number of advantages over traditional techniques that rely on machine translation to translate the query and then search the target document space using a machine translation. This method is not limited to the availability of a machine translation algorithm for the desired language and uses already existing sources of readily available translated information on the internet as a "middle-man" approach. In this paper we use Wikipedia; however, any similar multilingual, cross referenced body of documents can be used. For evaluation and comparison purposes we also implemented a traditional machine translation approach separately as well as the Wikipedia approach separately. (Directions for running the software are in Appendix A.) | null | [
"https://arxiv.org/pdf/0906.2835v1.pdf"
] | 12,356,465 | 0906.2835 | 9645ff46b0b40630050353ebf90c7e44c3afa0bb |
EMPLOYING WIKIPEDIA'S NATURAL INTELLIGENCE FOR CROSS LANGUAGE INFORMATION RETRIEVAL
13 May 2009
(Mike) Mikhail Basilyan
Department of Computer Science
Johns Hopkins University Baltimore
MD
In this paper we present a novel method for retrieving information in languages other than that of the query. We use this technique in combination with existing traditional Cross Language Information Retrieval (CLIR) techniques to improve their results. This method has a number of advantages over traditional techniques that rely on machine translation to translate the query and then search the target document space using a machine translation. This method is not limited to the availability of a machine translation algorithm for the desired language and uses already existing sources of readily available translated information on the internet as a "middle-man" approach. In this paper we use Wikipedia; however, any similar multilingual, cross referenced body of documents can be used. For evaluation and comparison purposes we also implemented a traditional machine translation approach separately as well as the Wikipedia approach separately. (Directions for running the software are in Appendix A.)
Introduction
The goal of this project and paper is to demonstrate a new method for querying documents of one language in another. We use Russian as the target language and English as the query language, though the results and algorithms can be easily extended to any language combinations and even multiple languages. We introduce a new technique for creating a query vector in the target language (Russian) using Wikipedia, and then we combine that vector with the vector derived using the machine translated query. From here on we will refer to the Wikipedia approach as such, and to the machine translation method as either the traditional method, the current method, or the "Babelfish method" after Yahoo!'s "Babelfish" translation tool.
Cross Language Information Retrieval
Cross Language Information Retrieval (CLIR) is the process of retrieving documents in a language or languages other than that of the original query. CLIR is important for many applications. For example, many document collections are made up of multiple languages, and it is more convenient to search the entire collection with the same query. Another example is searching for an image, graph, or diagram that is part of a larger document. If, for example, we are looking for a circuit schematic for some electronic device, it might not matter to us that the schematic is part of a document written in a language we cannot understand.
Current Methods
Current methods of CLIR usually involve a machine translation (MT) step. The query, the documents, or both are translated with an MT algorithm and compared using traditional information retrieval techniques. This process is limited by the effectiveness of modern MT algorithms. The technique proposed here attempts to sidestep the issues of MT; its advantages are described below.
Using Wikipedia for CLIR
Wikipedia i , the free online encyclopedia, has become an invaluable resource for many individuals looking for fast information. It is probably one of the biggest collections of freely available, organized human knowledge to date and contains millions of articles on obscure topics in a wide variety of languages: currently almost 3 million in English, about 800,000 in French, and approaching 900,000 in German. Articles on a variety of topics are written in multiple languages, though they are not translations of each other. For example, an article on "Golden Gate Bridge" appears in both English and Italian. ii Even though the Italian article is not a direct translation of the English one, it is very near to it in terms of fulfilling an informational need. Someone trying to find information on the famous red bridge in San Francisco would be similarly satisfied with either article, assuming they understand both languages.
The novelty of our approach lies in the use of Wikipedia's articles in different languages to transpose our informational need from the vector space in one language into another. For example, in order to search a French document collection for the English phrase "Big Ben," it is possible to perform a Wikipedia search for an English article closest in vector space or otherwise related to the term "Big Ben." Then using the contents of the cross referenced page in our target language, French, as a search query, we can identify the French documents in our target collection closest to it in vector space. Figure 1 illustrates this example graphically.
Added Advantage
Using Wikipedia to map a language query into the desired space provides us with a number of advantages over traditional CLIR techniques.
• Machine translation algorithms are not available for a variety of the world's less commonly spoken languages. Wikipedia, however, provides articles in hundreds of languages, with 88 languages containing over 10,000 articles and 25 languages with 100,000 articles. In comparison, the Yahoo! Babel Fish service iii can translate from English into just 12 other languages, from Spanish into just 2 others, and from Japanese into 1 other language.
• We are also able to bypass the traditional problems associated with machine translation. For example, translating "Big Ben" into Russian using Yahoo!'s Babelfish service iv yields "Большое Бен", a literal translation with a grammatical mistake that would yield very few hits. The common way to refer to "Big Ben" in Russian is "Биг-Бен", v which is more of a transliteration. (Wikipedia gets it right.)
• As Wikipedia grows and includes more obscure articles, this technique will become more effective.
• This technique is not limited to Wikipedia but can be used with any translated and cross referenced document collection that can serve as a "middle-man."
Combining the Two Methods
We implemented a CLIR search engine that has three options for searching a document collection in a foreign language. The first option allows the user to use the machine-translated query in combination with the Wikipedia-derived query to search the document space and get the best of both approaches. We also implemented the Wikipedia approach and the machine translation approach (using Yahoo!'s Babelfish machine translation service) separately for comparison purposes.
Languages & Libraries
Perl was chosen for implementing the algorithms. Its flexibility and extensibility make it well suited to rapid experimentation, and its text manipulation and regular expression facilities also make it a good choice for this task.
In addition to Perl and its built-in modules, the following modules were used: Google::Search for searching Wikipedia for the English articles; HTTP::Request, HTTP::Response, and LWP::UserAgent for retrieving Wikipedia pages and parsing the URL for the target language (Russian) from them; Lingua::Stem::Snowball for stemming the Russian words; and Lingua::Translate vi as the machine translation algorithm with which to compare our results. Figure 2 shows the outline of the overall process flow for the Wikipedia approach. The backend loads the document corpus (right-hand column), removes common words (using a Russian stoplist), and stems the remaining words using the Snowball stemmer vii . Each document is then encoded into its own vector using TF-IDF. On the user side, an English query is prompted for, the closest matching English Wikipedia page is retrieved, and the corresponding Russian page is fetched and encoded into a vector. Finally, each document vector is compared against the Russian Wikipedia page vector and the results are returned, sorted.
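To make this flow concrete, the following minimal Perl sketch shows one possible shape of the top-level driver. It is an illustration rather than the project's actual main.pl: the helper subroutines (load_corpus, build_vector, fetch_russian_wiki_text, cosine) and the corpus directory name are hypothetical names standing in for the steps sketched in the sections below.

use strict;
use warnings;

# Backend: load the Russian corpus and encode every document as a TF-IDF vector.
my %corpus = load_corpus('corpus/');   # hypothetical helper: doc ID => raw Russian text
my %doc_vectors;
$doc_vectors{$_} = build_vector( $corpus{$_} ) for keys %corpus;

# User side: English query -> Russian Wikipedia article text -> query vector.
print "Please enter your English search query:\n";
chomp( my $query = <STDIN> );
my $query_vector = build_vector( fetch_russian_wiki_text($query) );

# Rank documents by cosine similarity to the Wikipedia-derived query vector.
my @ranked = sort {
    cosine( $query_vector, $doc_vectors{$b} ) <=> cosine( $query_vector, $doc_vectors{$a} )
} keys %doc_vectors;
printf "%s\t%f\n", $_, cosine( $query_vector, $doc_vectors{$_} ) for @ranked;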
The Wikipedia Approach Algorithm
Finding Corresponding Wikipedia Articles
Finding the English Wikipedia article is done using the Google search API. We use Google to search the Wikipedia site for the English article using the query "$query site:wikipedia.org". It is important to point out that we are not using Google to perform any of the actual searching of the target Russian document corpus; we only use Google to find the corresponding English Wikipedia article. Even though Wikipedia provides its own search engine, it proved insufficient for our purposes, and so Google was used.
Once the English page is found, the link to the Russian equivalent is parsed out using regular expressions, and that page is fetched.
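As an illustration, the sketch below fetches the English page, pulls out the first link pointing at the Russian Wikipedia, and downloads the Russian article. It assumes the English article URL has already been located (for example with the Google step described above); the interlanguage-link regular expression and the crude tag stripping are approximations, not the exact patterns used in our code.

use strict;
use warnings;
use LWP::UserAgent;

sub fetch_russian_wiki_text {
    my ($english_url) = @_;
    my $ua = LWP::UserAgent->new;

    my $en_response = $ua->get($english_url);
    die "Could not fetch $english_url" unless $en_response->is_success;

    # Approximate pattern for the Russian interlanguage link in the sidebar.
    my ($ru_url) = $en_response->decoded_content =~ m{href="(https?://ru\.wikipedia\.org/wiki/[^"]+)"};
    die "No Russian interlanguage link found" unless $ru_url;

    my $ru_response = $ua->get($ru_url);
    die "Could not fetch $ru_url" unless $ru_response->is_success;

    # Crude tag stripping; the real system would extract the article body more carefully.
    ( my $text = $ru_response->decoded_content ) =~ s/<[^>]*>/ /g;
    return $text;
}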
Encoding Vectors
Once the Russian page has been retrieved, it is parsed into individual words. Common words, also known as stop words, such as "Я" (Russian for "I") and "Вы" (Russian for "you"), were removed. Then, using the Snowball stemming algorithm, we transformed words from their given tenses and conjugations into a common root form. For example, if a document or query contained the words "важно", "важное", "важной" (all variations of the word "important"), they would all be transformed into "важн". viii This helps make sure that someone searching for the word "важно" gets documents that contain "важной".
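A minimal sketch of this preprocessing step is shown below. The stoplist is only an illustrative subset (the real list is much longer), and the Cyrillic-based tokenization is an assumption about how the parsing was done, not a copy of the project's code.

use strict;
use warnings;
use utf8;
use Lingua::Stem::Snowball;

# Illustrative subset of the Russian stoplist; the real list is much longer.
my %STOP;
$STOP{$_} = 1 for qw(я вы и в не на);

my $stemmer = Lingua::Stem::Snowball->new( lang => 'ru', encoding => 'UTF-8' );

# Turn raw Russian text into a list of stemmed, stoplist-filtered tokens.
sub preprocess {
    my ($text) = @_;
    my @words = grep { length $_ && !$STOP{$_} }
                split /[^\p{Cyrillic}]+/, lc $text;
    # e.g. важно, важное, важной all become важн
    return $stemmer->stem( \@words );
}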
Each word was encoded into our vector using TF-IDF. For each word, we counted how frequently it occurs in the document and multiplied that count by log(N/df_t), where N is the number of documents in the collection and df_t is the number of documents in which the given term appears at least once. TF-IDF is widely believed to be the best-known weighting scheme in information retrieval: a term's weight increases both with the number of times it occurs in a document and with the rarity of the term across the collection.
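The weighting described above can be sketched in Perl as follows. The hash-of-hashes representation (document ID => { term => weight }) is an assumption about the data structures, chosen to match the vector comparison in the next section.

use strict;
use warnings;

# Build TF-IDF vectors (term => tf * log(N / df_t)) for a set of documents.
# %docs maps document IDs to array refs of preprocessed stems (see above).
sub tfidf_vectors {
    my (%docs) = @_;
    my $N = scalar keys %docs;

    # Document frequency: in how many documents does each term occur at least once?
    my %df;
    for my $stems (values %docs) {
        my %seen;
        $seen{$_} = 1 for @$stems;
        $df{$_}++ for keys %seen;
    }

    my %vectors;
    for my $id (keys %docs) {
        my %tf;
        $tf{$_}++ for @{ $docs{$id} };
        my %vec;
        $vec{$_} = $tf{$_} * log( $N / $df{$_} ) for keys %tf;
        $vectors{$id} = \%vec;
    }
    return %vectors;
}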
Comparing Vectors
Document vectors were compared using the cosine similarity:
cos(A, B) = (A · B) / (||A|| * ||B||)
The cosine similarity is the cosine of the angle between two vectors, A and B, in the term space. The closer the vectors are in the term space, the more terms they share and the more closely related they are. Another advantage of the cosine similarity is that it is self-normalizing, so vector lengths are not taken into account.
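A straightforward Perl sketch of this comparison over the sparse vectors built above is shown below; the vector arguments are hash refs of term => TF-IDF weight.

use strict;
use warnings;

# Cosine similarity between two sparse vectors (hash refs of term => weight).
sub cosine {
    my ($v1, $v2) = @_;

    my ($dot, $norm1, $norm2) = (0, 0, 0);
    $norm1 += $_ ** 2 for values %$v1;
    $norm2 += $_ ** 2 for values %$v2;

    for my $term (keys %$v1) {
        $dot += $v1->{$term} * $v2->{$term} if exists $v2->{$term};
    }

    return 0 unless $norm1 && $norm2;    # guard against empty vectors
    return $dot / ( sqrt($norm1) * sqrt($norm2) );
}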
Using a Machine Translation
Our machine translation was implemented by contacting Yahoo!'s Babelfish translator service and requesting that the English query be translated into Russian. The Russian query was then transformed into a vector the same way the Wikipedia page was.
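For illustration, the translation step might look like the hedged sketch below, which follows the documented Lingua::Translate interface named earlier; the language codes and the availability of the Babelfish back end are assumptions, not a copy of the project's code.

use strict;
use warnings;
use Lingua::Translate;

# Translate the English query into Russian before vectorizing it
# in the same way as the Wikipedia page text.
sub translate_query {
    my ($english_query) = @_;
    my $translator = Lingua::Translate->new( src => 'en', dest => 'ru' );
    return $translator->translate($english_query);
}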
Joining the two Approaches
The two queries were joined together by taking the union of the two vectors.
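A minimal sketch of this combination step is given below. The paper only specifies that the union of the two vectors is taken, so the rule used here for terms that appear in both vectors (keeping the larger weight) is our assumption.

use strict;
use warnings;

# Merge the Wikipedia-derived and Babelfish-derived query vectors.
sub union_vectors {
    my ($wiki_vec, $mt_vec) = @_;

    my %combined = %$wiki_vec;
    for my $term (keys %$mt_vec) {
        $combined{$term} = $mt_vec->{$term}
            if !exists $combined{$term} || $mt_vec->{$term} > $combined{$term};
    }
    return \%combined;
}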
Document Collection
Because we could not locate an existing Russian document corpus for information retrieval purposes, we had to build our own by hand. To this end, we scraped together on the order of one hundred paragraph-length segments from http://www.math-net.ru, a website that archives articles from Russian mathematical journals. We primarily used articles from 'Дискретная математика' (Discrete Mathematics). It is worth noting that the author's understanding of Russian and advanced discrete mathematics is embarrassingly low.
Evaluation & Results
We wanted to see what advantages, if any, the Wikipedia approach brings to the machine translation approach. To test this, we used the two approaches separately (options 2 and 3 in the software menu) on some discriminating queries that help illuminate the difference between them.
First, let us look at a case in which the Wikipedia approach, used alone, failed. We performed searches for the English query "complexity." This query demonstrates one of the main weaknesses of the Wikipedia approach: it turns out that there is no Russian Wikipedia article for "complexity," and so our search failed. Trying the Babelfish/machine translation approach alone yielded a satisfactory Russian translation, "сложность", and 15 apparently relevant results.
In a different trial, both approaches performed equally well: searching for "monotonic functions," the results were surprisingly similar. The following test was meant to demonstrate areas in which the Wikipedia approach performs extremely well and the machine translation approach used alone fails. Take the query "bubble sort." Our document collection contains a paragraph-long definition of the famous "bubble sort" algorithm that would be an ideal result for most queries for "bubble sort."
With the Wikipedia approach:
Using our Wikipedia method, the document we expected is ranked first (doc. ID 0000).
Using Yahoo!'s Babelfish to translate "bubble sort" we get "вид пузыря", which translates back into English as "a type of bubble." The expected document is not even among the top 7 results (our algorithm is configured not to return documents below a similarity threshold of 1e-12).

Running the above tests, among others, using software option 1 (combining the Wikipedia method with the machine translation method) yielded results that were as satisfactory as above or better. There are situations in which the Wikipedia approach provides lower-quality results, especially when the article in question covers many marginally related topics. In these situations it is helpful to use only the first twenty words of the article, which typically include the definition of the term. (This option is already in the software.)
Future Directions
This project has given very satisfactory initial results. It has demonstrated that it is possible to use Wikipedia to aid in cross language information retrieval tasks. In future revisions it would be valuable to consider other cross-referenced documents in addition to Wikipedia. Translated corporate websites come to mind.
It would also be valuable to find a more complete document corpus that could be used to evaluate the process as well as an expert group to determine which documents are relevant so that a more quantitative analysis can be performed.
Finally, it would be interesting to provide a mechanism that can learn which queries are helped by including the Wikipedia results and which are harmed by it, and weight the corresponding vectors appropriately, or allow the user to set the weights manually.
Conclusion
It appears that our approach provides significant improvements over the traditional machine translation method used alone. It is of no use in some situations, namely when there is no Wikipedia article matching the query or when the query is too complex to be captured in a Wikipedia article. However, given an appropriate cross-referenced, multilingual alternative to Wikipedia, any query is reasonable in theory. The approach performs extremely well for short, difficult-to-translate queries such as "bubble sort" or "root locus." In conclusion, it makes sense to use this approach in conjunction with machine translation algorithms, as implemented here, to augment difficult-to-translate phrases with Wikipedia-derived "translations."
Figure 1: Using Wikipedia to move from one language space into another
Figure 2: Wikipedia approach
ENGLISH WIKI: http://en.wikipedia.org/wiki/Monotonic
TRANSLATED WIKI: http://ru.wikipedia.org/wiki/%D0%9C%D0%BE%D0%BD%D0%BE%D1%82%D0%BE%D0%BD%D0%BD%D0%B0%D1%8F_%D1%84%D1%83%D0%BD%D0%BA%D1%86%D0%B8%D1%8F
RESULTS:
==============================================================================
RANK   DOC.ID   DOCUMENT TITLE                     SIMILARITY
==============================================================================
01     0028     О средней сложности монотонных     0.153419
02     0088     Оценки сложности одного метода     0.153412
03     0081     О свойствах функций                0.134879
04     0012     О сложности расшифровки разбие     0.127879
05     0038     О мощности некоторых подклассо     0.117888
06     0061     Некоторые свойства групп инерц     0.110278
07     0044     Сложность умножения в некоторы     0.090094
08     0057     О числе биюнктивных функций        0.075312
09     0006     О связи уровня аффинности с кр     0.074252
10     0010     Независимые системы порождающи     0.071059
11     0086     Абстрактные свойства класса ин     0.062243
12     0032     Об уровне аффинности булевых ф     0.061459
13     0078     О числе независимых множеств в     0.058187
14     0039     Эквациональное замыкание           0.058107
15     0052     О наследовании свойств при суж     0.056978
With the Babelfish approach:
Please enter your English search query:
monotonic function
Contacting Yahoo!'s Babelfish Translator Service...
Babelfish Query Translation: монотонно функция
RESULTS:
==============================================================================
RANK  DOC.ID  DOCUMENT TITLE                  SIMILARITY
==============================================================================
01    0028    О средней сложности монотонных  0.464489
02    0081    О свойствах функций             0.332347
03    0088    Оценки сложности одного метода  0.321211
04    0038    О мощности некоторых подклассо  0.317822
05    0061    Некоторые свойства групп инерц  0.261573
06    0012    О сложности расшифровки разбие  0.236581
07    0006    О связи уровня аффинности с кр  0.182960
08    0057    О числе биюнктивных функций     0.166212
09    0032    Об уровне аффинности булевых ф  0.151438
10    0036    О сложности вычисления диффере  0.129123
11    0052    О наследовании свойств при суж  0.122090
12    0025    Функция Шеннона сложности инте  0.114794
13    0024    О некоторых алгоритмах построе  0.110947
14    0003    язык Эта глава описывает одну   0.073595
15    0033    Бесповторность распознается сх  0.069425
Appendix A: Running the Code
The code requires the following modules to be installed from CPAN (the first three should come with a default Perl installation): HTTP::Request, HTTP::Response, LWP::UserAgent, Google::Search, Lingua::Stem::Snowball, Lingua::Translate. Then, to run the code, type "perl main.pl" in the directory into which the search engine was unpacked. An active internet connection is required to run queries.
i Wikipedia, Golden Gate Bridge; Big Ben
ii Wikipedia, Golden Gate Bridge <http://en.wikipedia.org/wiki/Golden_gate_bridge>
iii Yahoo!'s Babelfish <http://babelfish.yahoo.com/>
iv Yahoo!'s Babelfish <http://babelfish.yahoo.com/>
v Russian Wikipedia, Big Ben <http://ru.wikipedia.org/wiki/Биг-Бен>
vi Perl Archive Network <http://www.cpan.org/>
vii Snowball Stemmer <http://snowball.tartarus.org/>
viii Russian Stemming Algorithm <http://snowball.tartarus.org/algorithms/russian/stemmer.html>
| [] |
[
"Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing",
"Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing"
] | [
"FondazioneClaudio Eccher ",
"Bruno Kessler ",
"Italy Reviewed ",
"Angelo D ' Ambrosio ",
"Marcel Salathé marcel.salathe@epfl.ch ",
"Martin M Müller \nDigital Epidemiology Lab, EPFL\nGenevaSwitzerland\n\nSchool of Life Sciences, EPFL\nLausanneSwitzerland\n\nSchool of Computer and Communication Sciences\nEPFL\nLausanneSwitzerland\n",
"Marcel Salathé \nDigital Epidemiology Lab, EPFL\nGenevaSwitzerland\n\nSchool of Life Sciences, EPFL\nLausanneSwitzerland\n\nSchool of Computer and Communication Sciences\nEPFL\nLausanneSwitzerland\n",
"\nUniversity of Turin\nItaly\n",
"\nLaszlo Balkanyi\nRetired, SolnaSweden\n"
] | [
"Digital Epidemiology Lab, EPFL\nGenevaSwitzerland",
"School of Life Sciences, EPFL\nLausanneSwitzerland",
"School of Computer and Communication Sciences\nEPFL\nLausanneSwitzerland",
"Digital Epidemiology Lab, EPFL\nGenevaSwitzerland",
"School of Life Sciences, EPFL\nLausanneSwitzerland",
"School of Computer and Communication Sciences\nEPFL\nLausanneSwitzerland",
"University of Turin\nItaly",
"Laszlo Balkanyi\nRetired, SolnaSweden"
] | [
"Article 81 Citation: Müller MM and Salathé M (2019) Crowdbreaks: Tracking Health Trends Using"
] | In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams. At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change, and how rapidly algorithms are updated, which means that there is limited reusability for algorithms trained on past data as their performance decreases over time. Second, much of the work is focusing on specific issues during a specific past period in time, even though public health institutions would need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community. Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labeling of public social media content. The system is built in a way which automatizes the typical workflow from data collection, filtering, labeling and training of machine learning classifiers and therefore can greatly accelerate the research process in the public health domain. This work describes the technical aspects of the platform, thereby covering the functionalities at its current state and exploring its future use cases and extensions. | 10.3389/fpubh.2019.00081 | null | 21,700,973 | 1805.05491 | 7d4a2515e9239ab03339a92075802f07461e8084 |
Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing
Public Social Media Data and Crowdsourcing. Front. Public Health, April 2019
FondazioneClaudio Eccher
Bruno Kessler
Italy Reviewed
Angelo D ' Ambrosio
Marcel Salathé marcel.salathe@epfl.ch
Martin M Müller
Digital Epidemiology Lab, EPFL
GenevaSwitzerland
School of Life Sciences, EPFL
LausanneSwitzerland
School of Computer and Communication Sciences
EPFL
LausanneSwitzerland
Marcel Salathé
Digital Epidemiology Lab, EPFL
GenevaSwitzerland
School of Life Sciences, EPFL
LausanneSwitzerland
School of Computer and Communication Sciences
EPFL
LausanneSwitzerland
University of Turin
Italy
Laszlo Balkanyi
Retired, SolnaSweden
Crowdbreaks: Tracking Health Trends Using Public Social Media Data and Crowdsourcing
Article 81 Citation: Müller MM and Salathé M (2019) Crowdbreaks: Tracking Health Trends Using
Public Social Media Data and Crowdsourcing. Front. Public Health 7:81, April 2019. doi: 10.3389/fpubh.2019.00081. Received: 18 October 2018; Accepted: 19 March 2019. TECHNOLOGY REPORT. This article was submitted to Digital Health, a section of the journal Frontiers in Public Health. Keywords: data mining, natural language processing (NLP), crowdsourcing, social media data, sentiment analysis (SA), vaccination, data stream analytics, machine learning
In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams. At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change, and how rapidly algorithms are updated, which means that there is limited reusability for algorithms trained on past data as their performance decreases over time. Second, much of the work is focusing on specific issues during a specific past period in time, even though public health institutions would need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community. Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labeling of public social media content. The system is built in a way which automatizes the typical workflow from data collection, filtering, labeling and training of machine learning classifiers and therefore can greatly accelerate the research process in the public health domain. This work describes the technical aspects of the platform, thereby covering the functionalities at its current state and exploring its future use cases and extensions.
INTRODUCTION
In the past years, data derived from public social media has been successfully used for capturing diverse trends about health and disease-related issues, such as flu symptoms, sentiments toward vaccination, allergies, and many others (1)(2)(3)(4)(5). Most of these approaches are based on natural language processing (NLP) and share a common workflow. This workflow involves data collection, human annotation of a subset of this data, training of a supervised classifier, and subsequent analysis of the remaining data. The approach has proven promising in many cases, but it also shares a few shortcomings. A major drawback of this type of research process is that a model, which was trained on data from previous years, might not generalize well into the future. This issue, commonly known as concept drift (6), may not necessarily be only related to overfitting, but may simply be a consequence of how language and content, especially on the internet, evolve over time. A similar effect has been suggested to be the main reason for the increasing inaccuracy of Google Flu Trends (GFT), one of the most well-known flu surveillance systems in the past (7). After launching the platform in 2003, GFT's model had been retrained in 2009, which led to a significant improvement of its performance in the following years. However, during the influenza epidemic in 2012/13, the model's performance decreased again and overestimated the extent of the epidemic by a large margin. Shortly after, it was discontinued (8,9).
Apart from the issue of model drift, a second issue associated with current NLP models is that the collection of large amounts of labeled data, usually through platforms such as Amazon Turk 1 (MTurk), is very costly. Labeling a random subset of the collected social media data may be inefficient, as depending on the degree of filtering applied, large fractions of the collected data are possibly not relevant to the topic, and therefore have to be discarded.
Lastly, there is a growing interest in the public health field in capturing more fine-grained categorizations of trends, opinions, or emotions. Such categorizations could paint a more accurate picture of the nature of the health issue at hand. However, multi-class annotation of a large sample of data again exponentially increases costs.
Here, we introduce Crowdbreaks 2 , a platform targeted at tackling some of these issues. Crowdbreaks allows the continuous labeling of public social media content in a crowdsourced way. The system is built in a way which allows algorithms to improve as more labeled data is collected. This work describes the functionalities of the platform at its current state as well as its possible use cases and extensions.
SIMILAR WORK
In recent years, a number of platforms have been launched which allow the public to contribute to solving a specific scientific problem. Among many others, examples of successful projects include the Zooniverse platform (formerly known as Galaxy Zoo) (10), Crowdcrafting (11), eBird (a platform for collecting ornithological data) (12), and FoldIt (a platform to solve protein folding structures) (13). Many of these projects have shown that citizen science can be used to help solve complex scientific problems. At the same time, there is a growing number of platforms which offer monetary compensation to workers for the fulfillment of microtasks (the most prominent example being MTurk). These platforms gain importance as the need for large amounts of labeled data for the training of supervised machine learning algorithms increases. Previous work focused mostly on efficiency improvement of large-scale human annotation of images, e.g., in the context of the ImageNet project (14). Most of these improvements include better ways to select which data to annotate, how to annotate (which is a UI-specific problem), and what type of annotations (classes and subclasses) should be collected (15). Online task assignment algorithms have been suggested which may consider both label uncertainty as well as annotator uncertainty during the annotation process (16,17). Results suggest that this allows for a more efficient training of algorithms. More recently, a crowd-based scientific image annotation platform called Quantius has been proposed, showing decreased analysis time and cost (18). To our knowledge, no similar work has been proposed with regard to the human annotation of textual data, such as tweets.
METHODS AND TOOLS
Crowdbreaks is a platform which aims at automatizing the whole process from data collection (currently through Twitter), filtering, crowdsourced annotation and training of Machine Learning classifiers. Eventually these algorithms can help evaluate trends in health behaviors, such as vaccine hesitancy or the risk potential for disease outbreaks.
Crowdbreaks consists of a data collection pipeline 3 ("streaming pipeline") and a platform for the collection of labeled data 4 ("user interface"), connected through an API (Application Programming Interface), as schematized in Figure 1.
Streaming Pipeline
Currently Crowdbreaks consumes data from the Twitter streaming API only, therefore the rest of this work will focus on tweets as the only data source. However, it could be extended to any textual data which can be collected in the form of a data stream through an API. The Twitter API allows for the filtering of tweets by a specific set of keywords in realtime. Tweets collected contain at least one exact match within certain fields of the tweet object. Incoming tweets are put on a background job queue for filtering, pre-processing, geo-tag enrichment, and annotation with metadata, such as estimated relevance or sentiment (more on this in section 5). After these processing steps, tweets are stored in a database. Based on a priority score (e.g., the uncertainty of a predicted label, see section 3.3.1) the tweet IDs are also pushed into a priority queue for subsequent labeling. Once the priority queue has reached a certain size, older items with low priority are removed from the queue and replaced with more recent items. Therefore, the queue keeps a pool of recent tweets which are prioritized for labeling. Once a tweet has been labeled, it is ensured that the same tweet will be labeled by a certain number of distinct users in order to reach a consensus.
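As a rough sketch of the bounded priority pool described above (class name, capacity, and eviction details are illustrative assumptions, not the actual Crowdbreaks implementation), the labeling queue can be modeled as a fixed-size pool that evicts old, low-priority items as new tweets arrive:

```python
import heapq
import itertools

class LabelingPool:
    """Bounded pool of (priority, insertion_order, tweet_id) tuples for annotation."""

    def __init__(self, capacity=1000):       # capacity is an assumed value
        self.capacity = capacity
        self.heap = []                        # min-heap keyed on (priority, insertion order)
        self.counter = itertools.count()

    def push(self, tweet_id, priority):
        # priority could be, e.g., the prediction uncertainty of the current classifier
        heapq.heappush(self.heap, (priority, next(self.counter), tweet_id))
        if len(self.heap) > self.capacity:
            heapq.heappop(self.heap)          # drop the oldest, lowest-priority entry

    def pop_most_uncertain(self):
        # serve the highest-priority (most uncertain, most recent) tweet for annotation
        if not self.heap:
            return None
        item = max(self.heap)
        self.heap.remove(item)
        return item[2]
```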
User Interface
The user interface allows labeling of tweets based on answering of a sequence of questions. Arbitrary question sequences can be defined, which allow the annotation of multiple classes and subclasses to a single tweet. Most commonly, different followup questions would be asked depending on the answers given previously, e.g., whether or not the tweet is relevant to the topic at hand (see Figures 2A,B). In the beginning of a question sequence an API call is made to the streaming pipeline to retrieve a new tweet ID from the priority queue (see section 3.1). Every question a user answers creates a new row in a database table, containing the respective user, tweet, question and answer IDs. After the user has successfully finished the question sequence the respective user ID is then added to a set, in order to ensure that the same tweet is not labeled multiple times by the same user.
Crowdbreaks supports multiple projects, each project may be connected to its own data stream from Twitter. New projects can be created through an admin interface, making it possible to control both the data collection, as well as to define project-specific question sequences. Eventually, visualizations, such as sentiment trends over time, may be presented to the public user, allowing the users to see the outcomes of their work. Crowdbreaks also features an integration of the question sequence interface with Amazon Turk, allowing the collection of labeled data through paid crowdworkers as an alternative to public users.
Sentiment Analysis
Algorithms
In recent years, algorithms for sentiment analysis based on word embeddings have become increasingly more popular compared to traditional approaches which rely on manual feature engineering (19)(20)(21). Word embeddings give a high-dimensional vector representation of the input text, usually based on a pre-trained language model. Although these approaches may not consistently yield better results compared to traditional approaches, they allow for an easier automatization of the training workflow and are usually more generalizable to other problems. This is a desirable property in the context of Crowdbreaks, as it aims to further automatize this process and retrain classifiers automatically as more labeled data arrive. Furthermore, pre-trained word embeddings based on large Twitter corpora are available in different languages, which also make them interesting for following health trends in languages other than English (22). At its current state, the platform makes use of a baseline fastText classifier (21), which is trained on a small set of labeled data. FastText models are quickly re-trained and lead to small model sizes, making them suitable to be used in active learning production environments.
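To illustrate, a baseline fastText sentiment classifier of the kind mentioned above can be trained and re-trained in a few lines using the official fastText Python bindings; the file names and hyperparameters below are placeholders, not the settings used by Crowdbreaks:

```python
import fasttext

# train.txt holds one tweet per line, prefixed with its label,
# e.g. "__label__positive measles vaccines save lives"
model = fasttext.train_supervised(
    input="train.txt",
    epoch=25, lr=0.5, wordNgrams=2,   # assumed hyperparameters
)

labels, probs = model.predict("thinking about getting the flu shot", k=3)
print(labels, probs)                   # top predicted labels with their probabilities

model.save_model("vaccine_sentiment.bin")   # small model, quick to re-train and redeploy
```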
Active Learning
Active learning frameworks have been proposed for a more efficient training of classifiers in the context of word embeddings (23,24). These frameworks allow algorithms to be trained with a much smaller number of annotated data, compared to a standard supervised training workflow (see Figure 3). The query strategy, which is usually related to label uncertainty, is generally the critical component for the relative performance speed-up of these methods. In the context of Crowdbreaks, we are not only prioritizing data with higher label uncertainty, but also data which is more recent in time. Therefore, we are faced with a trade-off between exploration and exploitation with regard to label uncertainty and timeliness of data. Crowdbreaks can serve as a framework to explore these challenges and find the right balance.
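One way to realize the exploration/exploitation trade-off sketched above is to score each candidate tweet by a weighted mix of label uncertainty and recency; the entropy-based uncertainty, the exponential recency decay, and the weighting below are illustrative assumptions rather than the platform's actual query strategy:

```python
import numpy as np

def priority_scores(class_probs, ages_in_hours, recency_half_life=24.0, alpha=0.7):
    """Blend label uncertainty (entropy) with the timeliness of each tweet.

    class_probs   : (n_tweets, n_classes) predicted probabilities
    ages_in_hours : (n_tweets,) time since the tweet was collected
    alpha         : weight on uncertainty vs. recency (assumed value)
    """
    probs = np.clip(class_probs, 1e-12, 1.0)
    entropy = -(probs * np.log(probs)).sum(axis=1)
    uncertainty = entropy / np.log(probs.shape[1])        # normalize to [0, 1]
    recency = np.exp(-np.asarray(ages_in_hours) / recency_half_life)
    return alpha * uncertainty + (1 - alpha) * recency    # higher score = label sooner

# Tweets with the highest scores would be pushed to the annotation queue first.
```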
Technologies Used
Crowdbreaks uses a Python Flask API to interface between the components of the streaming pipeline and the user interface. The streaming pipeline makes use of Redis for the message queuing of the processing queue as well as the priority queue (see Figure 1). Filtering and data processing, as well as NLP-related tasks are written in Python using the standard data analysis toolchain (numpy, scipy, nltk). Tweet objects are stored as flat files as well as in JSON format on Elasticsearch, which allows for an easier exploration and visualization of the data using Kibana. The user interface is built using Ruby on Rails with a postgres database backend in order to store the annotations, as well as user-related data.
All tools in the Crowdbreaks stack are open source and easy to deploy using Docker. The choice of tools was influenced by their long-term availability, community support, and openness.
FIGURE 3 | Crowdbreaks can be seen as an active learning framework which allows algorithms to improve as more labels are collected. In this example, an algorithm tries to learn sentiments from tweets and is given an initial small set of labeled data to be trained on. This algorithm may then be used to predict the labels and label uncertainty of newly collected tweets. Subsequently, tweets which the algorithm is most uncertain about will be presented to human annotators. As new labeled data is generated, the algorithm is retrained to further improve in performance.
RESULTS
The intensity, spread and effects of public opinion toward vaccination on social media and news sources has been explored in previous work (3,25). Declines in vaccine confidence and boycotts of vaccination programs could sometimes be linked to disease outbreaks or set back efforts to eradicate certain diseases, such as polio or measles (26,27). In particular, the potential benefits of real-time monitoring of vaccine sentiments as a tool for the improved planning of public health intervention programs has been highlighted (28)(29)(30). Tracking of such sentiments toward vaccines is a primary use case of Crowdbreaks.
Between July 2018 and January 2019, tweets were collected through the Twitter Streaming API using a list of vaccine-related keywords 5 and predicted using a supervised bag-of-words fastText classifier 6 . The classifier was trained on annotated data (collected through MTurk) provided in recent work by Pananos et al. (29), resulting in micro-averaged precision and recall scores of 77.0%. The collected annotations include the label classes "positive," "negative," and "other" (in this work denoted as "neutral") with regard to the attitude toward vaccinations the tweets express. For a detailed reasoning of how and why these specific labels and keywords were selected, please refer to the work by Pananos et al. As shown in Figure 4, we observe most of the discussion surrounding vaccination to be either neutral or positive. The fraction of data classified as "anti-vaccine" is below 10% and remains relatively constant at that level. Furthermore, we observe that the weekly tweet count exhibits a large variance in terms of volume over time. This effect can be mitigated by calculating a normalized ratio of positive and negative counts in a rolling window of 1 month, which we call the "sentiment index" (Figure 4, black curve). The sentiment index is calculated as (r − µ)/σ, in which r is the fraction of tweets predicted as positive among positive and negative tweets, and µ and σ are the mean and standard deviation of this ratio, respectively. This value remains largely constant over time, then increases after August 2018 due to an increase in the number of tweets predicted as "pro-vaccine," and stays at that level. Further investigation will be needed in order to understand the nature of this change. Although these results are only of a preliminary nature, they illustrate the potential of the platform to track health trends over time.
5 The keywords include "vaccine," "vaccination," "vaxxer," "vaxxed," "vaccinated," "vaccinating," "vacine."
6 Data and code of the analysis are provided under https://github.com/salathegroup/crowdbreaks-paper.
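For concreteness, the sentiment index defined above can be computed from weekly counts of tweets predicted positive and negative, aggregated over roughly one month; the synthetic counts and the four-week window below are placeholders for the real data:

```python
import pandas as pd

# weekly counts of tweets predicted as pro- and anti-vaccine (placeholder numbers)
counts = pd.DataFrame({
    "positive": [120, 150, 90, 200, 240, 260, 230, 250],
    "negative": [30, 25, 20, 35, 30, 28, 25, 26],
})

window = 4                                  # roughly one month of weekly bins
pos = counts["positive"].rolling(window).sum()
neg = counts["negative"].rolling(window).sum()
r = pos / (pos + neg)                       # fraction of positives among positive+negative

sentiment_index = (r - r.mean()) / r.std()  # (r - mu) / sigma, as defined in the text
print(sentiment_index.round(2))
```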
DISCUSSION
Here we introduced Crowdbreaks, an open tool allowing any researcher to start measurements of health trends in real time from public social media content. As illustrated in the use case on vaccine sentiments, the platform can be used to monitor such sentiments and detect long-term shifts in health trends. Further analysis will be needed to reveal the spatial distribution of the predicted vaccine sentiment as well as its correlation with vaccination coverage or disease outbreak data; such analysis, however, goes beyond the scope of this work. Unlike in traditional settings of measuring vaccine sentiment, the platform involves crowdworkers as well as the general public to collect new annotations continuously over time. This allows models to be re-trained and counteracts the problem of concept drift. In the future, we may use the platform to measure more fine-grained categorizations of this data, hence improving our understanding of attitudes toward vaccination.
A major goal of the platform is the eventual incorporation of similar models into the public health decision-making process. In order to achieve this, there is a need for proper validation and benchmarking of machine learning models, which in turn increases both trust and transparency of algorithms used for such purpose (31). In the future, annotation data generated on Crowdbreaks may be released in public challenges, thereby creating an open benchmark for a specific problem.
Although the platform focuses on the measurement of health trends, Crowdbreaks may also be used for tracking flu or other infectious diseases in the future. However, disease prediction solely from Twitter data remains a hard problem. This is because a precise understanding of the content (e.g., whether a tweet merely raises awareness vs. actually reports an infection) is crucial for the robustness of the model. Previous work has suggested hybrid models combining Twitter with less volatile data sources (such as Wikipedia page view rates) to be superior for the purpose of outbreak tracking (32,33). Such hybrid models may serve as a future direction for disease prediction projects on Crowdbreaks.
AUTHOR CONTRIBUTIONS
MM built the platform, did the analysis, wrote large parts of the paper, and made the figures. MS had the initial idea for the project, drafted the initial design of the platform, and wrote the abstract of the paper. All authors revised the manuscript and made corrections.
FIGURE 1 | Overview of the architecture of the Crowdbreaks platform. The platform consists of a streaming pipeline (a message queueing system) and a user interface, linked through an API.
FIGURE 2 | (A) An example of a question sequence. Questions are denoted by Q, answers by a, and the arrows designate the possible transitions between questions. In the given example, different questions are reached depending on whether an annotator answers Q 1 with a 1,1 or a 1,2, allowing for an efficient and fine-grained annotation of the data. (B) Screenshot of the annotation interface. Shown is a question for determining the vaccine sentiment of a tweet which has been deemed relevant to the topic.
FIGURE 4 | Real-time predictions of vaccine sentiments using Crowdbreaks. The data is based on a Twitter data stream filtered by vaccine-related keywords. Colored values indicate the stacked 1-week moving averages of tweet counts of the respective label class. The black curve denotes a sentiment index which reflects a lowess fit of the normalized ratio of counts of tweets predicted as positive and negative, aggregated in a 1 month window. The sentiment index reveals certain long-term trends irrespective of the high variance in volume over time.
1 https://www.mturk.com
2 https://www.crowdbreaks.org
3 https://github.com/crowdbreaks/crowdbreaks-streamer
4 https://github.com/crowdbreaks/crowdbreaks
ACKNOWLEDGMENTS
We thank Sean Carroll, Yannis Jaquet, Djilani Kebaili, and S. P. Mohanty for valuable discussions and help regarding the technical aspects of this project. Thanks also to Chloé Allémann for comments and Laura Symul for advice on visualization. Icons by Font Awesome, licensed under Creative Commons Attribution 4.0 International.
Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Copyright © 2019 Müller and Salathé. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Towards detecting influenza epidemics by analyzing Twitter messages. A Culotta, Proceedings of the First Workshop on Social Media Analytics. the First Workshop on Social Media AnalyticsWashington, DCACMCulotta A. Towards detecting influenza epidemics by analyzing Twitter messages. In: Proceedings of the First Workshop on Social Media Analytics. Washington, DC: ACM (2010). p. 115-22.
You are what you Tweet: analyzing Twitter for public health. M J Paul, M Dredze, Icwsm. 20Paul MJ, Dredze M. You are what you Tweet: analyzing Twitter for public health. Icwsm. (2011) 20:265-72. Retrieved from: https://www.aaai.org
Assessing vaccination sentiments with online social media: implications for infectious disease dynamics and control. M Salathé, S Khandelwal, 10.1371/journal.pcbi.1002199PLoS Comput Biol. 71002199Salathé M, Khandelwal S. Assessing vaccination sentiments with online social media: implications for infectious disease dynamics and control. PLoS Comput Biol. (2011) 7:e1002199. doi: 10.1371/journal.pcbi.1002199
A model for mining public health topics from Twitter. M J Paul, M Dredze, Health. 11Paul MJ, Dredze M. A model for mining public health topics from Twitter. Health. (2012) 11:16-6. Retrieved from: https://www.semanticscholar.org
A framework for detecting public health trends with twitter. J Parker, Y Wei, A Yates, O Frieder, N Goharian, Proceedings of the 2013. the 2013Parker J, Wei Y, Yates A, Frieder O, Goharian N. A framework for detecting public health trends with twitter. In: Proceedings of the 2013
IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. Niagara FallsACMIEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. Niagara Falls, ACM (2013). p. 556-63.
Learning in the presence of concept drift and hidden contexts. G Widmer, M Kubat, 10.1007/BF00116900Mach Learn. 23Widmer G, Kubat M. Learning in the presence of concept drift and hidden contexts. Mach Learn. (1996) 23:69-101. doi: 10.1007/BF00116900
Detecting influenza epidemics using search engine query data. J Ginsberg, M H Mohebbi, R S Patel, L Brammer, M S Smolinski, L Brilliant, 10.1038/nature07634Nature. 4571012Ginsberg J, Mohebbi MH, Patel RS, Brammer L, Smolinski MS, Brilliant L. Detecting influenza epidemics using search engine query data. Nature. (2009) 457:1012. doi: 10.1038/nature07634
The parable of google flu: Traps in big data analysis. D Lazer, R Kennedy, G King, A Vespignani, 10.1126/science.1248506Science. 343Lazer D, Kennedy R, King G, Vespignani A. The parable of google flu: Traps in big data analysis. Science. (2014) 343:1203-5. doi: 10.1126/science.1248506
When Google got flu wrong. D Butler, Nature. 494Butler D. When Google got flu wrong. Nature. (2013) 494:155-6.
Zooniverse: observing the world's largest citizen science platform. R Simpson, K R Page, D De Roure, 10.1145/2567948.2579215Proceedings of the 23rd International Conference on World Wide Web. the 23rd International Conference on World Wide WebSimpson R, Page KR, De Roure D. Zooniverse: observing the world's largest citizen science platform. In: Proceedings of the 23rd International Conference on World Wide Web. (2014). p. 1049-54. doi: 10.1145/2567948.25 79215
Crowdcrafting. Available online at: https://crowdcrafting.org
eBird: engaging birders in science and conservation. C Wood, B Sullivan, M Iliff, D Fink, S Kelling, 10.1371/journal.pbio.1001220PLoS Biol. 9Wood C, Sullivan B, Iliff M, Fink D, Kelling S. eBird: engaging birders in science and conservation. PLoS Biol. (2011) 9:e1001220. doi: 10.1371/journal.pbio.1001220
Crystal structure of a monomeric retroviral protease solved by protein folding game players. F Khatib, F Dimaio, S Cooper, M Kazmierczyk, M Gilski, S Krzywda, 10.1038/nsmb.2119Nat Struct Mol Biol. 18Khatib F, Dimaio F, Cooper S, Kazmierczyk M, Gilski M, Krzywda S, et al. Crystal structure of a monomeric retroviral protease solved by protein folding game players. Nat Struct Mol Biol. (2010) 18:1175-7. doi: 10.1038/nsmb.2119
ImageNet large scale visual recognition challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, 10.1007/s11263-015-0816-yInt J Comput Vis. 115Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis. (2015) 115:211-52. doi: 10.1007/s11263-015-0816-y
Crowdsourcing in computer vision. A Kovashka, O Russakovsky, L Fei-Fei, K Grauman, 10.1561/0600000073Foundations and Trends R in computer graphics and Vision. 10Kovashka A, Russakovsky O, Fei-Fei L, Grauman K. Crowdsourcing in computer vision. Foundations and Trends R in computer graphics and Vision. (2016) 10:177-243. doi: 10.1561/0600000073
Online crowdsourcing: rating annotators and obtaining cost-effective labels. P Welinder, P Perona, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops. San Francisco, CACVPRWWelinder P, Perona P. Online crowdsourcing: rating annotators and obtaining cost-effective labels. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, San Francisco, CA: CVPRW (2010). p. 25-32.
Online task assignment in crowdsourcing markets. C J Ho, J W Vaughan, Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence. the Twenty-Sixth AAAI Conference on Artificial IntelligenceHo CJ, Vaughan JW. Online task assignment in crowdsourcing markets. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence. (2012) (Kuhn 1955). p. 45-51. Available online at: http://arxiv.org/abs/1508. 03593
Quantius: generic, high-fidelity human annotation of scientific images at 10^5-clicks-per-hour. A J Hughes, J D Mornin, S K Biswas, D P Bauer, S Bianco, Z J Gartner, 10.1038/s41592-018-0069-0Nat Methods. Hughes AJ, Mornin JD, Biswas SK, Bauer DP, Bianco S, Gartner ZJ. Quantius: generic, high-fidelity human annotation of scientific images at 10^5-clicks-per-hour. Nat Methods. (2017). doi: 10.1038/s41592-018-0069-0
A neural probabilistic language model. Y Bengio, R Ducharme, P Vincent, C Janvin, 10.1162/153244303322533223J Mach Learn Res. 3Bengio Y, Ducharme R, Vincent P, Janvin C. A neural probabilistic language model. J Mach Learn Res. (2003) 3:1137-55. doi: 10.1162/153244303322533223
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, arXiv:13013781.arXiv [preprintMikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. arXiv [preprint]. arXiv:13013781. (2013). Retrieved from: https://arxiv.org
Bag of tricks for efficient text classification. A Joulin, E Grave, P Bojanowski, T Mikolov, arXiv:160701759.arXiv [preprintJoulin A, Grave E, Bojanowski P, Mikolov T. Bag of tricks for efficient text classification. arXiv [preprint]. arXiv:160701759. (2016). Retrieved from: https://arxiv.org
Leveraging large amounts of weakly supervised data for multi-language sentiment classification. J Deriu, A Lucchi, V De Luca, A Severyn, S Müller, M Cieliebak, Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee. the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering CommitteeDeriu J, Lucchi A, De Luca V, Severyn A, Müller S, Cieliebak M, et al. Leveraging large amounts of weakly supervised data for multi-language sentiment classification. In: Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee (2017). p. 1045-52.
Clinical information extraction using small data: an active learning approach based on sequence representations and word embeddings. M Kholghi, De Vine, L Sitbon, L Zuccon, G Nguyen, A , 10.1002/asi.23936J Assoc Inf Sci Technol. 68Kholghi M, De Vine L, Sitbon L, Zuccon G, Nguyen A. Clinical information extraction using small data: an active learning approach based on sequence representations and word embeddings. J Assoc Inf Sci Technol. (2017) 68:2543-56. doi: 10.1002/asi.23936
Active Discriminative Word Embedding Learning. Y Zhang, B Wallace, Available online atZhang Y, Wallace B. Active Discriminative Word Embedding Learning. NAACL (2016). Available online at: http://arxiv.org/abs/1606.04212
Assessing and responding in real time to online anti-vaccine sentiment during a flu pandemic. N Seeman, A Ing, C Rizo, 10.12927/hcq.2010.21923Healthc Q. 13Seeman N, Ing A, Rizo C. Assessing and responding in real time to online anti-vaccine sentiment during a flu pandemic. Healthc Q. (2010) 13:8-15. doi: 10.12927/hcq.2010.21923
Lessons from polio eradication. H J Larson, I Ghinai, 10.1038/473446aNature. 473446Larson HJ, Ghinai I. Lessons from polio eradication. Nature. (2011) 473:446. doi: 10.1038/473446a
Polio vaccines-"no thank you!" barriers to polio eradication in Northern Nigeria. M Yahya, 10.1093/afraf/adm016Af Aff. 106Yahya M. Polio vaccines-"no thank you!" barriers to polio eradication in Northern Nigeria. Af Aff. (2007) 106:185-204. doi: 10.1093/afraf/ adm016
Measuring vaccine confidence: analysis of data obtained by a media surveillance system used to analyse public concerns about vaccines. H J Larson, Dmd Smith, P Paterson, M Cumming, E Eckersberger, C C Freifeld, 10.1016/S1473-3099(13)70108-7Lancet Infect Dis. 13Larson HJ, Smith DMD, Paterson P, Cumming M, Eckersberger E, Freifeld CC, et al. Measuring vaccine confidence: analysis of data obtained by a media surveillance system used to analyse public concerns about vaccines. Lancet Infect Dis. (2013) 13:606-13. doi: 10.1016/S1473-3099(13)70108-7
Critical dynamics in population vaccinating behavior. A D Pananos, T M Bury, C Wang, J Schonfeld, S P Mohanty, B Nyhan, 10.1073/pnas.1704093114Proc Natl Acad Sci. 114Pananos AD, Bury TM, Wang C, Schonfeld J, Mohanty SP, Nyhan B, et al. Critical dynamics in population vaccinating behavior. Proc Natl Acad Sci USA. (2017) 114:13762-7. doi: 10.1073/pnas.1704093114
Publicly available online tool facilitates real-time monitoring of vaccine conversations and sentiments. C Y Bahk, M Cumming, L Paushter, L C Madoff, A Thomson, J S Brownstein, 10.1377/hlthaff.2015.1092Health Aff. 35Bahk CY, Cumming M, Paushter L, Madoff LC, Thomson A, Brownstein JS. Publicly available online tool facilitates real-time monitoring of vaccine conversations and sentiments. Health Aff. (2016) 35:341-7. doi: 10.1377/hlthaff.2015.1092
. M Salathé, T Wiegand, M Wenzel, arXiv:180904797.preprintSalathé M, Wiegand T, Wenzel M. Focus group on artificial intelligence for health. arXiv [preprint]. arXiv:180904797. (2018). Retrieved from: https:// arxiv.org
Wikipedia usage estimates prevalence of influenzalike illness in the united states in near real-time. D J Mciver, J S Brownstein, 10.1371/journal.pcbi.1003581PLoS Comput Biol. 101003581McIver DJ, Brownstein JS. Wikipedia usage estimates prevalence of influenza- like illness in the united states in near real-time. PLoS Comput Biol. (2014) 10:e1003581. doi: 10.1371/journal.pcbi.1003581
Combining search, social media, and traditional data sources to improve influenza surveillance. M Santillana, A T Nguyen, M Dredze, M J Paul, O Nsoesie, J S Brownstein, 10.1371/journal.pcbi.1004513PLoS Comput Biol. 111004513Santillana M, Nguyen AT, Dredze M, Paul MJ, Nsoesie O, Brownstein JS. Combining search, social media, and traditional data sources to improve influenza surveillance. PLoS Comput Biol. (2015) 11:e1004513. doi: 10.1371/journal.pcbi.1004513
| [
"https://github.com/crowdbreaks/crowdbreaks-streamer",
"https://github.com/crowdbreaks/crowdbreaks"
] |
[
"Modernizing Open-Set Speech Language Identification",
"Modernizing Open-Set Speech Language Identification"
] | [
"Mustafa Eyceoz ",
"Justin Lee ",
"Homayoon Beigi \nRecognition Technologies, Inc. and Columbia University\nNew York\n",
"\nDept. of Computer Science\nColumbia University\nNew York\n"
] | [
"Recognition Technologies, Inc. and Columbia University\nNew York",
"Dept. of Computer Science\nColumbia University\nNew York"
] | [] | While most modern speech Language Identification methods are closed-set, we want to see if they can be modified and adapted for the open-set problem. When switching to the open-set problem, the solution gains the ability to reject an audio input when it fails to match any of our known language options. We tackle the openset task by adapting two modern-day state-of-the-art approaches to closed-set language identification: the first using a CRNN with attention and the second using a TDNN. In addition to enhancing our input feature embeddings using MFCCs, log spectral features, and pitch, we will be attempting two approaches to out-of-set language detection: one using thresholds, and the other essentially performing a verification task. We will compare both the performance of the TDNN and the CRNN, as well as our detection approaches. | 10.13140/rg.2.2.24797.28647 | [
"https://arxiv.org/pdf/2205.10397v1.pdf"
] | 248,987,706 | 2205.10397 | c0db9e72df5967e62b3264ca37f99ff27968c649 |
Modernizing Open-Set Speech Language Identification
Mustafa Eyceoz
Justin Lee
Homayoon Beigi
Recognition Technologies, Inc. and Columbia University
New York
Dept. of Computer Science
Columbia University
New York
Modernizing Open-Set Speech Language Identification
Index Terms-Speech language identification, open-set, closed-set, CRNN, TDNN, attention, threshold, verification
While most modern speech Language Identification methods are closed-set, we want to see if they can be modified and adapted for the open-set problem. When switching to the open-set problem, the solution gains the ability to reject an audio input when it fails to match any of our known language options. We tackle the openset task by adapting two modern-day state-of-the-art approaches to closed-set language identification: the first using a CRNN with attention and the second using a TDNN. In addition to enhancing our input feature embeddings using MFCCs, log spectral features, and pitch, we will be attempting two approaches to out-of-set language detection: one using thresholds, and the other essentially performing a verification task. We will compare both the performance of the TDNN and the CRNN, as well as our detection approaches.
Introduction
Speech Language Identification is the process of taking audio as input and determining which language is being spoken, if any. There are two variants of the language identification problem (which will henceforth be referred to as LID): open-set and closed-set [1]. In closed-set LID, a fixed set of languages to identify is defined, and for every audio input, the "most probable" language within the set is output. In open-set LID, however, we also gain the option to "reject" that prediction and detect when the audio input matches none of our known languages well. Open-set LID can also allow the system to identify and learn new languages.
Today, there are a number of modern-day state-of-the-art approaches to language identification, but almost all of them have opted to take the closed-set approach. In an era of data abundance, the limitations of the closed-set solution are typically circumvented by including hundreds of languages and training on thousands of hours of data for each of them. This workaround is still not as satisfactory as a true open-set solution, though, as it lacks the ability to detect and reject or learn unknown languages, and in those cases it will unavoidably output an incorrect prediction. Therefore, our goal is to adapt and modify these various state-of-the-art closed-set solutions to attempt the open-set task, see how well they perform, and determine which implementation performs best.
Related Works and State of the Art
To start, we will be looking at a few of the current best-performing closed-set solutions. Convolutional Recurrent Neural Networks, or CRNNs, have become increasingly popular in LID over recent years. Solutions like that of Bartz et al in 2017 [2] initially used spectrograms as inputs, but over a number of iterations we have seen the best performance come from solutions like the recent 2021 paper from Mandal et al [3], which proposes the use of a CRNN with attention, as well as using Mel-frequency Cepstral Coefficient (MFCC) [1] features of audio samples as input.
Another classic yet still high-performing method of both LID and general speech recognition is to use the same MFCCs, but now use them as input for a time-delay neural network, or TDNN. TDNNs are capable of modeling long-term context information, which is why they are often used in various speech recognition tasks.
There is also a third method of speech LID that, either in this paper or in future works, may be worth exploring. This method actually separates the tasks of speech recognition and language identification. First, the speech is converted to text. While TDNNs have long been used for speech-to-text, the current top performers for this task are all wav2vec 2.0 [4] implementations. Once the text has been obtained, textual LID is currently done best using bi-directional LSTMs (long short-term memory, a type of recurrent neural network or RNN architecture), like the implementation by Toftrup et al [5] that builds off an outline by Apple.
For this paper, we primarily focus on the direct speech methods for LID, modifying both the input selection and various other points of the architectures (to be touched on further in section IV, Proposed Methodology). It is also possible that we explore the speech-text-LID method as well, though that may be left for a continuation of this work.
When looking at previous open-set work, we draw inspiration from both the exploration of thresholding functions and curves for out-of-set language detection and rejection, like the work of Rebai et al [6], as well as creating deeper embeddings with linear discriminant analysis (LDA) [
Datasets
The dataset used in this research consists of audio and text from 9 different language sources. For our in-set languages, we will be using French, Turkish, Spanish, Korean, Mandarin, English, and Russian. For our out-of-set languages, we will be evaluating using Javanese and Bengali.
MediaSpeech is a dataset containing French, Turkish, and Spanish media speech. Originally built for testing the performance of Automated Speech Recognition (ASR) systems, MediaSpeech contains 10 hours of speech for each language provided. [8]
Pansori-TEDxKR is a dataset generated from Korean-language TEDx talks from 2010 to 2014. This corpus has about three hours of speech from 41 speakers. [9]
Primewords Chinese Corpus Set 1 is a Chinese Mandarin corpus released by Shanghai Primewords Co. Ltd. and contains 100 hours of speech. This corpus was built from smartphone recordings of 296 native Chinese speakers and has a transcription accuracy greater than 98% at a confidence level of 95%. [10]
Free ST American English Corpus is a free American English corpus by Surfingtech. It contains the utterances of 10 speakers, with each speaker having approximately 350 utterances. [11]
Russian LibriSpeech is a Russian dataset based on LibriVox audiobooks. It contains approximately 98 hours of audio data. [12]
Note that each of the datasets mentioned above will be normalized so that each in-set language is represented by an equal number of hours of audio in order to prevent any skewing among the in-set languages.
For evaluation, the first additional out-of-set language is Javanese. The Large Javanese ASR training data set contains approximately 185,000 utterances in Javanese and was collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. [13] The second out-of-set language is Bengali. The Large Bengali ASR training data set contains approximately 196,000 transcribed utterances in Bengali. [13]
Proposed Methodology
The two methods we will be adapting and comparing for open-set performance are the CRNN with attention solution and the TDNN solution.
First, to obtain our feature embeddings to use as input for the TDNN and CRNN, we must process the data through a number of steps. We start by performing a discrete Fourier transform on data frames to generate the spectral estimates. From this we can obtain the log spectral features. With an additional discrete cosine transform we can obtain the MFCCs. We then concatenate the log spectral features [1] with the MFCCs, as well as some additional pitch information. To ensure our embeddings have all the information needed for the task, we may also concatenate 100 dimensional i-vectors [14]. From here, we will then pass these embeddings through an LDA [1] to perform both dimensionality and correlation reduction of the features.
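As a rough illustration of this pipeline, the sketch below computes a per-frame embedding with librosa; the library choice, the dimensions, the YIN pitch tracker, and the omission of the i-vector step are all assumptions rather than a prescribed implementation:

```python
import numpy as np
import librosa

def frame_features(wav_path, sr=16000, n_mfcc=13, n_mels=40):
    """Per-frame log spectral features + MFCCs + pitch (dimensions are assumed values)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)                       # log spectral features
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # cosine transform of the log spectrum
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)            # pitch track (frequency range is assumed)
    n = min(log_mel.shape[1], mfcc.shape[1], len(f0))        # align frame counts
    return np.vstack([log_mel[:, :n], mfcc[:, :n], f0[None, :n]]).T   # (frames, dims)

# The concatenated per-frame vectors are subsequently reduced with LDA
# (e.g., scikit-learn's LinearDiscriminantAnalysis), supervised by the language label.
```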
Once we have obtained our final feature embeddings, we will be using them to train and compare our two models: the first being our CRNN with attention, and the second being our TDNN. For both, our initial implementation will have a softmax output layer for language identification, as well as threshold curves and functions used on output for out-of-set language detection and rejection.
Our second approach will be more complex. After training our softmax output, we will attempt to continue to a deeper embedding by training an LDA transformation set, both allowing us to treat the open-set problem as a verification task, as well as potentially giving us the ability to add new languages without having to retrain the initial model, instead simply using these deeper embeddings.
We will be comparing performance of the TDNN model vs the CRNN model on both of the proposed approaches, as well as seeing which of the two approaches generally performs better on the open-set task. We currently expect the more modern CRNN model to at least slightly outperform the TDNN model, though we have no expectation for which of the out-of-set detection approaches will perform better (although regardless the verification approach will provide additional functionality over the threshold approach).
It is worth noting that there is also a chance we attempt these two approaches on the wav2vec 2.0 + bi-LSTM method of LID, but that may very well be saved for future work.
See Figure 1 for the proposed architecture for our open-set language identification approach.
The Process: Data Preparation and Feature Extraction
Data Preparation
After downloading the necessary datasets from Open Speech and Language Resources (OpenSLR) [15], we proceeded with data preparation so that all of the data for each language was in the correct format and structure for the Kaldi scripts used in feature extraction. We started by formatting each dataset in the same way: all datasets came with different file formats and structures; and some came with audio data in the form of FLAC files while others came as WAV files. The first step in our data formatting process was to convert all audio files to WAV format. WAV, also known as Waveform Audio File Format, is the main format of audio that is used by the Kaldi scripts, which are subsequently used for feature extraction. Each audio file in our original datasets were either FLAC or WAV, so all FLAC files were converted to WAV using ffmpeg software.
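As an illustration of the conversion step, the FLAC-to-WAV conversion can be scripted around the ffmpeg command line; the paths are placeholders, and this is a sketch of the step rather than our exact script:

```python
import subprocess
from pathlib import Path

def flac_dir_to_wav(src_dir, dst_dir):
    """Convert every FLAC file under src_dir to WAV in dst_dir via ffmpeg."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for flac in Path(src_dir).rglob("*.flac"):
        wav = dst / (flac.stem + ".wav")
        # -y overwrites any existing output file
        subprocess.run(["ffmpeg", "-y", "-i", str(flac), str(wav)], check=True)

flac_dir_to_wav("data/ko/flac", "data/ko/wav")   # placeholder directories
```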
We measured the total duration of data we had for each language and then limited each language's data to 10 hours, as that was the minimum total duration of audio files found across all languages. For some languages, we had up to 40 hours of data, but the reason for using an equal duration of audio files for each language was to reduce any skewing towards certain languages when training. This was all done using librosa, a python library for audio analysis.
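A small sketch of the duration-based capping with librosa follows; the 10-hour budget comes from the text, while the file-selection order and directory layout are simplifying assumptions:

```python
import librosa
from pathlib import Path

def cap_language(wav_dir, max_hours=10):
    """Return the subset of files whose cumulative duration stays within the budget."""
    kept, total = [], 0.0
    for wav in sorted(Path(wav_dir).glob("*.wav")):
        y, sr = librosa.load(wav, sr=None)
        dur = len(y) / sr
        if total + dur > max_hours * 3600:
            break
        kept.append(wav)
        total += dur
    return kept, total / 3600

files, hours = cap_language("data/ru/wav")   # placeholder directory
print(f"kept {len(files)} files, {hours:.1f} h")
```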
Then, the data in each language was split with an 80:20 train and test split. It's worth noting that the split was made based on the total duration of the audio files, not the number of audio files present. From here, several acoustic data files were generated. First, 'wav.scp' files were created for each data split of each language which contain data that maps a WAV file's unique identifier to its file path. Then, a 'text' file was created for each data split of each language which contains a map of every audio file to its text transcription. We then created the 'corpus.txt' files for each language which contained every single utterance transcription from the audio files of said language. Finally other extraneous files were created such as 'utt2spk' which, for our use-case, mapped a WAV file's unique identifier to itself since our dataset and problem statement does not involve individual speakers.
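The acoustic data files follow Kaldi's plain-text conventions; the sketch below shows how such a directory could be generated, with utterance IDs taken from file names and a precomputed transcript lookup (both simplifying assumptions):

```python
from pathlib import Path

def write_kaldi_dir(wav_files, transcripts, out_dir):
    """wav_files: list of Paths; transcripts: dict mapping utterance id -> text."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / "wav.scp", "w") as scp, \
         open(out / "text", "w") as txt, \
         open(out / "utt2spk", "w") as u2s:
        for wav in wav_files:
            utt_id = wav.stem
            scp.write(f"{utt_id} {wav.resolve()}\n")        # utterance id -> audio path
            txt.write(f"{utt_id} {transcripts[utt_id]}\n")  # utterance id -> transcription
            u2s.write(f"{utt_id} {utt_id}\n")               # no speaker info: map utterance to itself
    with open(out / "corpus.txt", "w") as corpus:           # every utterance transcription
        for wav in wav_files:
            corpus.write(transcripts[wav.stem] + "\n")
```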
Then, we created several files related to language data as follows. For each language and their transcriptions of the audio files, the 1000 most frequent words were computed and saved. These 1000 most frequent words represent the dictionary of a language and the most significant identifiers for that language. We used these to create a 'lexicon.txt' for each language, which contains all of the 1000 most frequent words with their phone transcriptions. Since we could not find the necessary tools to convert words into phones for all 9 languages, we resorted to the solution of using each individual letter as a phone, also known as graphemic transcription. One point of concern came when working with Mandarin, in which each character is a pictorial representation of a word and therefore has no concept of letters. Thus, the solution was to first convert Mandarin into Pinyin, which is the romanticized text version of Mandarin. From here, it was easy to split a Pinyin word into individual letters. With all of the aforementioned phones, we combined them with the silence phones to create 'lexicon.txt'. Then, it was straightforward to create individual 'nonsilence phones.txt' and 'silence phones.txt' files which contain the non-silent phones and silent phones respectively. The silent phones are 'sil' and 'spn'. Finally, the 'optional silence.txt' file was created with just the phone 'sil'.
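The graphemic lexicon can be built from the 1,000 most frequent words of each language by treating individual letters as phones; the sketch below is a simplification (whitespace tokenization and the silence entries follow common Kaldi conventions and are assumptions here, and Mandarin would first need a Pinyin conversion, e.g. with a library such as pypinyin):

```python
from collections import Counter

def build_lexicon(corpus_path, lexicon_path, top_k=1000):
    words = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            words.update(line.strip().split())
    with open(lexicon_path, "w", encoding="utf-8") as lex:
        lex.write("!SIL sil\n<UNK> spn\n")          # silence / spoken-noise entries (assumed naming)
        for word, _ in words.most_common(top_k):
            phones = " ".join(word)                 # graphemic transcription: each letter is a phone
            lex.write(f"{word} {phones}\n")

build_lexicon("data/tr/corpus.txt", "data/tr/lexicon.txt")   # placeholder paths
```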
Feature Extraction
From here, data preparation was completed and we moved to feature extraction. We used the Kaldi script 'make mfcc pitch.sh' to extract the Mel-frequency cepstral coefficients (MFCC) [1] features and pitch data for each audio file. Then, the Mel-spectral features [1] were extracted from the audio files using the python library librosa. From here, we used another python library called Kaldiio to read the ark files that contain the MFCC and pitch data. For the final feature embeddings, all aforementioned features were concatenated and passed through Linear Discriminant Analysis (LDA) [1] in order to perform both dimensionality and correlation reduction of the features.
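A sketch of assembling the final embeddings is shown below: MFCC+pitch matrices are read from the Kaldi archives with kaldiio, concatenated with the librosa Mel-spectral features, and passed through the LDA reduction. Archive paths, the storage format of the Mel features, and the frame alignment are simplified assumptions:

```python
import numpy as np
import kaldiio
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = [], []
for lang, ark in [("en", "feats/en/mfcc_pitch.ark"), ("ko", "feats/ko/mfcc_pitch.ark")]:
    for utt_id, mfcc_pitch in kaldiio.load_ark(ark):        # (utterance id, frames x dims matrix)
        mel = np.load(f"feats/{lang}/mel/{utt_id}.npy")     # Mel-spectral features from librosa
        n = min(len(mfcc_pitch), len(mel))                  # align frame counts
        X.append(np.hstack([mfcc_pitch[:n], mel[:n]]))      # concatenate per-frame features
        y.append(np.full(n, lang))

lda = LinearDiscriminantAnalysis().fit(np.vstack(X), np.concatenate(y))
embeddings = lda.transform(np.vstack(X))                    # reduced, decorrelated feature embeddings
```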
The Process: Rival Models
Convolutional Recurrent Neural Network with Attention
Our first model is a Convolutional Recurrent Neural Network (CRNN) with attention. CRNNs are essentially a Convolutional Neural Network (CNN) followed by a Recurrent Neural Network (RNN). Our CRNN approach uses a 2-dimensional CNN and then an RNN built with Bidirectional Long short-term memory (BiLSTM) layers. Specifically, our model contains two 2-dimensional convolution layers both with kernel size of 2 and whose outputs are flattened and concatenated to the original feature embeddings. This is then all passed through two BiLSTM layers both with 256 recurrent layers and 256 hidden features. Then, we add attention which allows the model to focus on the relationship between different discriminative features. The attention mechanism is encapsulated as a single layer that comes after the two BiLSTM layers and has output size of 7 for the 7 in-set languages. [16] We then add a softmax output layer that would typically be used for language identification since it allows us to make a language prediction with highest probability. Finally, to adapt this model to the open-set language identification problem, a threshold is used so that if all of the probabilities outputted by the softmax layer are under this threshold, the input is deemed out of the set and is rejected.
Other details related to this model include the loss function and optimizer, for which we used cross-entropy loss and stochastic gradient descent respectively. This model with the aforementioned loss function and optimizer was trained for 12 epochs.
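The training setup itself is a standard supervised loop; a minimal sketch is below (the learning rate, device handling, and batch format are illustrative assumptions). The same loss/optimizer setup is reused for the second model described later.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=12, lr=0.01, device="cpu"):
    """Minimal training loop: cross-entropy loss + SGD, as described in the text.

    `loader` is assumed to yield (features, language_id) batches. Note that
    nn.CrossEntropyLoss expects unnormalized logits, so any final softmax
    in the model should be dropped when training this way.
    """
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(epochs):
        total = 0.0
        for feats, labels in loader:
            feats, labels = feats.to(device), labels.to(device)
            optimizer.zero_grad()
            scores = model(feats)          # per-language scores
            loss = criterion(scores, labels)
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total / max(len(loader), 1):.4f}")
```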
See Figure 2 for a diagram of the CRNN + attention architecture.
Time Delay Neural Network
Our second model is a Time Delay Neural Network (TDNN). TDNNs are feed-forward networks that can model long-term context information. They are especially good at modeling context and classifying patterns without any explicit segmentation of the input data. The key hyper-parameters of each layer in a TDNN are the context size, dilation, and stride, which describe the number of contiguous frames to observe, the number of non-contiguous frames to observe, and how many frames to skip per step, respectively. Specifically, our model contains six layers of sizes 512, 512, 512, 512, 1500, and 7, with context sizes of 5, 3, 3, 1, 1, and 1 respectively and dilations of 1, 2, 3, 1, 1, and 1 respectively. All layers have a stride of 1 and use the ReLU activation function [17]. We then add a final softmax output layer, as would typically be used for language identification, since it lets us predict the language with the highest probability. Finally, to adapt this model to the open-set language identification problem, a threshold is applied: if all of the probabilities output by the softmax layer fall below this threshold, the input is deemed out of the set and is rejected.
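The sketch below expresses this TDNN with dilated 1-D convolutions. The layer sizes, context sizes, and dilations follow the description above; the pooling over remaining frames and other low-level details are assumptions of the sketch, and the referenced implementation [17] may differ.

```python
import torch
import torch.nn as nn

class TDNN(nn.Module):
    """Sketch of the described TDNN using dilated 1-D convolutions."""
    def __init__(self, feat_dim, num_langs=7):
        super().__init__()
        sizes     = [512, 512, 512, 512, 1500, num_langs]
        contexts  = [5, 3, 3, 1, 1, 1]       # number of frames seen by each layer
        dilations = [1, 2, 3, 1, 1, 1]       # spacing between the observed frames
        layers, in_dim = [], feat_dim
        for i, (out_dim, ctx, dil) in enumerate(zip(sizes, contexts, dilations)):
            layers.append(nn.Conv1d(in_dim, out_dim, kernel_size=ctx, dilation=dil))
            if i < len(sizes) - 1:
                layers.append(nn.ReLU())
            in_dim = out_dim
        self.net = nn.Sequential(*layers)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        h = self.net(x.transpose(1, 2))   # convolve over time: (batch, langs, time')
        h = h.mean(dim=-1)                # average the remaining frames
        return torch.softmax(h, dim=-1)   # per-language probabilities
```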
Other details related to this model include the loss function and optimizer, for which we used cross-entropy loss and stochastic gradient descent respectively. This model with the aforementioned loss function and optimizer was trained for 12 epochs.
See Figure 3 for a diagram of the TDNN architecture.
Results
Convolutional Recurrent Neural Network with Attention
The training data used for our CRNN with attention included 80% of the total duration of audio files from each of the 7 in-set languages: English, Spanish, French, Korean, Mandarin, Russian, and Turkish. After training the model for 12 epochs on this training data and before experimenting with any thresholds, we first tested it on the remaining 20% of the total duration of audio files from just the 7 in-set languages in order to gauge how well our model performs at the closed-set language identification task. The CRNN with attention achieved an in-set accuracy of 85%; that is, the model correctly identified the language of a given in-set audio input 85% of the time.
We then incorporated a threshold as the final layer of the model and tested the CRNN with attention on the remaining 20% of the total duration of audio files from the 7 in-set languages as well as the 2 out-of-set languages: Javanese and Bengali. We measured three accuracies: overall accuracy, in-set accuracy, and out-of-set accuracy. The overall accuracy is how often our model was able to take an input and correctly identify its language or reject it if it was not one of the 7 in-set languages. The in-set accuracy describes how often our model correctly identified the language of an in-set audio input. The out-of-set accuracy describes how often it correctly rejected an out-of-set audio input.
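These three metrics are straightforward to compute once predictions and gold labels are encoded with an explicit rejection id; a small sketch follows (the -1 convention for rejected inputs is an assumption of the sketch).

```python
import numpy as np

OUT_OF_SET = -1   # label used for rejected / out-of-set inputs

def open_set_accuracies(y_true, y_pred):
    """Compute overall, in-set, and out-of-set accuracy as defined above.

    `y_true` uses language ids 0..6 for in-set inputs and OUT_OF_SET for
    Javanese/Bengali inputs; `y_pred` uses OUT_OF_SET when the model rejects.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    in_mask = y_true != OUT_OF_SET
    overall = float((y_true == y_pred).mean())
    in_set = float((y_true[in_mask] == y_pred[in_mask]).mean())
    out_of_set = float((y_true[~in_mask] == y_pred[~in_mask]).mean())
    return overall, in_set, out_of_set
```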
We found that the overall accuracy of our model reached a maximum of 79.1% across the various thresholds we experimented with. With too small a threshold, the overall accuracy was about 70%, but as the threshold was increased slightly, the overall accuracy began to rise. At thresholds around 0.7 to 0.75, the overall accuracy approached its maximum. However, when the threshold was set too high, the overall accuracy dropped drastically, since too high a threshold yields perfect out-of-set accuracy but terrible in-set accuracy. Using a threshold of 0.7, the CRNN with attention achieves the maximal overall accuracy of 79.1%, along with in-set and out-of-set accuracies of 80.8% and 76.6% respectively. See Figure 4 for a plot of the overall accuracy for various thresholds and Table 1 for the exact data. Compared to the state of the art for the closed-set task (the 2021 paper from Mandal et al. [3]), which uses its own CRNN with attention model and reports an accuracy of 98% (91% in noisy environments), our CRNN with attention model has an in-set language identification accuracy of 85%. CNNs, which are a core part of our CRNN model, are mainly used for image classification. Since a CNN models local context connectivity, it is generally useful for image tasks where a sliding filter is practical. In our feature embedding, which uses MFCC and Mel-spectral features [1], there is little connectivity between dimensions, which suggests that our 2-dimensional CNN may not have been optimal. However, since there is still local context connectivity across time steps in our data, a 1-dimensional CNN at the start of our CRNN with attention implementation may have been a better choice.
Time Delay Neural Network
The training data used for our TDNN was identical to the training data of the first model. After training the model for 12 epochs on this training data and before experimenting with any thresholds, we first tested it on the remaining 20% of the total duration of audio files from just the 7 in-set languages in order to gauge how well our model performs at the closed-set language identification task. The TDNN achieved an in-set accuracy of 95%; that is, the TDNN correctly identified the language of a given in-set audio input 95% of the time.
While this assuredly beats our CRNN + attention model, when compared to the state-of-the-art CRNN + attention with its accuracy of 98%, our TDNN model, with an in-set language identification accuracy of 95%, still falls just a bit short. It is worth noting, however, that this is with training and testing on only 70 hours of speech data, which gives us hope that the TDNN could actually rival state-of-the-art CRNNs.
We then incorporated a threshold as the final layer of the model and tested the TDNN on the remaining 20% of the total duration of audio files from the 7 in-set languages as well as the 2 out-of-set languages: Javanese and Bengali. First, we measured our model's overall accuracy. We found that the overall accuracy of our model reached a maximum of 83% across the various thresholds we experimented with. With too small a threshold, the overall accuracy was about 60%, but as the threshold was increased slightly, the overall accuracy began to rise. At thresholds between 0.7 and 0.8, the overall accuracy approached its maximum. However, when the threshold was set too high, the overall accuracy dropped, since too high a threshold yields perfect out-of-set accuracy but terrible in-set accuracy. See Figure 5 for a plot of the overall accuracy for various thresholds and Table 2 for the exact data.
Taking a few of the best threshold results from the previous step, we took a closer look at the in-set and out-of-set accuracies individually. See Figure 6 for a breakdown of the accuracy of the TDNN and Table 3 for the exact data. At a threshold of about 0.8, we achieve the maximal overall accuracy of 83.3% and in-set and out-of-set accuracies of 85.2% and 80.4% respectively.
Conclusion and Future Work
It is hard to come to any significant conclusions with respect to our CRNN + attention model, as we were unable to reproduce state-of-the-art performance on the initial closed-set task, meaning our open-set results are likely also markedly worse than what the best models could theoretically produce. This, however, is not to say that our labor was without fruit. What we did learn is that even with a modestly sized dataset, a TDNN is still able to produce respectable closed-set language identification results, and it has the potential (perhaps simply with more data) to rival the modern CRNNs that seem to be dominating the field at the moment. Essentially, TDNNs should not be counted out as an option when architecting modern solutions to language identification tasks, and they could yet be the key to breaking existing barriers in the field. Furthermore, when incorporating thresholds, the TDNN was still able to hold its own on in-set data, even when pushing towards high out-of-set detection accuracy.
There is still much to be done with these experiments, though. First and foremost, a more diverse array of thresholding and detection methods needs to be tested. Modeling a curve and picking the best static threshold is surely not the optimal solution, and dynamic approaches (such as per-language thresholds) could yield significantly improved results, perhaps allowing us to retain an in-set accuracy much closer to our initial 95%. Beyond this, one may notice that we did not get a chance to experiment further with LDA transformation sets or with treating out-of-set detection as a verification task. This will be an important, if not necessary, step should one choose to continue on this path of research, not only because of potential accuracy gains, but also because of the functional advantages that these deeper embeddings may provide, as mentioned in our initial reasoning.
Additionally, more can be done with respect to feature abstraction and embedding. Further experimentation with the optimal incorporation of Mel-spectral features, as well as the initially planned i-vectors, could go a long way toward increasing even our models' closed-set accuracies. Obviously there is much more work to be done with our CRNN + attention model as well in getting it up to modern standards, though our first course of action would likely be to attempt to break the state of the art with our highly performant TDNN, and from there begin experimenting with other open-set detection solutions to build something practically meaningful. Overall, though, we have already gained a great deal of knowledge and experience from this experiment, and we look forward to taking it further.
Figure 1: Open-Set LID Exploration Architecture
Figure 2: CRNN + Attention Architecture
Figure 3: TDNN Architecture
Figure 4: CRNN + Attention Accuracy (%) Breakdown
Figure 5: TDNN Overall Accuracy (%)
Figure 6: TDNN Accuracy (%) Breakdown
Table 2: Overall Accuracy of TDNN vs Threshold

Threshold   Overall Accuracy (%)
0.1         57.0
0.15        57.0
0.2         57.0
0.25        57.2
0.3         58.2
0.35        59.8
0.4         62.1
0.45        64.6
0.5         67.5
0.55        69.9
0.6         72.9
0.65        75.5
0.7         78.5
0.75        81.6
0.8         83.3
0.85        72.3
0.9         54.2
References
[1] H. Beigi, Fundamentals of Speaker Recognition. New York: Springer, 2011, ISBN: 978-0-387-77591-3, http://www.fundamentalsofspeakerrecognition.org.
[2] C. Bartz, T. Herold, H. Yang, and C. Meinel, "Language identification using deep convolutional recurrent neural networks," arXiv preprint arXiv:1708.04811 [cs.CV], Aug. 16, 2017.
[3] A. Mandal, S. Pal, I. Dutta, M. Bhattacharya, and S. K. Naskar, "Is attention always needed? A case study on language identification from speech," arXiv preprint arXiv:2110.03427 [cs.LG], Oct. 5, 2021.
[4] A. Baevski, H. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," arXiv preprint arXiv:2006.11477 [cs.SL], Oct. 22, 2020.
[5] M. Toftrup, S. Asger Sørensen, M. R. Ciosici, and I. Assent, "A reproduction of Apple's bi-directional LSTM models for language identification in short strings," in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, Apr. 2021, pp. 36-42. Available: https://aclanthology.org/2021.eacl-srw.6
[6] I. Rebai, Y. BenAyed, and W. Mahdi, "Improving of open-set language identification by using deep SVM and thresholding functions," in 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), 2017, pp. 796-802.
[7] J. S. Chung, A. Nagrani, and A. Zisserman, "Voxceleb2: Deep speaker recognition," in InterSpeech, ISCA, Sep. 2-6, 2018. Available: http://dx.doi.org/10.21437/Interspeech.2018-1929
[8] R. Kolobov, O. Okhapkina, O. Omelchishina, A. Platunov, R. Bedyakin, V. Moshkin, D. Menshikov, and N. Mikhaylovskiy, "Mediaspeech: Multilingual ASR benchmark and dataset," arXiv preprint arXiv:2103.16193 [eess.AS], Mar. 30, 2021.
[9] Y. Choi and B. Lee, "Pansori: ASR corpus generation from open online video contents," in Proceedings of the IEEE Seoul Section Student Paper Contest 2018, Nov. 2018, pp. 117-121.
[10] Primewords Information Technology Co., "Primewords Chinese corpus set 1," 2018, https://www.primewords.cn.
[11] Surfingtech, "Free ST American English corpus," 2018, https://www.surfing.ai.
[12] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Russian LibriSpeech (RuLS)," https://www.openslr.org/96/, 2015.
[13] O. Kjartansson, S. Sarin, K. Pipatsrisawat, M. Jansche, and L. Ha, "Crowd-sourced speech corpora for Javanese, Sundanese, Sinhala, Nepali, and Bangladeshi Bengali," Gurugram, India, pp. 52-55, Aug. 2018. Available: http://dx.doi.org/10.21437/SLTU.2018-11
[14] N. Dehak, P. Kenny, R. Dehak, O. Glembek, P. Dumouchel, L. Burget, V. Hubeika, and F. Castaldo, "Support vector machines and joint factor analysis for speaker verification," Apr. 2009, pp. 4237-4240.
[15] D. Povey, "OpenSLR," https://www.openslr.org.
[16] Holmeyoung, "Convolutional recurrent neural network + CTC loss," https://github.com/Holmeyoung/crnn-pytorch, 2019.
[17] cvqluu, "TDNN," https://github.com/cvqluu/TDNN, 2019.
| [
"https://github.com/Holmeyoung/crnn-pytorch,",
"https://github.com/cvqluu/TDNN,"
] |
[
"Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning",
"Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning"
] | [
"Prasetya Ajie Utama \nIryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab\nTechnische Universität Darmstadt ♣ Hugging Face\nBrooklynUSA\n",
"Nafise Sadat Moosavi \nIryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab\nTechnische Universität Darmstadt ♣ Hugging Face\nBrooklynUSA\n",
"Victor Sanh \nIryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab\nTechnische Universität Darmstadt ♣ Hugging Face\nBrooklynUSA\n"
] | [
"Iryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab\nTechnische Universität Darmstadt ♣ Hugging Face\nBrooklynUSA",
"Iryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab\nTechnische Universität Darmstadt ♣ Hugging Face\nBrooklynUSA",
"Iryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab\nTechnische Universität Darmstadt ♣ Hugging Face\nBrooklynUSA"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Recent prompt-based approaches allow pretrained language models to achieve strong performances on few-shot finetuning by reformulating downstream tasks as a language modeling problem. In this work, we demonstrate that, despite its advantages on low data regimes, finetuned prompt-based models for sentence pair classification tasks still suffer from a common pitfall of adopting inference heuristics based on lexical overlap, e.g., models incorrectly assuming a sentence pair is of the same meaning because they consist of the same set of words. Interestingly, we find that this particular inference heuristic is significantly less present in the zero-shot evaluation of the prompt-based model, indicating how finetuning can be destructive to useful knowledge learned during the pretraining. We then show that adding a regularization that preserves pretraining weights is effective in mitigating this destructive tendency of few-shot finetuning. Our evaluation on three datasets demonstrates promising improvements on the three corresponding challenge datasets used to diagnose the inference heuristics. 1 | 10.18653/v1/2021.emnlp-main.713 | [
"https://www.aclanthology.org/2021.emnlp-main.713.pdf"
] | 237,452,662 | 2109.04144 | e61160191a843fc2157c1e4eea0f45e3712a5f45 |
Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
Association for Computational LinguisticsCopyright Association for Computational LinguisticsNovember 7-11, 2021. 2021
Prasetya Ajie Utama
Iryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab
Technische Universität Darmstadt ♣ Hugging Face
BrooklynUSA
Nafise Sadat Moosavi
Iryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab
Technische Universität Darmstadt ♣ Hugging Face
BrooklynUSA
Victor Sanh
Iryna Gurevych ‡ † Research Training Group AIPHES ‡ UKP Lab
Technische Universität Darmstadt ♣ Hugging Face
BrooklynUSA
Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
the 2021 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsNovember 7-11, 2021. 2021
Recent prompt-based approaches allow pretrained language models to achieve strong performances on few-shot finetuning by reformulating downstream tasks as a language modeling problem. In this work, we demonstrate that, despite its advantages on low data regimes, finetuned prompt-based models for sentence pair classification tasks still suffer from a common pitfall of adopting inference heuristics based on lexical overlap, e.g., models incorrectly assuming a sentence pair is of the same meaning because they consist of the same set of words. Interestingly, we find that this particular inference heuristic is significantly less present in the zero-shot evaluation of the prompt-based model, indicating how finetuning can be destructive to useful knowledge learned during the pretraining. We then show that adding a regularization that preserves pretraining weights is effective in mitigating this destructive tendency of few-shot finetuning. Our evaluation on three datasets demonstrates promising improvements on the three corresponding challenge datasets used to diagnose the inference heuristics. 1
Introduction
Prompt-based finetuning has emerged as a promising paradigm for adapting Pretrained Language Models (PLMs) to downstream tasks with a limited number of labeled examples (Schick and Schütze, 2021a; Radford et al., 2019). This approach reformulates downstream task instances as a language modeling input, allowing PLMs to make non-trivial task-specific predictions even in zero-shot settings. This, in turn, provides a good initialization point for data-efficient finetuning (Gao et al., 2021), resulting in a strong advantage in low data regimes where the standard finetuning paradigm struggles. However, the success of this prompting approach has only been shown using common held-out evaluations, which often conceal certain undesirable behaviors of models (Niven and Kao, 2019).
One such behavior commonly reported in downstream models is characterized by their preference to use surface features over general linguistic information (Warstadt et al., 2020). In the Natural Language Inference (NLI) task, McCoy et al. (2019) documented that models preferentially use the lexical overlap feature between sentence pairs to blindly predict that one sentence entails the other. Despite models' high in-distribution performance, they often fail on counterexamples of this inference heuristic, e.g., they predict that "the cat chased the mouse" entails "the mouse chased the cat".
At the same time, there is mounting evidence that pre-training on large text corpora extracts rich linguistic information (Hewitt and Manning, 2019; Tenney et al., 2019). However, recent studies show that standard finetuned models often overlook this information in the presence of lexical overlap (Nie et al., 2019; Dasgupta et al., 2018). We therefore question whether direct adaptation of PLMs using prompts can better transfer the use of this information during finetuning. We investigate this question by systematically studying the heuristics in a prompt-based model finetuned on three datasets under varying data regimes. Our intriguing results reveal that: (i) zero-shot prompt-based models are more robust against the lexical overlap heuristic during inference, indicated by their high performance on the corresponding challenge datasets; (ii) however, prompt-based finetuned models quickly adopt this heuristic as they learn from more labeled data, which is indicated by the gradual degradation of their performance on the challenge datasets.
We then show that regularizing prompt-based finetuning, by penalizing updates that move the weights too far from their original pretrained values, is an effective approach to improve in-distribution performance on target datasets while mitigating the adoption of inference heuristics. Overall, our work suggests that while prompt-based finetuning has achieved impressive results on standard benchmarks, it can have a negative impact with respect to inference heuristics, which in turn underlines the importance of a more thorough evaluation setup to ensure meaningful progress.
Inference Heuristics in Prompt-based Finetuning
Prompt-based PLM Finetuning In this work, we focus on sentence pair classification tasks, where the goal is to predict the semantic relation y of an input pair x = (s_1, s_2). In a standard finetuning setting, s_1 and s_2 are concatenated along with a special token [CLS], whose embedding is used as input to a newly initialized classifier head.
The prompt-based approach, on the other hand, reformulates pair x as a masked language model input using a pre-defined template and word-to-label mapping. For instance, Schick and Schütze (2021a) formulate a natural language inference instance (s_1, s_2, y) as:
[CLS] s_1? [MASK], s_2 [SEP]
with the following mapping for the masked token: "yes" → "entailment", "maybe" → "neutral", and "no" → "contradiction". The probabilities assigned by the PLM to the label words at the [MASK] token can then be directly used to make task-specific predictions, allowing the PLM to perform in a zero-shot setting. Following Gao et al. (2021), we further finetune the prompt-based model on the available labeled examples for each task. Note that this procedure finetunes only the existing pre-trained weights and does not introduce new parameters.
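A minimal sketch of this reformulation is shown below. The mask-token convention and the helper that supplies the masked-token probabilities are assumptions of the sketch (obtaining those probabilities from an actual PLM is omitted here).

```python
# Label words ("verbalizer") for NLI, following the template described above.
LABEL_WORDS = {"yes": "entailment", "maybe": "neutral", "no": "contradiction"}

def build_prompt(premise: str, hypothesis: str, mask_token: str = "<mask>") -> str:
    # RoBERTa-style mask token is assumed; BERT-style models would use [MASK]
    premise = premise.rstrip(".?!")
    hypothesis = hypothesis[0].lower() + hypothesis[1:]
    return f"{premise}? {mask_token}, {hypothesis}"

def predict_label(mask_word_probs: dict) -> str:
    """`mask_word_probs` maps candidate label words to the MLM's probability
    at the masked position; the label of the most probable word is returned."""
    best_word = max(LABEL_WORDS, key=lambda w: mask_word_probs.get(w, 0.0))
    return LABEL_WORDS[best_word]

# Example:
# build_prompt("The actors that danced saw the author.", "The actors saw the author.")
# -> "The actors that danced saw the author? <mask>, the actors saw the author."
```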
Task and Datasets
We evaluate on three English language datasets included in the GLUE benchmark (Wang et al., 2018) for which there are challenge datasets to evaluate the lexical overlap heuristic: MNLI (Williams et al., 2018), SNLI (Bowman et al., 2015), and Quora Question Pairs (QQP). In MNLI and SNLI, the task is to determine whether premise sentence s 1 entails, contradicts, or is neutral to the hypothesis sentence s 2 . In QQP, s 1 and s 2 are a pair of questions that are labeled as either duplicate or non-duplicate.
Original Input
Premise: The actors that danced saw the author.
Hypothesis: The actors saw the author.
Label: entailment (support)

Premise: The managers near the scientist resigned.
Hypothesis: The scientist resigned.
Label: non-entailment (against)

Reformulated Input
Premise: The actors that danced saw the author? [MASK], the actors saw the author.
Label word: Yes

Premise: The managers near the scientist resigned? [MASK], the scientist resigned.
Label word: No / Maybe

Researchers constructed corresponding challenge sets for the above datasets, which are designed to contain examples that are against the heuristics, i.e., the examples exhibit word overlap between the two input sentences but are labeled as non-entailment for NLI or non-duplicate for QQP. We evaluate each few-shot model against its corresponding challenge dataset. Namely, we evaluate models trained on MNLI against the entailment and non-entailment subsets of the HANS dataset (McCoy et al., 2019), which are further categorized into lexical overlap (lex.), subsequence (subseq.), and constituent (const.) subsets; SNLI models against the long and short subsets of the Scramble Test challenge set (Dasgupta et al., 2018); and QQP models against the PAWS dataset (Zhang et al., 2019). We illustrate challenge dataset examples and their reformulation as prompts in Table 1.
Model and Finetuning Our training and standard evaluation setup closely follows Gao et al. (2021), measuring finetuning performance across five different randomly sampled training sets of size K to account for finetuning instability on small datasets (Dodge et al., 2020; Mosbach et al., 2021). We subsample the data five times for each dataset and each data size K, where K ∈ {16, 32, 64, 128, 256, 512}. Note that K indicates the number of examples per label. We use the original development sets of each training dataset for testing in-distribution performance. We perform all experiments using the RoBERTa-large model (Liu et al., 2019b).
Inference heuristics across data regimes We show the results of prompt-based finetuning across different K in Figure 1. For the in-distribution evaluations (leftmost of each plot), the prompt-based models finetuned on MNLI, SNLI, and QQP improve rapidly with more training data before saturating at K = 512. In contrast to the in-distribution results, we observe a different trajectory of performance on the three challenge datasets. On the Scramble and HANS sets, prompt-based models show non-trivial zero-shot performance (K = 0) that is above their in-distribution counterparts. However, as more data becomes available, the models exhibit a stronger indication of adopting heuristics. Namely, the performance on the subset of examples that supports the heuristics increases, while the performance on cases that are against the heuristics decreases. This pattern is most pronounced on the lexical overlap subset of HANS, where the median accuracy on the non-entailment subset drops to below 10% while the entailment performance reaches 100%. The results suggest that few-shot finetuning can be destructive to the initial ability of the prompt-based classifier to ignore surface features like lexical overlap. Finetuning appears to over-adjust model parameters to the small target data, which contain very few to no counter-examples to the heuristics (Min et al., 2020; Lovering et al., 2021).
Avoiding Inference Heuristics
Here we look to mitigate the adverse impact of finetuning by viewing the issue as an instance of catastrophic forgetting (French, 1999), which is characterized by the loss of performance on the original dataset after subsequent finetuning on new data. We then propose a regularized prompt-based finetuning based on the Elastic Weight Consolidation (EWC) method (Kirkpatrick et al., 2017), which penalizes updates to weights crucial for the original zero-shot performance. EWC identifies these weights using the empirical Fisher matrix (Martens, 2020), which requires samples of the original dataset. To avoid the need to access the pretraining data, we follow Chen et al. (2020), who assume stronger independence between the Fisher information and the corresponding weights. The penalty term is then akin to the L2 loss between the updated weights θ_i and the original weights θ_i*, resulting in the following overall loss:
$$\mathcal{L}_{\mathrm{rFT}} = \alpha \mathcal{L}_{\mathrm{FT}} + (1 - \alpha)\,\frac{\lambda}{2}\sum_i \left(\theta_i - \theta_i^{*}\right)^2$$
where L_FT is the standard cross-entropy loss, λ is a quadratic penalty coefficient, and α is a coefficient that linearly combines the two terms. We use the RecAdam implementation (Chen et al., 2020) for this loss, which also applies an annealing mechanism that gradually upweights the standard loss L_FT toward the end of training.

Baselines We compare regularized finetuning with another method that also minimally updates the pretrained weights. We consider simply fixing the first n layers of the pretrained model, where the n layers (including the token embeddings) are frozen and only the weights of the upper layers and the LM head are updated throughout finetuning. In the evaluation, we use n ∈ {6, 12, 18}. We refer to these baselines as FT-fixn.
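A minimal sketch of the weight-anchoring objective described above is given below. It only illustrates the loss itself; the RecAdam implementation folds this penalty (plus the annealing of α) into the optimizer, which this simplified version does not reproduce.

```python
import torch

def anchored_loss(model, task_loss, pretrained_params, alpha=0.5, lam=5000.0):
    """Sketch of L_rFT = alpha * L_FT + (1 - alpha) * (lam / 2) * ||theta - theta*||^2.

    `pretrained_params` is a frozen copy of the weights taken before finetuning.
    """
    penalty = sum(((p - p0) ** 2).sum()
                  for p, p0 in zip(model.parameters(), pretrained_params))
    return alpha * task_loss + (1.0 - alpha) * 0.5 * lam * penalty

# Before finetuning, keep a detached copy of the pretrained weights:
# pretrained_params = [p.detach().clone() for p in model.parameters()]
```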
Results
We evaluate all the considered finetuning strategies by taking their median performance after finetuning on 512 examples (per label) and comparing it with the original zero-shot performance. We report the results in Table 2, which also includes the results of standard classifier head finetuning (last row). We observe the following: (1) Freezing layers yields mixed challenge set results, e.g., FT-fix18 improves over vanilla prompt-based finetuning on HANS and PAWS, but degrades Scramble and all in-distribution performances;
(2) The L2 regularization strategy, rFT, achieves consistent improvements on the challenge sets while costing only a small drop in the corresponding in-distribution performance, e.g., +6pp, +8pp, and +5pp on HANS, PAWS, and Scramble, respectively;
(3) Although vanilla prompt-based finetuning performs relatively poorly, it still has an advantage over standard classifier head finetuning of +2.5pp, +2.0pp, and +1.0pp on the average scores of each in-distribution and challenge dataset pair. Additionally, Figure 2 shows rFT's improvement over vanilla prompt-based finetuning across data regimes on MNLI and HANS. We observe that the advantage of rFT is strongest on the lexical overlap subset, which initially shows the highest zero-shot performance. The results also suggest that the benefit of rFT peaks at mid data regimes (e.g., K = 32) before saturating as the training data size is increased further. We also note that our results are consistent when we evaluate alternative prompt templates or finetune for a varying number of epochs. The latter indicates that the adoption of inference heuristics is more likely attributable to the amount of training data than to the number of learning steps.
Related Work
Inference Heuristics Our work relates to a large body of literature on the problem of "bias" in the training datasets and the ramifications to the resulting models across various language understanding tasks (Niven and Kao, 2019;Poliak et al., 2018;Tsuchiya, 2018;Gururangan et al., 2020). Previ-ous work shows that the artifacts of data annotations result in spurious surface cues, which gives away the labels, allowing models to perform well without properly learning the intended task. For instance, models are shown to adopt heuristics based on the presence of certain indicative words or phrases in tasks such as reading comprehension ( Although the problem has been extensively studied, most works focus on models that are trained in standard settings where larger training datasets are available. Our work provides new insights in inference heuristics in models that are trained in zero-and few-shot settings. (2020) show that methods improving compute and memory efficiency using pruning and quantization may be at odds with robustness and fairness. They report that while performance on standard test sets is largely unchanged, the performance of efficient models on certain underrepresented subsets of the data is disproportionately reduced, suggesting the importance of a more comprehensive evaluation to estimate overall changes in performance.
Heuristics Mitigation
Conclusion
Our experiments shed light on the negative impact of low-resource finetuning on models' overall performance, which was previously obscured by the standard evaluation setup. The results indicate that while finetuning helps prompt-based models rapidly gain in-distribution improvements as more labeled data becomes available, it also gradually increases the models' reliance on surface heuristics, which we show to be less present in the zero-shot evaluation. We further demonstrate that applying a regularization that preserves pretrained weights during finetuning mitigates the adoption of heuristics while also maintaining high in-distribution performance.
References
Challenge datasets
We provide examples from each challenge dataset considered in our evaluation to illustrate sentence pairs that support or are against the heuristics. Table 4 shows examples for HANS, PAWS, and Scramble Test. Following McCoy et al. (2019), we obtain the probability for the non-entailment label by summing the probabilities assigned by models trained on MNLI to the neutral and contradiction labels. We use the same-type subset of Scramble Test (Dasgupta et al., 2018), which contains examples of both entailment (support) and contradiction (against) relations.
HANS details The HANS dataset is designed based on the insight that word overlap between premise and hypothesis in NLI datasets is spuriously correlated with the entailment label. HANS consists of examples in which relying on this correlation leads to an incorrect label, i.e., hypotheses are not entailed by their word-overlapping premises. HANS is split into three test cases: (a) Lexical overlap (e.g., "The doctor was paid by the actor" → "The doctor paid the actor"), (b) Subsequence (e.g., "The doctor near the actor danced" → "The actor danced"), and (c) Constituent (e.g., "If the artist slept, the actor ran" → "The artist slept"). Each subset contains both entailment and non-entailment examples that always exhibit word overlap.

Table 4 (excerpt): HANS (McCoy et al., 2019)
Premise: The artists avoided the senators that thanked the tourists.
Hypothesis: The artists avoided the senators.
Label: entailment (support)

Premise: The woman is more cheerful than the man.
Hypothesis: The woman is more cheerful than the man.
Label: entailment (support)

Premise: The woman is more cheerful than the man.
Hypothesis: The man is more cheerful than the woman.
Label: contradiction (against)
Hyperparameters Following Schick and Schütze (2021b,a), we use a fixed set of hyperparameters for all finetuning: a learning rate of 1e-5, a batch size of 8, and a maximum sequence length of 256.
Regularization implementation We use the RecAdam implementation by Chen et al. (2020) with the following hyperparameters. We set the quadratic penalty λ to 5000, and the linear combination factor α is set dynamically throughout the training according to a sigmoid function schedule, where α at step t is defined as:
$$\alpha = s(t) = \frac{1}{1 + \exp\left(-k \cdot (t - t_0)\right)}$$
where the parameter k regulates the rate of the sigmoid, and t_0 sets the point where s(t) goes above 0.5. We set k to 0.01 and t_0 to 0.6 of the total training steps.
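For illustration, the schedule can be written as a small helper like the one below; the function name is an assumption, and the defaults mirror the values stated above.

```python
import math

def annealed_alpha(step: int, total_steps: int, k: float = 0.01,
                   frac_t0: float = 0.6) -> float:
    """Sigmoid schedule for the mixing coefficient alpha, as described above:
    alpha(t) = 1 / (1 + exp(-k * (t - t0))), with t0 = frac_t0 * total_steps."""
    t0 = frac_t0 * total_steps
    return 1.0 / (1.0 + math.exp(-k * (step - t0)))

# Early in training alpha is near 0 (the weight-anchoring penalty dominates);
# after t0 it moves toward 1, so the standard finetuning loss is upweighted
# toward the end of training.
```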
B Additional Results
Standard CLS finetuning Previously, Gao et al. (2021) reported that the performance of standard non-prompt finetuning with an additional classifier head (CLS) can converge to that of the prompt-based counterpart after a certain amount of data, e.g., 512 examples. It is therefore interesting to compare both finetuning paradigms in terms of their heuristics-related behavior. Figure 4 shows the results of standard finetuning using a classifier head across varying data regimes on MNLI and the 3 subsets of HANS. We observe high instability of the results when only a small amount of data is available (e.g., K = 64). The learning trajectories are consistent across the HANS subsets, i.e., they start with near-random predictions in lower data regimes and improve from there. We observe that standard prompt-based finetuning still performs better than CLS finetuning, indicating that the prompt-based approach provides a good initialization for mitigating heuristics, and employing regularization during finetuning can improve the challenge dataset (out-of-distribution) performance further.
Impact of prompt templates A growing number of works propose varying prompt generation strategies to push the benefits of prompt-based predictions further (Gao et al., 2021; Schick et al., 2020). We therefore question whether different choices of templates would affect the model's behavior related to lexical overlap. We evaluate the 3 top-performing templates for MNLI that were obtained automatically by Gao et al. (2021) and show the results in Table 5. We observe similar behavior from the resulting models compared to the manual prompt counterpart, achieving a HANS average accuracy of around 62% in the zero-shot setting and below 55% after finetuning with 512 examples.
Impact of learning steps
We investigate the degradation of challenge dataset performance as a function of the amount of training data available during finetuning. However, adding more training examples while fixing the number of epochs introduces a confounding factor to our finding, namely the number of learning steps applied to the model's weights. To factor out the number of steps, we perform a similar evaluation with a fixed amount of training data and a varying number of training epochs.
On 32 examples per label, we finetune for 10, 20, 30, 40, and 50 epochs. Additionally, we finetune on 512 examples for 1 to 10 epochs to see if the difference in learning steps results in different behavior. We plot the results in Figure 3. We observe that both finetuning settings result in similar trajectories, i.e., models start to adopt heuristics immediately in early epochs and later stagnate even with an increasing number of learning steps. For instance, finetuning on 32 examples for the same number of training steps as finetuning on 512 examples for 1 epoch still results in higher overall HANS performance. We conclude that the amount of finetuning data plays a more significant role than the number of training steps. Intuitively, a larger training set is more likely to contain more examples that disproportionately support the heuristics; e.g., NLI pairs with lexical overlap are rarely of the non-entailment relation (McCoy et al., 2019).
Regularization across data regimes Figure 5 shows the improvement of L2 weight regularization over vanilla prompt-based finetuning on QQP and SNLI. Similar to the results on MNLI/HANS, the improvements are highest in mid data regimes, e.g., 32 examples per label.

Impact of pretrained model In addition to evaluating RoBERTa-large, we also evaluate other commonly used pretrained language models based on transformers, such as RoBERTa-base, BERT-base-uncased, and BERT-large-uncased. The results are shown in Table 6. We observe a similar pattern across PLMs, i.e., improved in-distribution scores come at the cost of degradation on the corresponding challenge datasets.
Figure 1: In-distribution (bold) vs. challenge datasets (italic) evaluation results of prompt-based finetuning across different data size K (x axis), where K = 0 indicates zero-shot evaluation. In all challenge sets, the overall zero-shot performance (both blue and green plots) degrades as the model is finetuned using more data.

Figure 2: Relative difference between median accuracy of prompt-based finetuning across data regimes (y axis) with and without regularization on MNLI and HANS.
Kaushik and Lipton, 2018), story cloze completion(Schwartz et al., 2017; Cai et al., 2017), fact verification (Schuster et al., 2019), argumentation mining (Niven and Kao, 2019), and natural language inference(Gururangan et al., 2020). Heuristics in models are often investigated using constructed "challenge datasets" consisting of counter-examples to the spurious cues, which mostly result in incorrect predictions(Jia and Liang, 2017; Glockner et al., 2018; Naik et al., 2018; McCoy et al., 2019).
Significant prior work attempt to mitigate the heuristics in models by improving the training dataset. Zellers et al. (2019); Sakaguchi et al. (2020) propose to reduce artifacts in the training data by using adversarial filtering methods; Nie et al. (2020); Kaushik et al. (2020) aim at a similar improvement via iterative data collection using human-in-the-loop; Min et al. (2020); Schuster et al. (2021); Liu et al. (2019a); Rozen et al. (2019) augment the training dataset with adversarial instances; and Moosavi et al. (2020a) augment each training instances with their semantic roles information.
Figure 3: Results of prompt-based finetuning with a varying number of epochs and a fixed amount of training examples. Top: finetuning on 32 examples per label for epochs ranging from 10 to 50. Bottom: finetuning on 512 examples per label for 1 to 9 epochs. Both results show an immediate drop in non-entailment HANS performance, which later stagnates even after more training steps.

Figure 4: Results of non-prompt finetuning.
Figure 5: Relative difference between median accuracy of prompt-based finetuning across data regimes (y axis) with and without regularization on QQP / PAWS and SNLI / Scramble Test.
Table 1: Top: input examples of the NLI task that support or are against the lexical overlap heuristics. Bottom: reformulated NLI instances as masked language model inputs with the expected label words.
                 MNLI (acc.)               QQP (F1)                  SNLI (acc.)
                 In-dist.  HANS   avg.     In-dist.  PAWS   avg.     In-dist.  Scramble  avg.
Prompt-based
  zero-shot #0     51.1    62.6   56.8       35.4    51.8   43.6       49.7     64.7     57.2
  FT #512          84.3    54.8   69.5       82.1    29.6   55.8       88.1     50.1     69.1
  rFT #512         82.7    60.2   71.5       81.5    37.1   59.3       87.6     55.4     71.5
  FT-fix18 #512    76.5    61.6   69.1       78.6    35.6   57.1       84.5     45.3     64.9
  FT-fix12 #512    83.5    54.3   68.9       81.9    35.3   57.1       87.1     50.5     68.8
  FT-fix6 #512     84.2    52.9   68.5       82.1    32.7   57.4       87.9     50.1     68.9
Classifier head
  FT #512          81.4    52.6   67.0       80.9    26.8   53.8       86.5     49.8     68.1

Table 2: Results of different strategies for finetuning the prompt-based model (using #k examples). Models are evaluated against the in-distribution set and corresponding challenge sets. The zero-shot row indicates prompting results before finetuning. The avg. columns report the average score on in-distribution and challenge datasets.
Complementary to this, recent work introduces various learning algorithms to avoid adopting heuristics, including by re-weighting (He et al., 2019; Karimi Mahabadi et al., 2020; Clark et al., 2020) or regularizing the confidence (Utama et al., 2020a; Du et al., 2021) on the training instances which exhibit certain biases. The type of bias can be identified automatically (Yaghoobzadeh et al., 2021; Utama et al., 2020b; Sanh et al., 2021; Clark et al., 2020) or by hand-crafted models designed based on prior knowledge about the bias. Our finding suggests that prompted zero-shot models are less reliant on heuristics when tested against examples containing the cues, and preserving this learned behavior is crucial to obtain more robust finetuned models.

Efficiency and Robustness Prompting formulation enables language models to learn efficiently from a small number of training examples, which in turn reduces the computational cost for training (Le Scao and Rush, 2021). The efficiency benefit from prompting is very relevant to the larger efforts towards sustainable and green NLP models (Moosavi et al., 2020b; Schwartz et al., 2020a), which encompass a flurry of techniques including knowledge distillation (Hinton et al., 2015; Sanh et al., 2019), pruning (Han et al., 2015), quantization (Jacob et al., 2018), and early exiting (Schwartz et al., 2020b; Xin et al., 2020). Recently, Hooker et al.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics. Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017. Pay attention to the ending:strong neural baselines for the ROC story cloze task. In Proceedings of the 55th An-Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, and Xiangzhan Yu. 2020. Recall and learn: Fine-tuning deep pretrained language models with less forgetting. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7870-7881, Online. Association for Computational Linguistics. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2020. Learning to model and ignore dataset bias with mixed capacity ensembles. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3031-3045, Online. Association for Computational Linguistics. Jesse Dodge, and Noah A. Smith. 2020b. The right tool for the job: Matching model and instance complexities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6640-6651, Online. Association for Computational Linguistics.nual Meeting of the Association for Computational
Linguistics (Volume 2: Short Papers), pages 616-
622, Vancouver, Canada. Association for Computa-
tional Linguistics.
Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller,
Samuel Gershman, and Noah D. Goodman. 2018.
Evaluating compositionality in sentence embed-
dings. In Proceedings of the 40th Annual Meeting
of the Cognitive Science Society, CogSci 2018, Madi-
son, WI, USA, July 25-28, 2018. cognitivescienceso-
ciety.org.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali
Farhadi, Hannaneh Hajishirzi, and Noah Smith.
2020. Fine-tuning pretrained language models:
Weight initializations, data orders, and early stop-
ping. arXiv preprint arXiv:2002.06305.
Mengnan Du, Varun Manjunatha, Rajiv Jain, Ruchi
Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong
Sun, and Xia Hu. 2021. Towards interpreting and
mitigating shortcut learning behavior of NLU mod-
els. arXiv preprint arXiv:2103.06922.
R. French. 1999. Catastrophic forgetting in connection-
ist networks. Trends in Cognitive Sciences, 3:128-
135.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot
learners. In Proceedings of the 59th Annual Meet-
ing of the Association for Computational Linguistics
and the 11th International Joint Conference on Nat-
ural Language Processing (Volume 1: Long Papers),
pages 3816-3830, Online. Association for Computa-
tional Linguistics.
Max Glockner, Vered Shwartz, and Yoav Goldberg.
2018. Breaking NLI systems with sentences that re-
quire simple lexical inferences. In Proceedings of
the 56th Annual Meeting of the Association for Com-
putational Linguistics (Volume 2: Short Papers),
pages 650-655, Melbourne, Australia. Association
for Computational Linguistics.
Suchin Gururangan, Ana Marasović, Swabha
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In
Proceedings of the 58th Annual Meeting of the
Association for Computational Linguistics, pages
8342-8360, Online. Association for Computational
Linguistics.
Song Han, Jeff Pool, John Tran, and William J. Dally.
2015. Learning both weights and connections for
efficient neural networks. In Proceedings of the
28th International Conference on Neural Informa-
tion Processing Systems -Volume 1, NIPS'15, page
1135-1143, Cambridge, MA, USA. MIT Press.
He He, Sheng Zha, and Haohan Wang. 2019. Unlearn
dataset bias in natural language inference by fitting
the residual. In Proceedings of the 2nd Workshop on
Deep Learning Approaches for Low-Resource NLP
(DeepLo 2019), pages 132-142, Hong Kong, China.
Association for Computational Linguistics.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word repre-
sentations. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4129-4138, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean.
2015. Distilling the knowledge in a neural net-
work. In NeurIPS Deep Learning and Representa-
tion Learning Workshop.
Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy
Bengio, and Emily Denton. 2020. Characteris-
ing bias in compressed models. arXiv preprint
arXiv:2010.03058.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Meng-
long Zhu, Matthew Tang, Andrew Howard, Hartwig
Adam, and Dmitry Kalenichenko. 2018. Quanti-
zation and training of neural networks for efficient
integer-arithmetic-only inference. In Proceedings of
the IEEE Conference on Computer Vision and Pat-
tern Recognition (CVPR).
Robin Jia and Percy Liang. 2017. Adversarial exam-
ples for evaluating reading comprehension systems.
In Proceedings of the 2017 Conference on Empiri-
cal Methods in Natural Language Processing, pages
2021-2031, Copenhagen, Denmark. Association for
Computational Linguistics.
Rabeeh Karimi Mahabadi, Yonatan Belinkov, and
James Henderson. 2020. End-to-end bias mitiga-
tion by modelling biases in corpora. In Proceedings
of the 58th Annual Meeting of the Association for
Computational Linguistics, pages 8706-8716, On-
line. Association for Computational Linguistics.
Divyansh Kaushik, Eduard Hovy, and Zachary Lipton.
2020. Learning the difference that makes a differ-
ence with counterfactually-augmented data. In 8th
International Conference on Learning Representa-
tions, ICLR 2020, Virtual Conference, 26 April -1
May, 2019. OpenReview.net.
Divyansh Kaushik and Zachary C. Lipton. 2018. How
much reading does reading comprehension require?
a critical investigation of popular benchmarks. In
Proceedings of the 2018 Conference on Empirical
Methods in Natural Language Processing, pages
5010-5015, Brussels, Belgium. Association for
Computational Linguistics.
J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz,
J. Veness, G. Desjardins, Andrei A. Rusu, K. Milan,
John Quan, Tiago Ramalho, Agnieszka Grabska-
Barwinska, D. Hassabis, C. Clopath, D. Kumaran,
and R. Hadsell. 2017. Overcoming catastrophic for-
getting in neural networks. Proceedings of the Na-
tional Academy of Sciences, 114:3521 -3526.
Teven Le Scao and Alexander Rush. 2021. How many
data points is a prompt worth? In Proceedings of the
2021 Conference of the North American Chapter of
the Association for Computational Linguistics: Hu-
man Language Technologies, pages 2627-2636, On-
line. Association for Computational Linguistics.
Nelson F. Liu, Roy Schwartz, and Noah A. Smith.
2019a. Inoculation by fine-tuning: A method for
analyzing challenge datasets. In Proceedings of the
2019 Conference of the North American Chapter of
the Association for Computational Linguistics: Hu-
man Language Technologies, Volume 1 (Long and
Short Papers), pages 2171-2179, Minneapolis, Min-
nesota. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
RoBERTa: A robustly optimized BERT pretraining
approach. arXiv preprint arXiv:1907.11692.
Charles Lovering, Rohan Jha, Tal Linzen, and Ellie
Pavlick. 2021. Predicting inductive biases of pre-
trained models. In International Conference on
Learning Representations, ICLR 2021, Virtual Con-
ference, 3 May -8 May, 2021. OpenReview.net.
James Martens. 2020. New insights and perspectives
on the natural gradient method. Journal of Machine
Learning Research, 21(146):1-76.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019.
Right for the wrong reasons: Diagnosing syntactic
heuristics in natural language inference. In Proceed-
ings of the 57th Annual Meeting of the Association
for Computational Linguistics, pages 3428-3448,
Florence, Italy. Association for Computational Lin-
guistics.
Junghyun Min, R. Thomas McCoy, Dipanjan Das,
Emily Pitler, and Tal Linzen. 2020. Syntactic
data augmentation increases robustness to inference
heuristics. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics,
pages 2339-2352, Online. Association for Computa-
tional Linguistics.
Nafise Sadat Moosavi, Marcel de Boer, Prasetya Ajie
Utama, and Iryna Gurevych. 2020a.
Improv-
ing robustness by augmenting training sentences
with predicate-argument structures. arXiv preprint
arXiv:2010.12510.
Nafise Sadat Moosavi, Angela Fan, Vered Shwartz,
Goran Glavaš, Shafiq Joty, Alex Wang, and Thomas
Wolf, editors. 2020b. Proceedings of SustaiNLP:
Workshop on Simple and Efficient Natural Language
Processing. Association for Computational Linguis-
tics, Online.
Marius Mosbach, Maksym Andriushchenko, and Diet-
rich Klakow. 2021. On the stability of fine-tuning
BERT: misconceptions, explanations, and strong
baselines. In 9th International Conference on Learn-
ing Representations, ICLR 2021, Virtual Event, Aus-
tria, May 3-7, 2021. OpenReview.net.
Aakanksha Naik, Abhilasha Ravichander, Norman
Sadeh, Carolyn Rose, and Graham Neubig. 2018.
Stress test evaluation for natural language inference.
In Proceedings of the 27th International Conference
on Computational Linguistics, pages 2340-2353,
Santa Fe, New Mexico, USA. Association for Com-
putational Linguistics.
Yixin Nie, Yicheng Wang, and Mohit Bansal. 2019.
Analyzing compositionality-sensitivity of nli mod-
els. Proceedings of the AAAI Conference on Arti-
ficial Intelligence, 33(01):6867-6874.
Yixin Nie, Adina Williams, Emily Dinan, Mohit
Bansal, Jason Weston, and Douwe Kiela. 2020. Ad-
versarial NLI: A new benchmark for natural lan-
guage understanding. In Proceedings of the 58th An-
nual Meeting of the Association for Computational
Linguistics, pages 4885-4901, Online. Association
for Computational Linguistics.
Timothy Niven and Hung-Yu Kao. 2019. Probing neu-
ral network comprehension of natural language ar-
guments. In Proceedings of the 57th Annual Meet-
ing of the Association for Computational Linguis-
tics, pages 4658-4664, Florence, Italy. Association
for Computational Linguistics.
Adam Poliak, Jason Naradowsky, Aparajita Haldar,
Rachel Rudinger, and Benjamin Van Durme. 2018.
Hypothesis only baselines in natural language in-
ference. In Proceedings of the Seventh Joint Con-
ference on Lexical and Computational Semantics,
pages 180-191, New Orleans, Louisiana. Associa-
tion for Computational Linguistics.
Alec Radford, Jeff Wu, Rewon Child, David Luan,
Dario Amodei, and Ilya Sutskever. 2019. Language
models are unsupervised multitask learners. Techni-
cal report, OpenAI.
Ohad Rozen, Vered Shwartz, Roee Aharoni, and Ido
Dagan. 2019. Diversify your datasets: Analyzing
generalization via controlled variance in adversar-
ial datasets. In Proceedings of the 23rd Confer-
ence on Computational Natural Language Learning
(CoNLL), pages 196-205, Hong Kong, China. Asso-
ciation for Computational Linguistics.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavat-
ula, and Yejin Choi. 2020. Winogrande: An adver-
sarial winograd schema challenge at scale. In The
Thirty-Fourth AAAI Conference on Artificial Intelli-
gence, AAAI 2020, pages 8732-8740. AAAI Press.
Victor Sanh, Lysandre Debut, Julien Chaumond, and
Thomas Wolf. 2019. DistilBERT, a distilled version
of BERT: smaller, faster, cheaper and lighter. arXiv
preprint arXiv:1910.01108.
Victor Sanh, Thomas Wolf, Yonatan Belinkov, and
Alexander M. Rush. 2021. Learning from others'
mistakes: Avoiding dataset biases without modeling
them. In 9th International Conference on Learning
Representations, ICLR 2021, Virtual Event, Austria,
May 3-7, 2021. OpenReview.net.
Timo Schick, Helmut Schmid, and Hinrich Schütze.
2020. Automatically identifying words that can
serve as labels for few-shot text classification. In
Proceedings of the 28th International Conference
on Computational Linguistics, pages 5569-5578,
Barcelona, Spain (Online). International Committee
on Computational Linguistics.
Timo Schick and Hinrich Schütze. 2021a. Exploiting
cloze-questions for few-shot text classification and
natural language inference. In Proceedings of the
16th Conference of the European Chapter of the As-
sociation for Computational Linguistics: Main Vol-
ume, pages 255-269, Online. Association for Com-
putational Linguistics.
Timo Schick and Hinrich Schütze. 2021b. It's not just
size that matters: Small language models are also
few-shot learners. In Proceedings of the 2021 Con-
ference of the North American Chapter of the Asso-
ciation for Computational Linguistics: Human Lan-
guage Technologies, pages 2339-2352, Online. As-
sociation for Computational Linguistics.
Tal Schuster, Adam Fisch, and Regina Barzilay. 2021.
Get your vitamin C! robust fact verification with con-
trastive evidence. In Proceedings of the 2021 Con-
ference of the North American Chapter of the Asso-
ciation for Computational Linguistics: Human Lan-
guage Technologies, pages 624-643, Online. Asso-
ciation for Computational Linguistics.
Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel
Roberto Filizzola Ortiz, Enrico Santus, and Regina
Barzilay. 2019. Towards debiasing fact verification
models. In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Processing
and the 9th International Joint Conference on Natu-
ral Language Processing (EMNLP-IJCNLP), pages
3419-3425, Hong Kong, China. Association for
Computational Linguistics.
Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren
Etzioni. 2020a. Green AI. Communications of the
ACM, 63(12):54-63.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila
Zilles, Yejin Choi, and Noah A. Smith. 2017. The
effect of different writing tasks on linguistic style: A
case study of the ROC story cloze task. In Proceed-
ings of the 21st Conference on Computational Natu-
ral Language Learning (CoNLL 2017), pages 15-25,
Vancouver, Canada. Association for Computational
Linguistics.
Roy Schwartz, Gabriel Stanovsky, and Swabha Swayamdipta.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.
Masatoshi Tsuchiya. 2018.
Performance impact
caused by hidden bias of training data for recog-
nizing textual entailment. In Proceedings of the
Eleventh International Conference on Language Re-
sources and Evaluation (LREC 2018), Miyazaki,
Japan. European Language Resources Association
(ELRA).
Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna
Gurevych. 2020a. Mind the trade-off: Debiasing
NLU models without degrading the in-distribution
performance. In Proceedings of the 58th Annual
Meeting of the Association for Computational Lin-
guistics, pages 8717-8729, Online. Association for
Computational Linguistics.
Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna
Gurevych. 2020b. Towards debiasing NLU models
from unknown biases. In Proceedings of the 2020
Conference on Empirical Methods in Natural Lan-
guage Processing (EMNLP), pages 7597-7610, On-
line. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Fe-
lix Hill, Omer Levy, and Samuel Bowman. 2018.
GLUE: A multi-task benchmark and analysis plat-
form for natural language understanding. In Pro-
ceedings of the 2018 EMNLP Workshop Black-
boxNLP: Analyzing and Interpreting Neural Net-
works for NLP, pages 353-355, Brussels, Belgium.
Association for Computational Linguistics.
Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun
Liu, and Samuel R. Bowman. 2020. Learning which
features matter: RoBERTa acquires a preference for
linguistic generalizations (eventually). In Proceed-
ings of the 2020 Conference on Empirical Methods
in Natural Language Processing (EMNLP), pages
217-235, Online. Association for Computational
Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman.
2018. A broad-coverage challenge corpus for sen-
tence understanding through inference. In Proceed-
ings of the 2018 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume
1 (Long Papers), pages 1112-1122, New Orleans,
Louisiana. Association for Computational Linguis-
tics.
Ji Xin, Rodrigo Nogueira, Yaoliang Yu, and Jimmy Lin.
2020. Early exiting BERT for efficient document
ranking. In Proceedings of SustaiNLP: Workshop on
Simple and Efficient Natural Language Processing,
pages 83-88, Online. Association for Computational
Linguistics.
Yadollah Yaghoobzadeh, Soroush Mehri, Remi Ta-
chet des Combes, T. J. Hazen, and Alessandro Sor-
doni. 2021. Increasing robustness to spurious cor-
relations using forgettable examples. In Proceed-
ings of the 16th Conference of the European Chap-
ter of the Association for Computational Linguistics:
Main Volume, pages 3319-3332, Online. Associa-
tion for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali
Farhadi, and Yejin Choi. 2019. HellaSwag: Can
a machine really finish your sentence?
In Pro-
ceedings of the 57th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 4791-
4800, Florence, Italy. Association for Computational
Linguistics.
Yuan Zhang, Jason Baldridge, and Luheng He. 2019.
PAWS: Paraphrase adversaries from word scram-
bling. In Proceedings of the 2019 Conference of
the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, Volume 1 (Long and Short Papers), pages
1298-1308, Minneapolis, Minnesota. Association
for Computational Linguistics.
A Experimental Details
Manual templates and mapping We use the following prompt templates and word-to-label mapping for the three tasks we evaluate on:

Task (labels) | Template | Label Words
MNLI (manual): entailment, neutral, contradiction | s1?[MASK], s2 | Yes, Maybe, No
SNLI (manual): entailment, neutral, contradiction | s1?[MASK], s2 | Yes, Maybe, No
QQP (manual): duplicate, non-duplicate | s1[MASK], s2 | Yes, No
MNLI (auto): entailment, neutral, contradiction | s1.[MASK], you are right , s2 | Fine, Plus, Otherwise
MNLI (auto) | s1.[MASK], you're right , s2 | There, Plus, Otherwise
MNLI (auto) | s1.[MASK] ! s2 | Meaning, Plus, Otherwise

Table 3: Templates and label words used to finetune and evaluate on MNLI, SNLI, and QQP.
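To make the template-to-verbalizer mapping in Table 3 concrete, the following is a small illustrative sketch (ours, not code from the paper or its repository) of how the manual MNLI pattern and its label words could be applied to a premise-hypothesis pair; the function name and example sentences are made up for illustration.

```python
# Illustrative sketch only: build a cloze-style prompt with the manual MNLI
# pattern "s1?[MASK], s2" and map a predicted label word back to an NLI label.
def apply_manual_mnli_template(s1: str, s2: str, mask_token: str = "[MASK]") -> str:
    return f"{s1}?{mask_token}, {s2}"

# Word-to-label mapping from Table 3 (MNLI/SNLI manual).
WORD_TO_LABEL = {"Yes": "entailment", "Maybe": "neutral", "No": "contradiction"}

prompt = apply_manual_mnli_template("A man is playing a guitar.", "A person is making music.")
print(prompt)  # A man is playing a guitar.?[MASK], A person is making music.
```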
Table 4: Sampled examples from each of the challenge datasets we used for evaluation.
MNLI (acc.) | IN | HANS
manual | 51.1 | 62.6
manual Ft-#512 | 84.3 | 54.8
template-1 | 46.3 | 62.0
template-1 Ft-#512 | 84.2 | 53.2
template-2 | 49.9 | 61.3
template-2 Ft-#512 | 83.9 | 52.7
template-3 | 44.5 | 61.7
template-3

Table 5: Evaluation results of different MNLI templates provided by Gao et al. (2021). Models are evaluated against both the in-distribution (IN) set and corresponding challenge set of MNLI.
…mediately adopt heuristics by predicting almost all examples exhibiting lexical overlap as entailment.
Table 6: Evaluation results of different pretrained language models. Models are evaluated against both the in-distribution (In.) set and corresponding challenge set.

QQP / PAWS
QQP (dup) | QQP (non) | PAWS (dup) | PAWS (non)
512 | -0.48 | -0.34 | 0.92 | 13.07
256 | -1.09 | -0.42 | 2.61 | 18.37
128 | -1.15 | -2.12 | 0.59 | 16.13
64 | -1.76 | -1.50 | 2.17 | 10.16
32 | -3.32 | -0.67 | -2.55 | 22.73
16 | -0.94 | -2.46 | 3.98 | -2.63
The code is available at https://github.com/UKPLab/emnlp2021-prompt-ft-heuristics
2 E.g., appending a cloze prompt "It was [MASK]" to a sentiment prediction input sentence "Delicious food!", and obtaining the sentiment label by comparing the probabilities assigned to the words "great" and "terrible".
See appendix A for details of HANS, PAWS, and Scramble Test test sets.
See Appendix A for implementation details.
See Appendix B for the detailed results.
Acknowledgement
We thank Michael Bugert, Tim Baumgärtner, Jan Buchman, and the anonymous reviewers for their constructive feedback. This work is funded by the German Research Foundation through the research training group AIPHES (GRK 1994/1) and by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. |
[
"Learning to Format Coq Code Using Language Models",
"Learning to Format Coq Code Using Language Models"
] | [
"Pengyu Nie pynie@utexas.edu \nThe University of Texas at Austin\nAustinTXUSA\n",
"Karl Palmskog palmskog@acm.org \nKTH Royal Institute of Technology\nStockholmSweden\n",
"Junyi Jessy Li \nThe University of Texas at Austin\nAustinTXUSA\n",
"Milos Gligoric gligoric@utexas.edu \nThe University of Texas at Austin\nAustinTXUSA\n"
] | [
"The University of Texas at Austin\nAustinTXUSA",
"KTH Royal Institute of Technology\nStockholmSweden",
"The University of Texas at Austin\nAustinTXUSA",
"The University of Texas at Austin\nAustinTXUSA"
] | [] | Should the final right bracket in a Record declaration be on a separate line? Should arguments to rewrite be separated by a single space? Coq code tends to be written in distinct manners by different people and teams. The expressiveness, flexibility, and extensibility of Coq's languages and notations means that Coq projects have a wide variety of recognizable coding styles, sometimes explicitly documented as conventions on naming and formatting. In particular, even inexperienced users can distinguish vernacular using the standard library and plain Ltac from idiomatic vernacular using the Mathematical Components (MathComp) library and SSReflect.While coding conventions are important for comprehension and maintenance, they are costly to document and enforce. Rule-based formatters, such as Coq's beautifier, have limited flexibility and only capture small fractions of desired conventions in large verification projects. We believe that application of language models-a class of Natural Language Processing (NLP) techniques for capturing regularities in corpora-can provide a solution to this conundrum [1]. More specifically, we believe that an approach based on automatically learning conventions from existing Coq code, and then suggesting idiomatic code to users in the proper context, can be superior to manual approaches and static analysis tools-both in terms of effort and results.As a first step, we here outline initial models to learn and suggest space formatting in Coq files, with a preliminary implementation for Coq 8.10, and evaluated using on a corpus based on MathComp 1.9.0 which comprises 164k lines of Coq code from four core projects[3]. | null | [
"https://arxiv.org/pdf/2006.16743v1.pdf"
] | 220,265,862 | 2006.16743 | 1c98995fce472d11974ce281870490d8a479f4f5 |
Learning to Format Coq Code Using Language Models
Jun 2020
Pengyu Nie pynie@utexas.edu
The University of Texas at Austin
AustinTXUSA
Karl Palmskog palmskog@acm.org
KTH Royal Institute of Technology
StockholmSweden
Junyi Jessy Li
The University of Texas at Austin
AustinTXUSA
Milos Gligoric gligoric@utexas.edu
The University of Texas at Austin
AustinTXUSA
Learning to Format Coq Code Using Language Models
Jun 2020
Should the final right bracket in a Record declaration be on a separate line? Should arguments to rewrite be separated by a single space? Coq code tends to be written in distinct manners by different people and teams. The expressiveness, flexibility, and extensibility of Coq's languages and notations means that Coq projects have a wide variety of recognizable coding styles, sometimes explicitly documented as conventions on naming and formatting. In particular, even inexperienced users can distinguish vernacular using the standard library and plain Ltac from idiomatic vernacular using the Mathematical Components (MathComp) library and SSReflect.While coding conventions are important for comprehension and maintenance, they are costly to document and enforce. Rule-based formatters, such as Coq's beautifier, have limited flexibility and only capture small fractions of desired conventions in large verification projects. We believe that application of language models-a class of Natural Language Processing (NLP) techniques for capturing regularities in corpora-can provide a solution to this conundrum [1]. More specifically, we believe that an approach based on automatically learning conventions from existing Coq code, and then suggesting idiomatic code to users in the proper context, can be superior to manual approaches and static analysis tools-both in terms of effort and results.As a first step, we here outline initial models to learn and suggest space formatting in Coq files, with a preliminary implementation for Coq 8.10, and evaluated using on a corpus based on MathComp 1.9.0 which comprises 164k lines of Coq code from four core projects[3].
Language Models for Coq Formatting
Natural language has repeating patterns which can be predicted statistically at the level of, say, individual words with high accuracy. Programming languages have similar predictability, usually called naturalness, which can be exploited to perform a variety of software engineering tasks [1]. We consider, from this view, the problem of predicting spacing between tokens obtained from Coq's lexer. For example, according to MathComp's contribution guide, there should be no space between the tactic tokens move and =>, which we can learn by observing the relative locations of the two tokens in a large Coq corpus adhering to the conventions.
n-gram model: We constructed a baseline model based on predicting the next token after observing the n − 1 previous tokens, as often used in NLP and software engineering. To capture formatting, we inserted special tokens holding spacing information before each token.
Neural model: We constructed a sophisticated model based on bi-directional recurrent neural networks [4]. The model embeds Coq tokens and spacing information into vectors, and predicts token formatting using the embedding vectors of both the left-hand and right-hand context.
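To illustrate the baseline concretely, here is a minimal sketch, written by us and not taken from the tool's implementation, of an n-gram-style spacing predictor: it counts which spacing (no space, single space, newline, ...) precedes a token given the previous lexer tokens, predicts the most frequent option, and backs off to shorter contexts when needed. All names and the toy training data are ours.

```python
from collections import Counter, defaultdict

class SpacingNGram:
    """Toy n-gram spacing model: predict the whitespace that precedes a token
    from the previous n-1 lexer tokens plus the token itself."""
    def __init__(self, n: int = 3):
        self.n = n
        self.counts = defaultdict(Counter)

    def train(self, sentences):
        # Each sentence is a list of (token, spacing_before_token) pairs.
        for sent in sentences:
            tokens = [tok for tok, _ in sent]
            for i, (_, spacing) in enumerate(sent):
                context = tuple(tokens[max(0, i - self.n + 1):i + 1])
                self.counts[context][spacing] += 1

    def predict(self, prev_tokens, next_token, k: int = 1):
        context = tuple(prev_tokens[-(self.n - 1):]) + (next_token,)
        while context and context not in self.counts:
            context = context[1:]                 # back off to shorter contexts
        dist = self.counts.get(context, Counter({" ": 1}))
        return [s for s, _ in dist.most_common(k)]

model = SpacingNGram(n=3)
model.train([[("move", " "), ("=>", ""), ("[", " ")]])
print(model.predict(["move"], "=>"))              # [''] i.e. no space before "=>"
```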
Preliminary Implementation and Evaluation
To implement learning and suggesting of spacing in Coq files based on our models, we modified the SerAPI library [2] to serialize tokens in Coq files, organized at the sentence level. We then serialized all sentences in our MathComp corpus, and extracted information on token kind and spacing using the source file location information included in each token. Finally, we implemented our language models using the PyTorch machine learning framework. To evaluate the models using our implementation, we divided corpus files into training, validation, and testing sets, and calculated the top-1 and top-3 accuracy of space prediction on the testing set after training. According to the results, which can be seen in Table 1, both the n-gram and the neural model are able to learn and suggest formatting conventions with high accuracy. However, the more sophisticated neural model performs significantly better than the n-gram model.
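For concreteness, the top-1 and top-3 accuracies reported in Table 1 can be computed with a routine like the following sketch (ours, not the tool's evaluation code): each prediction is a ranked list of candidate spacings, and a test example counts as correct if the gold spacing appears among the top k candidates.

```python
def top_k_accuracy(predictions, gold, k: int = 1) -> float:
    """predictions: list of ranked candidate spacings per position; gold: true spacings."""
    hits = sum(1 for ranked, g in zip(predictions, gold) if g in ranked[:k])
    return hits / len(gold)

preds = [["", " "], [" ", "\n"], ["\n", " "]]
gold = ["", "\n", " "]
print(top_k_accuracy(preds, gold, k=1))  # 0.333...
print(top_k_accuracy(preds, gold, k=3))  # 1.0
```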
Challenges and Future Directions
Despite the high accuracy achieved by our preliminary implementation even when using the baseline n-gram model, we believe our spacing prediction (based only on raw token streams) needs significant tuning for practical use. For example, newlines before Qed sentences often get mispredicted, and unlike for name suggestions [3], it is usually inconvenient to inspect more than the top-1 suggestion for spacing. Moreover, for MathComp, we were able to construct, with help from maintainers, a sufficiently large corpus with strict adherence to conventions; for other projects, it may be more challenging, e.g., due to project size or lack of consensus on conventions. We ask the Coq community for input on the following challenges and directions: Measuring successful predictions: Certain formatting errors, such as improper midsentence line breaks, are usually considered worse than others. Can we collaboratively define Coq-specific measures of formatting prediction success and use them to improve the models? Finding conventions and corpus code: Which files in which projects are idiomatically formatted? What are the main coding styles used in the Coq community? With agreement on these questions, style conformity for files and whole projects can be precisely measured.
Manually improving generated suggestions: How do we best represent and apply rulebased conventions to do reranking of suggestions generated by a trained language model? How should we weigh manually specified conventions against learned ones?
Refactoring of code to adhere to conventions: Our preliminary implementation only modifies spacing, but code may require refactoring to properly address convention requirements, most simply, introducing or changing bullets and specific notations.
Integrating suggestions into the development process: How do we best provide our tools for suggesting conventions to the community? For example, displaying formatting suggestions during code reviews of pull requests on GitHub may work well for large projects, but small projects may have different workflows and thus benefit more from integration with IDEs.
Table 1: Evaluation of Formatting Suggestions on our MathComp Corpus.
Model | Top-1 Acc. | Top-3 Acc.
Neural | 96.8% | 99.7%
n-gram | 93.4% | 98.9%
[1] M. Allamanis, E. T. Barr, P. Devanbu, and C. Sutton. A survey of machine learning for big code and naturalness. ACM Computing Surveys, 51(4):81, 2018.
[2] E. J. Gallego Arias. SerAPI: Machine-friendly, data-centric serialization for Coq. Technical report, MINES ParisTech, 2016. https://hal-mines-paristech.archives-ouvertes.fr/hal-01384408.
[3] P. Nie, K. Palmskog, J. J. Li, and M. Gligoric. Deep generation of Coq lemma names using elaborated terms. In IJCAR, 2020. To appear. Extended version at https://arxiv.org/abs/2004.07761.
[4] M. E. Peters, W. Ammar, C. Bhagavatula, and R. Power. Semi-supervised sequence tagging with bidirectional language models. In ACL, pages 1756-1765, 2017.
| [] |
[
"Improving and Simplifying Pattern Exploiting Training",
"Improving and Simplifying Pattern Exploiting Training"
] | [
"Derek Tam \nUNC Chapel Hill\n\n",
"Rakesh R Menon rrmenon@cs.unc.edu \nUNC Chapel Hill\n\n",
"Mohit Bansal mbansal@cs.unc.edu \nUNC Chapel Hill\n\n",
"Shashank Srivastava ssrivastava@cs.unc.edu \nUNC Chapel Hill\n\n",
"Colin Raffel craffel@cs.unc.edu \nUNC Chapel Hill\n\n"
] | [
"UNC Chapel Hill\n",
"UNC Chapel Hill\n",
"UNC Chapel Hill\n",
"UNC Chapel Hill\n",
"UNC Chapel Hill\n"
] | [] | Recently, pre-trained language models (LMs) have achieved strong performance when finetuned on difficult benchmarks like Super-GLUE. However, performance can suffer when there are very few labeled examples available for fine-tuning. Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning. However, PET uses task-specific unlabeled data. In this paper, we focus on few shot learning without any unlabeled data and introduce ADAPET, which modifies PET's objective to provide denser supervision during fine-tuning. As a result, ADAPET outperforms PET on Su-perGLUE without any task-specific unlabeled data. Our code can be found at https:// github.com/rrmenon10/ADAPET. | 10.18653/v1/2021.emnlp-main.407 | [
"https://arxiv.org/pdf/2103.11955v3.pdf"
] | 232,307,483 | 2103.11955 | e812919d2cd818e7262f01b32dc5e630fc825af1 |
Improving and Simplifying Pattern Exploiting Training
Derek Tam
UNC Chapel Hill
Rakesh R Menon rrmenon@cs.unc.edu
UNC Chapel Hill
Mohit Bansal mbansal@cs.unc.edu
UNC Chapel Hill
Shashank Srivastava ssrivastava@cs.unc.edu
UNC Chapel Hill
Colin Raffel craffel@cs.unc.edu
UNC Chapel Hill
Improving and Simplifying Pattern Exploiting Training
Recently, pre-trained language models (LMs) have achieved strong performance when finetuned on difficult benchmarks like Super-GLUE. However, performance can suffer when there are very few labeled examples available for fine-tuning. Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning. However, PET uses task-specific unlabeled data. In this paper, we focus on few shot learning without any unlabeled data and introduce ADAPET, which modifies PET's objective to provide denser supervision during fine-tuning. As a result, ADAPET outperforms PET on Su-perGLUE without any task-specific unlabeled data. Our code can be found at https:// github.com/rrmenon10/ADAPET.
Introduction
Pre-trained language models (LMs) have shown significant gains across a wide variety of natural language processing (NLP) tasks in recent years (Devlin et al., 2019;Radford et al., 2018;Raffel et al., 2020). Most of these gains are obtained by fine-tuning language models on labeled data for a particular task. However, performance can suffer when there is very limited labeled data available for a downstream task (Xie et al., 2020;Chen et al., 2020).
Recently, GPT-3 (Brown et al., 2020) demonstrated how language models, when scaled to hundreds of billions of parameters, can learn well when primed with only a few labeled examples. However, the scale of GPT-3 (175B parameters) makes it impractical to study. There is, therefore, a need to develop smaller language models that can work equally well with limited labeled data.
Pattern-Exploiting Training (PET; Schick and Schütze, 2021a,b) reformulates natural language understanding tasks as cloze-style questions and performs gradient-based fine-tuning. In doing so, PET outperforms GPT-3 with few labeled examples using ALBERT (Lan et al., 2020). However, PET uses additional task-specific unlabeled data.
* Equal contribution
We propose ADAPET (A Densely-supervised Approach to Pattern Exploiting Training) that uses more supervision by decoupling the losses for the label tokens and a label-conditioned masked language modeling (MLM) objective over the full original input. On SuperGLUE (Wang et al., 2019) with 32 labeled examples per task, ADAPET outperforms iPET without any unlabeled data.
Background
Cloze-style questions and MLM. A cloze task is a problem where certain parts of a text are removed, and the goal is to replace the missing portion based on the context (Taylor, 1953). Here, the text that has some parts removed is considered a cloze-style question. Inspired by cloze tasks, BERT introduces the MLM objective that tries to predict the original word at the masked out positions in a cloze question.
Notation. Let G represent a language model, x represent the input example converted into a cloze-style question, and y represent the label at the masked location m. We are interested in the quantity [[G m (x)]] z which represents the logit value for a specific token z at the mask location m.
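As an illustration of this notation (our sketch, not the authors' code): with the HuggingFace Transformers library, [[G_m(x)]]_z can be read off as the logit at the mask position for a specific token z. The checkpoint name and the example prompt below are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("albert-xxlarge-v2")  # example checkpoint
model = AutoModelForMaskedLM.from_pretrained("albert-xxlarge-v2")

# x: a cloze-style input with one [MASK] position m.
x = f"Oil prices fall back ? {tokenizer.mask_token} , Oil prices rise"
enc = tokenizer(x, return_tensors="pt")
mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**enc).logits              # (1, seq_len, vocab_size)
mask_logits = logits[0, mask_pos[0]]          # G_m(x): logits at the mask location

# [[G_m(x)]]_z for a specific token z, here the label word "Yes".
z = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Yes"))[0]
print(mask_logits[z].item())
```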
Unlabeled Data Access
Schick and Schütze (2021a,b) assumes access to task-specific unlabeled data. For some applications such as sentiment analysis, unlabeled data can be cheap to acquire. But for SuperGLUE, where the examples are pairs of text with a label that is constructed to test a model's natural language understanding abilities, it might be more expensive to acquire unlabeled data. For example, the construction of BoolQ requires annotators to filter good question-article pairs before assigning labels (Clark et al., 2019). Hence, for our setup, we do not assume access to task-specific unlabeled data, which aligns with the setup in Brown et al. (2020).
PET
Our work primarily builds on top of PET (Schick and Schütze, 2021a,b). PET converts an example into a cloze-style question, similar to the input format used during pre-training. The query-form in PET is defined by a Pattern-Verbalizer Pair (PVP). Each PVP consists of • a pattern which describes how to convert the inputs into a cloze-style question with masked out tokens. We illustrate this for an entailment task in Figure 2a. Here, we convert the premise ("Oil prices fall back") and the hypothesis ("Oil prices rise") into a clozestyle question with the pattern: <premise> ? <mask>, <hypothesis>.
• a verbalizer which describes the way to convert the classes into the output space of tokens. In Figure 2a, the verbalizer maps "Not Entailment/Entailment" to "No/Yes".
After hand-designing a PVP for a given task, PET obtains logits from the model G m (x) (in the singletoken label case). Given the space of output tokens Y, (in Figure 2a {"Yes", "No"}) PET computes a softmax over y ∈ Y, using the logits from G m (x). The final loss is shown in Equation 2.
q(y|x) = \frac{\exp([[G_m(x)]]_y)}{\sum_{y' \in Y} \exp([[G_m(x)]]_{y'})}    (1)

L = CE(q(y^*|x), y^*)    (2)
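A minimal sketch of Equations 1-2 (our illustration, not PET's actual implementation): the logits at the mask position are restricted to the verbalizer's label tokens, normalized with a softmax, and trained with cross entropy against the gold label.

```python
import torch
import torch.nn.functional as F

def pet_label_loss(mask_logits: torch.Tensor, label_token_ids: list, gold_idx: int) -> torch.Tensor:
    """mask_logits: (vocab_size,) logits at the [MASK] position.
    label_token_ids: one vocabulary id per class, e.g. for "Yes"/"No".
    gold_idx: index of the correct class within label_token_ids."""
    class_logits = mask_logits[label_token_ids]            # keep only the label tokens (Eq. 1)
    return F.cross_entropy(class_logits.unsqueeze(0),      # Eq. 2
                           torch.tensor([gold_idx]))
```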
PET additionally distils knowledge from an ensemble of models trained with different patterns on both labeled and unlabeled data. iPET is an iterative variant of PET that trains models across iterations. The size of the training set gradually increases each iteration based on the labels of previous iterations. For a description of the different patterns used across the tasks (Schick and Schütze, 2021b), we refer the reader to Appendix A.1.
ADAPET
Our proposed approach, called ADAPET, modifies the objective from PET so that it can provide more supervision and learn without task-specific unlabeled data.
Decoupling Label Losses
PET computes class probabilities using the logits that correspond to the labels for a specific task. This discards the information from all the other logits in the vocabulary that do not correspond to a label. For example, in Figure 2a, "oil" is not a class token so the LM head should assign a low probability to "oil". However, because PET only extracts the token logits that correspond to labels, the non-label tokens will never have any gradient signal.
One solution is to change the objective to a regular MLM objective. In that case, there would be no distinction between tokens corresponding to incorrect classes and any other token in the vocabulary. For example, in Figure 2a, the model would be trained to treat "Yes" (the incorrect token) the same as any other token such as "oil". While we want the model to discourage "oil", the training objective should still specifically suppress "Yes".
In ADAPET, we penalize incorrect class tokens and encourage correct class tokens. Specifically, the model computes the probability of each token as a softmax normalized across all tokens so that each probability is influenced by the logits of all the vocabulary tokens. Then, we maximize the probability of the correct class tokens and minimize the probability of incorrect class tokens. This is equivalent to binary cross entropy, as shown in Figure 2a. Formally, if y * is the true label for an example,
q(y|x) = \frac{\exp([[G_m(x)]]_y)}{\sum_{v' \in V} \exp([[G_m(x)]]_{v'})}    (3)

L_D = \log q(y^*|x) - \sum_{y \neq y^*} \log q(y|x)    (4)
The loss can be rewritten using binary cross entropy or regular cross entropy as:
L_D = BCE(q(y^*|x), 1) + \sum_{y \neq y^*} BCE(q(y|x), 0)    (5)
    = CE(q(y^*|x), y^*) - \sum_{y \neq y^*} CE(q(y|x), y)    (6)
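The decoupled objective of Equations 3-5 can be sketched as follows (our illustration under the same interface as the PET sketch above, not the released ADAPET code); note that the softmax runs over the full vocabulary before the per-class binary cross entropy is applied.

```python
import torch
import torch.nn.functional as F

def decoupled_label_loss(mask_logits: torch.Tensor, label_token_ids: list, gold_idx: int) -> torch.Tensor:
    probs = F.softmax(mask_logits, dim=-1)                 # q(.|x) normalized over the whole vocab (Eq. 3)
    label_probs = probs[label_token_ids]                   # one probability per class token
    targets = torch.zeros_like(label_probs)
    targets[gold_idx] = 1.0                                # 1 for the correct label, 0 for incorrect ones
    # Binary cross entropy over the class tokens (Eq. 5); mean instead of sum, for brevity.
    return F.binary_cross_entropy(label_probs, targets)
```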
Unified Loss for Different Tasks
For normal tasks where the label is exactly one token, PET uses the formulation described in Equation 2. For WSC (Levesque et al., 2012), which does not have incorrect class labels, PET uses the original MLM objective rather than Equation 2. This is equivalent to Equation 5 without the second term in ADAPET.
For other tasks with multi-token labels (COPA (Roemmele et al., 2011), ReCoRD (Zhang et al., 2018)), PET computes the probability of the classes as the sum of the log probabilities of the individual tokens. However, it is not obvious how to convert these label probabilities into a valid probability distribution.
Rather than normalizing the probabilities, PET uses a hinge loss to ensure a margin between the correct label and the incorrect labels.
In ADAPET, for each token in the label, L D discriminates the correct token from every other tokens, via the following loss:
L_D = \sum_{z^* \in y^*} BCE(q(z^*|x), 1) + \sum_{y \neq y^*} \sum_{z \in y} BCE(q(z|x), 0)    (7)
This objective splits a single loss based on multiple tokens into multiple losses over single tokens. As a result, we do not need to multiply the probabilities of the individual tokens, and thus do not run into normalization issues.
Label Conditioning
The PET objective encapsulates the question: "Given the input, what is the right label?." However, since the input space and output space both consist of tokens, we can also ask the inverse question, "Given the answer, what is the correct context?". The model is trained to predict the input given the label. Formally, let x ′ be the original input x modified by randomly masking out tokens from the context and x m be the original context tokens masked out in x ′ . In the label conditioning objective, we are interested in the quantity P (x m |x ′ , y), which encourages the model to predict the masked out tokens in the input given the label.
During training, if the label is correct, the model has to predict the original token, as shown in Figure 2b. Additionally, if the label is wrong, the model is forced to not predict the original token. We maximize P(x_m|x', y^*) and minimize P(x_m|x', y) ∀ y ≠ y^*.
This objective is the same as the decoupling label losses approach described in Equation 5, except with different inputs and outputs.
q(x_m|x', y) = \frac{\exp([[G_m(x', y)]]_{x_m})}{\sum_{v' \in V} \exp([[G_m(x', y)]]_{v'})}    (8)

L_M = BCE(q(x_m|x', y^*), 1) + \sum_{y \neq y^*} BCE(q(x_m|x', y), 0)    (9)
The final loss for ADAPET is a sum of the decoupled label loss and the label-conditioned MLM loss.
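A rough sketch of the label-conditioned MLM term (Equations 8-9) and the final objective, written by us as an illustration rather than the paper's implementation; for brevity only one incorrect label is shown, whereas the loss sums over all of them.

```python
import torch
import torch.nn.functional as F

def label_conditioned_mlm_loss(model, ids_with_correct_label, ids_with_wrong_label,
                               mask_positions, original_token_ids):
    """model: any masked LM returning .logits; the two id tensors contain the same
    randomly masked context, once with the correct and once with a wrong label."""
    def masked_token_probs(input_ids):
        logits = model(input_ids=input_ids).logits                    # (1, seq_len, vocab)
        probs = F.softmax(logits[0, mask_positions], dim=-1)          # Eq. 8
        return probs[torch.arange(len(mask_positions)), original_token_ids]

    p_correct = masked_token_probs(ids_with_correct_label)            # pushed towards 1
    p_wrong = masked_token_probs(ids_with_wrong_label)                # pushed towards 0
    return (F.binary_cross_entropy(p_correct, torch.ones_like(p_correct))
            + F.binary_cross_entropy(p_wrong, torch.zeros_like(p_wrong)))  # Eq. 9

# Final ADAPET objective: decoupled label loss + label-conditioned MLM loss.
# total_loss = decoupled_label_loss(...) + label_conditioned_mlm_loss(...)
```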
Results and Analyses
We run experiments on SuperGLUE, and follow the same data split as Schick and Schütze (2021b), which consists of 32 labeled examples for each task.
Our code is implemented in Pytorch (Paszke et al., 2019) using HuggingFace (Wolf et al., 2020). We use the same pre-trained model and hyperparameters as PET, except we increased the number of training batches to 1k and choose the best checkpoint on the dev set, since it has been shown that training longer can help even with few samples (Zhang et al., 2021). For all ablation experiments, we only use the first pattern 3 and train for 250 batches. We refer the reader to Appendix B for more details.
Since we do not assume access to unlabeled data (see Section 2.1), we do not apply the three-step training procedure of PET and iPET to ADAPET. We still assume access to the full development set to choose the best masking ratio and checkpoint model, since PET presumably used the full development set to choose their hyperparameters which we copy. Table 1 and Table 2 shows our results on the validation and test sets on SuperGLUE. We compare against GPT-3 and PET/iPET. Note that PET/iPET uses unlabeled data and a three step training procedure (Schick and Schütze, 2021b). For fair comparison, we train PET with a single pattern (sPET) for 1k batches, and report scores for the best performing pattern on the validation set. We include a further analysis of how well the models perform for each pattern in Appendix A.2.
Results
On the dev set, ADAPET outperforms all models that do not use unlabeled data, and even outperforms PET's iterative variant, iPET, by 0.5 points absolute. Surprisingly, sPET outperforms PET, but still loses to iPET by 2.6 points. But, this is in line with the ablation from Schick and Schütze (2021b), which shows that ensembling sPET models, trained with only labeled data, outperforms PET. Also, Gao et al. (2021) show that the model with the best performing pattern outperforms ensembling sPET models.
On the test set, ADAPET outperforms all other models including iPET without access to the unlabeled examples (∼9k on average per task) and achieves state-of-the-art for few-shot learning on SuperGLUE.
Loss Ablation
Conclusion
In this paper, we propose ADAPET, a new method for few-shot natural language understanding. Crucially, our work does not use unlabeled data and instead leverages more supervision to train the model. Assuming the same data budget, our model outperforms GPT-3 on SuperGLUE using just 0.1% as many parameters. However, our method has limitations; for example, we use a naive random masking strategy, which might not make sense for label conditioning. Future work could look into better masking strategies for label-conditioned MLM, such as masking important tokens based on the gradients of the logits for an example, as has been done for interpreting models (Simonyan et al., 2014).

For this QA task, we are given a paragraph p and a yes/no question q. We use two forms of labels for this task: yes/no and true/false.

This is a textual entailment task similar to CB, except that we have just two labels for classification, entailment and not entailment. We map these two labels to yes and no respectively in the PVPs.

In this task, we are given two sentences s 1 and s 2 and we need to identify if a word w occurs in the same sense in both sentences.
• Pattern : "s 1 " / "s 2 " Similar sense of "w"? ___ . Verbalizer: yes/no Here, we are given a sentence s that contains some nouns and pronouns. We are tasked with finding the correct noun that a specific pronoun p refers to. Within the FewGLUE dataset, we are provided with the only positive examples and hence our verbalizer contains just the correct noun phrase.
• Pattern : s The pronoun '*p*' refers to ___. Verbalizer: correct noun
• Pattern : s In the previous sentence, the pronoun '*p*' refers to __.
Verbalizer: correct noun
• Pattern : s In the passage above, what does the pronoun '*p*' refer to? Answer: __.
Verbalizer: correct noun
A.1.7 MultiRC (Khashabi et al., 2018)
In this task, we are given a passage p and multiple questions q. We are tasked with finding the right answer from a list of candidate answers e. Here, we pose it as a binary classification task where we predict yes if e answers q with context p, else no.
• Pattern : p. Question: q? Is it e? ___. Verbalizer: yes/no
• Pattern : p. Question: q? Is the correct answer "e"? ___.
Verbalizer: yes/no
• Pattern : p. Based on the previous passage, q? Is "e" a correct answer? __.
Verbalizer: yes/no
A.1.8 ReCoRD (Zhang et al., 2018)
For this task, given a passage p and cloze question q, we are supposed to find the right replacement for a '@placeholder' token in the question. Since the task itself is already framed in a cloze-style format, we merely concatenate the passage with the cloze question to form the input to the language model.
A.2 Results on Individual Patterns
We train the sPET and ADAPET models using the same experimental setup mentioned in Section 4 and report results across all patterns for all datasets on the validation dataset of SuperGLUE. Note that the numbers in Table 1 contains the best numbers from this table for the dev results. Our results can be found in Table 4 We trained ADAPET for 1k batches and compared to PET/iPET which were trained for 250 batches. In this section, we compare sPET and ADAPET trained for 250 and 1k batches in Table 6. Note that training for 1k batches is not guaranteed to outperform training for 250 batches, even if we checkpoint every 250 batches, since the learning rate scheduler will have to accommodate for a different number of total batches. Overall, ADAPET gets a boost by training longer, especially on ReCoRD, while sPET peaks at 250 batches.
C.2 Multi-Task Multi-Pattern Training
We also tried training the model with multiple patterns at once, as compared to ensembling and distilling them. We formulated this as a multitask training problem, where different patterns are viewed as different tasks, and the model would sample a pattern to train from each batch. We compare sPET, ADAPET, and ADAPET without the label conditioning objective. The results are shown in Table 7. In general, multi-task multi-pattern training hurts performance for ADAPET, is mixed on sPET, and is beneficial for ADAPET with the label conditioning objective.
C.3 Replacement Token Detection (RTD)
In our formulation, the decoupled label objective can be viewed as a binary classifier that seeks to assign high probability to the correct label token, and low probability to the incorrect label token. In reality though, the model has a softmax classifier head on top that is converted into a one-vs-all classifier. Another way to achieve the same objective would be to use a binary classifier head on top. Rather than feeding in the "[MASK]" token, we would feed in either the correct label token or the incorrect label token, and the model must distinguish whether these tokens make sense in context or not. This objective would be very similar to the RTD objective for ELECTRA (Clark et al., 2020).
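A sketch of that alternative (ours; the module and parameter names are illustrative, not from the paper): instead of reading a softmax over the vocabulary at a [MASK] position, a candidate label token is placed into the input and a randomly initialized binary head scores whether it fits the context, one forward pass per candidate label.

```python
import torch
import torch.nn as nn

class RTDStyleLabelScorer(nn.Module):
    """Binary head over the encoder state at the position of the inserted label token."""
    def __init__(self, encoder, hidden_size: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, label_position: int):
        hidden = self.encoder(input_ids=input_ids).last_hidden_state  # (1, seq_len, hidden)
        score = self.head(hidden[:, label_position, :])               # (1, 1)
        return torch.sigmoid(score)                                   # P(label fits the context)
```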
Inference would be slower since the number of forward passes would scale up by the number of labels. For multi token labels though, because there is not need to condition on other label tokens, the number of forward passes would scale down by the number of tokens in the labels. Table 8 shows the results of using the RTD objective with a binary classifier. Overall, the RTD objective seems to perform worse than the decoupled label objective. There are several reasons why using a RTD head might perform worse. First, the RTD head would have |V | times fewer parameters, but relative to the whole model, the change in number of parameters is not substantial. Second, the softmax classifier has been pretrained, and contains lots of information, which is now lost when we discard the softmax classifier and randomly initialize a binary classifier head from scratch.
We also experiment with using a binary classifier head initialized with ELECTRA, but the results were the same and so we omit them from the table. We note that ALBERT (xxlarge-v2) is a much better performing model than BERT, and ELEC-TRA is more comparable to BERT than ALBERT (xxlarge-v2).
C.4 Label Conditioning with Important Words Masked Out
For the label conditioning component, we randomly mask out tokens in the input text, and the model tries to predict the original token when conditioned on the correct label, and not predict the original token when conditioned on an incorrect label. This makes sense if the masked out token is an influential token that affects the label, like "Yes" in Figure 2a, but makes less sense if the masked out token is an unimportant word like "the". We experiment with only masking out important words, using TFIDF as an approximation of how important a word is. The results are shown in table 9. Overall, using TFIDF as an approximation for masking out important words hurts performance.
C.5 Ensembles
PET/iPET ensemble and distill with unlabeled data. However, it is not clear how beneficial unlabeled data is for ensembling, so we show results of ensembling models trained only on labeled data with different patterns and different seeds. For ensembling, we average the logits across the different models. C.5.1 Across Patterns Table 10 shows our results ensembling across patterns. In general, ensembling across patterns provides mixed results for ADAPET and sPET. This corroborates the finding in Gao et al. (2021) where sometimes the best performing model performs better than ensembling across patterns. Table 11 shows our results ensembling across seeds. We fix the pattern (pattern 1) and train with different seeds. For this experiment, we ensemble across models for seeds 41, 42, 43. From our results in Table 11, we find that ensembling patterns across seeds provides mixed results. Hence, we do not apply ensembling for our final results.
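Logit averaging across the trained models can be sketched as follows (our illustration):

```python
import torch

def ensemble_predict(models, batch):
    """Average logits from models trained with different patterns or seeds, then argmax."""
    with torch.no_grad():
        stacked = torch.stack([m(**batch).logits for m in models], dim=0)
    return stacked.mean(dim=0).argmax(dim=-1)
```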
C.5.2 Across Seeds
C.6 Masking Ratio
We experiment with several different masking schemes, where we mask out a fixed percentage (FIXED), or up to a fixed percentage (VARIABLE) in Table 12. If x is the number of tokens masked out in FIXED masking, we mask out between 1 and x tokens for VARIABLE masking. For the ablation, we tested with multiples of 1.5 for the masking ratio (in addition to 10%), to match the 15% ratio of ALBERT pre-training. From our results in Table 12, we find that 10.5% VARIABLE mask ratio provided the best trade-off between scores for all models. Hence, we choose that for our final experiments in the main paper.
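The two masking schemes can be sketched as below (our illustration; the 10.5% default mirrors the VARIABLE ratio chosen above):

```python
import random

def num_tokens_to_mask(seq_len: int, ratio: float = 0.105, variable: bool = True) -> int:
    budget = max(1, round(ratio * seq_len))                    # FIXED: always mask `budget` tokens
    return random.randint(1, budget) if variable else budget   # VARIABLE: between 1 and `budget`

def randomly_mask(tokens, mask_token: str = "[MASK]", ratio: float = 0.105, variable: bool = True):
    k = num_tokens_to_mask(len(tokens), ratio, variable)
    positions = set(random.sample(range(len(tokens)), k))
    masked = [mask_token if i in positions else t for i, t in enumerate(tokens)]
    return masked, sorted(positions)
```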
C.7 What if we had unlabeled data?
One of the key motivations of our work is to eliminate the need for unlabeled data during few-shot training on language understanding tasks. In this section, we push that limitation of prior methods
Figure 1: Performance of ADAPET vs iPET/PET and GPT-3 on SuperGLUE. While iPET/PET are parameter-efficient, they use ∼9K unlabeled examples in addition to 32 labeled examples per task. ADAPET uses just 32 labeled examples, and performs better than iPET.
Figure 2: We illustrate the training with the two components of ADAPET. Here, the blue boxes refer to the inputs from a task (entailment, in this case). Figure 2a shows the decoupling label objective. The model has to predict the correct and incorrect labels at the masked out position, using a BCE loss over all labels. For the label conditioning objective in Figure 2b, the input text either includes the correct or incorrect label. At a randomly masked out position, the model should predict the original token when the input text has the correct label, and should not predict the original token when the input text has an incorrect label.
• Pattern: h? | ___, p  Verbalizer: yes/no
• Pattern: "h"? | ___, "p"  Verbalizer: yes/no
• Pattern: h? | ___. p  Verbalizer: yes/no
• Pattern: "h?" | ___. "p"  Verbalizer: yes/no
A.1.4 COPA (Roemmele et al., 2011)
Given a premise p, we need to find which of the options c 1 or c 2 is the responsible cause/effect for this task. For effect examples:
• Pattern: "c 1 " or "c 2 "? p, so ___.  Verbalizer: c 1 /c 2
• Pattern: c 1 or c 2 ? p, so ___.  Verbalizer: c 1 /c 2
For cause examples:
• Pattern: "c 1 " or "c 2 "? p, because ___.  Verbalizer: c 1 /c 2
• Pattern: c 1 or c 2 ? p, because ___.  Verbalizer: c 1
• Pattern: s 1 s 2 Does w have the same meaning in both sentences? _.  Verbalizer: yes/no
• Pattern: w. Sense (1) (a) "s 1 " (_ ) "s 2 "  Verbalizer: b/2
A.1.6 WSC (Levesque et al., 2012)
Method | BoolQ Acc. | CB Acc./F1 | COPA Acc. | RTE Acc. | WiC Acc. | WSC Acc. | MultiRC EM/F1a | ReCoRD Acc./F1 | Avg
ALBERT | 55.7 | 68.6 / 49.1 | 63.0 | 50.5 | 41.4 | 81.7 | 3.6 / 49.8 | 84.1 / 83.5 | 57.7
GPT-3 (LAB; SINGLE) | 77.5 | 82.1 / 57.2 | 92.0 ♣ | 72.9 | 55.3 ♣♦ | 75.0 | 32.5 / 74.8 | 89.0 / 90.1 ♣♦ | 73.2
sPET (LAB; SINGLE) | 76.9 | 87.5 / 85.4 | 89.0 | 67.1 | 49.7 | 82.7 ♣♦ | 31.2 / 74.6 | 85.0 / 91.9 | 74.2
ADAPET (LAB; SINGLE) | 80.3 ♣ | 89.3 / 86.8 ♣ | 89.0 | 76.5 ♣♦ | 54.4 | 81.7 | 39.2 / 80.1 ♣♦ | 85.4 / 92.1 | 77.3 ♣♦
PET (LAB + UNLAB; ENSEMBLE) | 79.4 | 85.1 / 59.4 | 95.0 ♦ | 69.8 | 52.4 | 80.1 | 37.9 / 77.3 | 86.0 / 86.5 | 74.1
iPET (LAB + UNLAB; ENSEMBLE) | 80.6 ♦ | 92.9 / 92.4 ♦ | 95.0 ♦ | 74.0 | 52.2 | 80.1 | 33.0 / 74.0 | 86.0 / 86.5 | 76.8

Table 1: Few-shot classification results on SuperGLUE with 32 labeled examples on the dev set. Note, we do not have access to the train split of GPT-3, so we follow the split provided by (Schick and Schütze, 2021b). ♣=BEST SINGLE PATTERN MODEL, ♦=BEST MODEL OVERALL, LAB=LABELED DATA, UNLAB=UNLABELED DATA
Method | BoolQ Acc. | CB Acc./F1 | COPA Acc. | RTE Acc. | WiC Acc. | WSC Acc. | MultiRC EM/F1a | ReCoRD Acc./F1 | Avg
GPT-3 (LAB; SINGLE) | 76.4 | 75.6 / 52.0 | 92.0 ♣♦ | 69.0 | 49.4 | 80.1 | 30.5 / 75.4 | 90.2 / 91.1 ♣♦ | 71.8
ADAPET (LAB; SINGLE) | 80.0 ♣ | 92.0 / 82.3 ♣♦ | 85.4 | 75.0 ♣♦ | 53.5 ♣♦ | 85.6 ♣ | 35.7 / 76.2 ♣ | 85.5 / 86.1 | 76.0 ♣♦
PET (LAB + UNLAB; ENSEMBLE) | 79.1 | 87.2 / 60.2 | 90.8 | 67.2 | 50.7 | 88.4 ♦ | 36.4 / 76.6 ♦ | 85.4 / 85.9 | 74.0
iPET (LAB + UNLAB; ENSEMBLE) | 81.2 ♦ | 88.8 / 79.9 | 90.8 | 70.8 | 49.3 | 88.4 ♦ | 31.7 / 74.1 | 85.4 / 85.9 | 75.4

Table 2: Few-shot classification results on SuperGLUE with 32 labeled examples on the hidden test set. ♣=BEST SINGLE PATTERN MODEL, ♦=BEST MODEL OVERALL, LAB=LABELED DATA, UNLAB=UNLABELED DATA
Table 3 shows our ablation analysis for the loss functions we introduce in this paper. From the results, we see that label conditioning (LC) is extremely beneficial for ADAPET, especially on CB. Comparing our modified decoupled label objective (ADAPET W/O LC) with sPET, we see that it does worse for CB on F1, but does much better on RTE and MultiRC. Next, we compare against LC conditioned only on the correct label. We see that this hurts on BoolQ, but helps on CB. We ablate other model choices in Appendix C.

Method | BoolQ Acc. | CB Acc./F1 | RTE Acc. | MultiRC EM/F1a
ADAPET | 79.4 | 91.1 / 88.1 | 75.1 | 38.6 / 79.8
ADAPET W/O LC | 78.1 | 75.0 / 62.8 | 64.3 | 37.0 / 79.1
ADAPET LC (POS. EX. ONLY) | 75.4 | 83.9 / 80.9 | 72.2 | 31.3 / 76.9
sPET | 77.5 | 75.0 / 72.8 | 57.0 | 26.5 / 73.2

Table 3: Ablation of ADAPET with different components. Best numbers have been bolded. (LC = LABEL CONDITIONING)
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, MLCW'05, page 177-190, Berlin, Heidelberg. Springer-Verlag.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations.
Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mix-
Text: Linguistically-informed interpolation of hid-
den space for semi-supervised text classification. In
Proceedings of the 58th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 2147-
2157, Online. Association for Computational Lin-
guistics.
Christopher Clark, Kenton Lee, Ming-Wei Chang,
Tom Kwiatkowski, Michael Collins, and Kristina
Toutanova. 2019. BoolQ: Exploring the surprising
difficulty of natural yes/no questions. In Proceed-
ings of the 2019 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2924-2936, Min-
neapolis, Minnesota. Association for Computational
Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and
Christopher D. Manning. 2020.
Electra: Pre-
training text encoders as discriminators rather than
generators. In International Conference on Learn-
ing Representations.
Marie-Catherine de Marneffe, Mandy Simons, and Ju-
dith Tonhauser. 2019. The commitmentbank: Inves-
tigating projection in naturally occurring discourse.
Proceedings of Sinn und Bedeutung, 23(2):107-124.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
standing. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4171-4186, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021.
Making pre-trained language models better few-shot
learners. In Proceedings of the 59th Annual Meet-
ing of the Association for Computational Linguistics
and the 11th International Joint Conference on Nat-
ural Language Processing (Volume 1: Long Papers),
pages 3816-3830, Online. Association for Computa-
tional Linguistics.
Suchin Gururangan, Ana Marasović, Swabha
Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A. Smith. 2020. Don't stop pretraining:
Adapt language models to domains and tasks. In
Proceedings of the 58th Annual Meeting of the
Association for Computational Linguistics, pages
8342-8360, Online. Association for Computational
Linguistics.
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth,
Shyam Upadhyay, and Dan Roth. 2018. Looking be-
yond the surface: A challenge set for reading com-
prehension over multiple sentences. In Proceedings
of the 2018 Conference of the North American Chap-
ter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Pa-
pers), pages 252-262, New Orleans, Louisiana. As-
sociation for Computational Linguistics.
Hector J. Levesque, Ernest Davis, and Leora Morgen-
stern. 2012. The winograd schema challenge. In
Proceedings of the Thirteenth International Confer-
ence on Principles of Knowledge Representation
and Reasoning, KR'12, page 552-561. AAAI Press.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, Alban Desmaison, Andreas Kopf, Edward
Yang, Zachary DeVito, Martin Raison, Alykhan Te-
jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang,
Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learn-
ing library. In Advances in Neural Information Pro-
cessing Systems, volume 32. Curran Associates, Inc.
Mohammad Taher Pilehvar and Jose Camacho-
Collados. 2019. WiC: the word-in-context dataset
for evaluating context-sensitive meaning represen-
tations. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 1267-1273, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Alec Radford, Karthik Narasimhan, Tim Salimans, and
Ilya Sutskever. 2018. Improving language under-
standing by generative pre-training.
Colin Raffel, Noam Shazeer, Adam Roberts, Kather-
ine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J. Liu. 2020. Exploring
the limits of transfer learning with a unified text-to-
text transformer. Journal of Machine Learning Re-
search, 21(140):1-67.
Melissa Roemmele, Cosmin Bejan, and Andrew Gor-
don. 2011. Choice of plausible alternatives: An eval-
uation of commonsense causal reasoning.
Timo Schick and Hinrich Schütze. 2021a. Exploiting
cloze-questions for few-shot text classification and
natural language inference. In Proceedings of the
16th Conference of the European Chapter of the As-
sociation for Computational Linguistics: Main Vol-
ume, pages 255-269, Online. Association for Com-
putational Linguistics.
Timo Schick and Hinrich Schütze. 2021b. It's not just
size that matters: Small language models are also
few-shot learners. In Proceedings of the 2021 Con-
ference of the North American Chapter of the Asso-
ciation for Computational Linguistics: Human Lan-
guage Technologies, pages 2339-2352, Online. As-
sociation for Computational Linguistics.
William Taylor. 1953. Cloze procedure: A new tool for
measuring readability. Journalism Bulletin.
Alex Wang, Yada Pruksachatkun, Nikita Nangia,
Amanpreet Singh, Julian Michael, Felix Hill, Omer
Levy, and Samuel Bowman. 2019. Superglue: A
stickier benchmark for general-purpose language un-
derstanding systems. In Advances in Neural Infor-
mation Processing Systems, volume 32. Curran As-
sociates, Inc.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pier-
ric Cistac, Tim Rault, Remi Louf, Morgan Funtow-
icz, Joe Davison, Sam Shleifer, Patrick von Platen,
Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu,
Teven Le Scao, Sylvain Gugger, Mariama Drame,
Quentin Lhoest, and Alexander Rush. 2020. Trans-
formers: State-of-the-art natural language process-
ing. In Proceedings of the 2020 Conference on Em-
pirical Methods in Natural Language Processing:
System Demonstrations, pages 38-45, Online. Asso-
ciation for Computational Linguistics.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong,
and Quoc Le. 2020. Unsupervised data augmenta-
tion for consistency training. In Advances in Neural
Information Processing Systems, volume 33, pages
6256-6268. Curran Associates, Inc.
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng
Gao, Kevin Duh, and Benjamin Van Durme. 2018.
Record: Bridging the gap between human and ma-
chine commonsense reading comprehension. arXiv
preprint arXiv:1810.12885.
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q
Weinberger, and Yoav Artzi. 2021. Revisiting few-
sample BERT fine-tuning. In International Confer-
ence on Learning Representations.
Appendix
A Patterns and Pattern Performances
A.1 Pattern Verbalizer Pairs
We list the patterns and the verbalizers used by
the PET and ADAPET models for the SuperGLUE
dataset here. For improved readability of the pat-
terns, we first list a legend for the different letter
combinations that we use throughout the patterns
and then proceed to enumerate the patterns for each
dataset.
• p: passage/paragraph/pronoun
• q: question
• h: hypothesis
• e: entity
• w: word
• c i : choice i
• s i : sentence i
A.1.1 BoolQ (Clark et al., 2019)
Table 4 :
4Performance of sPET and ADAPET models on the validation set of SuperGLUE for different patterns after training for 1000 batches. The patterns we use are the same as PET (Schick and Schütze, 2021b). Note thatTable 1uses the best pattern (♦) results from this table for each model to report validation set scores.♣ = BEST MODEL FOR EACH PATTERNPattern/
Model
1
2
3
4
5
6
BoolQ
Acc.
sPET
77.5
77.1
73.9
75.6
74.2
66.8
ADAPET 79.4 ♣
78.3 ♣
78.7 ♣
77.7 ♣
78.2 ♣
76.8 ♣
CB
Acc./F1
sPET
75/72.8
85.7/83.5
83.9/68.9
85.7/82.3
-
-
ADAPET 91.1/88.1 ♣ 87.5/85.5 ♣ 87.5/78.7 ♣ 89.3/85 ♣
-
-
COPA
Acc.
sPET
90 ♣
87
-
-
-
-
ADAPET 73
89 ♣
-
-
-
-
MultiRC
F1a/EM
sPET
29.9/72.8
30.2/73.3
23.6/69.0
27.4/72.8
16.1/65.7
23.9/70.3
ADAPET 36.4/79.4 ♣ 36.0/78.6 ♣ 38.1/79.0 ♣ 34.6/77.9 ♣ 33.2/77.8 ♣ 31.4/75.1 ♣
RTE
Acc.
sPET
57
54.5
56.7
71.7
-
-
ADAPET 74.7♣
69.7 ♣
75.1 ♣
73.6 ♣
-
-
WiC
Acc.
sPET
49.8
47.8
49.5
-
-
-
ADAPET 51.1 ♣
49.5 ♣
50.8 ♣
-
-
-
WSC
Acc.
sPET
82.7 ♣
78.8 ♣
79.8
-
-
-
ADAPET 76.9
74
79.8
-
-
-
ReCoRD
Acc./F1
sPET
82.3/91 ♣
-
-
-
-
-
ADAPET 77.4/87.2
-
-
-
-
-
Table 5 :
5Performance of sPET and ADAPET models on the validation set of SuperGLUE for different patterns aftertraining for 250 batches. The patterns we use are the same as PET (Schick and Schütze, 2021b). ♣ = BEST MODEL
FOR EACH PATTERN
Table 7 :
7Comparison of sPET and ADAPET with Multi-Pattern Multi-Task training MPMT = MULTI PATTERN MULTI TASK. Best numbers have been bolded. (LC= LABEL CONDITIONING)BoolQ
CB
RTE MultiRC
Method
Acc.
Acc./F1
Acc. EM / F1a
ADAPET W/O LC
77.8
78.6 / 54.9 71.5 32.5 / 74.8
ADAPET RTD
69.8
82.1 / 80.2 57.8 21.7 / 72.2
Table 8 :
8Comparison of decoupled label objective and with the replacement token detection (RTD) objective. Best numbers have been bolded. (LC= LABEL CON-DITIONING)
BoolQ
CB
RTE MultiRC
Method
Acc.
Acc./F1
Acc. EM / F1a
ADAPET
79.4
91.1 / 88.1 74.7 36.4 / 79.4
ADAPET TFIDF
76.1
76.8/61.8 72.9 31.1 / 77.1
Table 9 :
9Comparison of ADAPET with random masking and masking tokens based on TFIDF. Best numbers have been bolded. (LC= LABEL CONDITIONING)
Table 11 :
11Ensemble of sPET and ADAPET across seeds. Best numbers have been bolded.
We ignore tokens that are common in all labels.
This assumes the context only makes sense with the correct label. Empirically though, we find this to be reasonable.3 The first pattern for each task can be found in App. A.1
Note: for MultiRC and ReCoRD we use 512 tokens as per (Schick and Schütze, 2021b).
AcknowledgmentsThis work was supported in part by ONR Grant N00014-18-1-2871, DARPA YFA17-D17AP00022, and NSF-CAREER Award 1846185. The views contained in this article are those of the authors and not of the funding agency.ReferencesCOPA RTE WiC WSC MultiRCReCoRD Avg MethodAcc. Acc./F1 Acc. Acc. Acc. Acc. EM/F1a Acc./F1 dev aside and seek to know "if" such unlabeled data were available, can ADAPET leverage unlabeled data to improve performance. Instead of adopting the multi-stage iterative approach in iPET, we experiment with pre-training the model on the unlabeled data before fine-tuning on the labeled dataset. This has been shown to improve performance on textclassification tasks previously(Gururangan et al., 2020). Specifically, we experiment with Task Adaptive Pre-training (TAPT) (Gururangan et al., 2020) and pre-train our base LM for 2500 batches on the unlabeled data of FewGLUE. Following that, we fine-tune the models using ADAPET, sPET and regular (CLS-head) fine-tuning on the labeled set. The results can be found inTable 13. For regular fine-tuning, TAPT improves performance on three out of four datasets. However, for sPET and ADAPET, TAPT hurts performance significantly for all datasets. We speculate this is because during TAPT, the model never sees the pattern, and so it hurts pattern-based models. This leaves the question of how to improve pattern-based few-shot methods, like ADAPET, when unlabeled data is available as an open challenge.
Table 6: Performance of the models trained with 250 batches vs 1k batches (BoolQ Acc., CB Acc./F1, RTE Acc., MultiRC EM/F1a).
Table 10: Ensemble of sPET and ADAPET across patterns. We use the best pattern (instead of pattern) numbers for ADAPET and sPET here. (ENS = ensemble, PAT = pattern) Best numbers have been bolded.
(Residual table header: masking ratio vs. BoolQ Acc., CB Acc./F1, RTE Acc., MultiRC EM/F1a.)
| [] |
[
"A machine transliteration tool between Uzbek alphabets",
"A machine transliteration tool between Uzbek alphabets"
] | [
"Ulugbek Salaev \nDepartment of Information Technologies\nUrgench State University\n14, Kh.Alimdjan str\n\nUrgench city\n220100Uzbekistan\n",
"Elmurod Kuriyozov \nDepto. de Computación y Tecnologías de la Información\nFacultade de Informática\nUniversidade da Coruña\nCITIC, Grupo LYS\nCampus de Elviña15071A CoruñaSpain\n",
"Carlos Gómez-Rodríguez \nDepto. de Computación y Tecnologías de la Información\nFacultade de Informática\nUniversidade da Coruña\nCITIC, Grupo LYS\nCampus de Elviña15071A CoruñaSpain\n"
] | [
"Department of Information Technologies\nUrgench State University\n14, Kh.Alimdjan str",
"Urgench city\n220100Uzbekistan",
"Depto. de Computación y Tecnologías de la Información\nFacultade de Informática\nUniversidade da Coruña\nCITIC, Grupo LYS\nCampus de Elviña15071A CoruñaSpain",
"Depto. de Computación y Tecnologías de la Información\nFacultade de Informática\nUniversidade da Coruña\nCITIC, Grupo LYS\nCampus de Elviña15071A CoruñaSpain"
] | [] | Machine transliteration, as defined in this paper, is a process of automatically transforming written script of words from a source alphabet into words of another target alphabet within the same language, while preserving their meaning, as well as pronunciation. The main goal of this paper is to present a machine transliteration tool between three common scripts used in low-resource Uzbek language: the old Cyrillic, currently official Latin, and newly announced New Latin alphabets. The tool has been created using a combination of rule-based and fine-tuning approaches. The created tool is available as an open-source Python package, as well as a web-based application including a public API. To our knowledge, this is the first machine transliteration tool that supports the newly announced Latin alphabet of the Uzbek language. | 10.48550/arxiv.2205.09578 | [
"https://arxiv.org/pdf/2205.09578v1.pdf"
] | 248,887,416 | 2205.09578 | 2115a4a24a016fd995ddeac36b265ea12c568558 |
A machine transliteration tool between Uzbek alphabets
Ulugbek Salaev
Department of Information Technologies
Urgench State University
14, Kh.Alimdjan str
Urgench city
220100Uzbekistan
Elmurod Kuriyozov
Depto. de Computación y Tecnologías de la Información
Facultade de Informática
Universidade da Coruña
CITIC, Grupo LYS
Campus de Elviña15071A CoruñaSpain
Carlos Gómez-Rodríguez
Depto. de Computación y Tecnologías de la Información
Facultade de Informática
Universidade da Coruña
CITIC, Grupo LYS
Campus de Elviña15071A CoruñaSpain
A machine transliteration tool between Uzbek alphabets
transliterationuzbek languagenatural language processinglow-resource language
Machine transliteration, as defined in this paper, is a process of automatically transforming written script of words from a source alphabet into words of another target alphabet within the same language, while preserving their meaning, as well as pronunciation. The main goal of this paper is to present a machine transliteration tool between three common scripts used in low-resource Uzbek language: the old Cyrillic, currently official Latin, and newly announced New Latin alphabets. The tool has been created using a combination of rule-based and fine-tuning approaches. The created tool is available as an open-source Python package, as well as a web-based application including a public API. To our knowledge, this is the first machine transliteration tool that supports the newly announced Latin alphabet of the Uzbek language.
Introduction
The term transliteration is ambiguous, as it refers to two similar tasks of Natural Language Processing (NLP), which differ according to their either inter-language or intra-language nature. More specifically, a transliteration can be described as a process of representing words from one language using the alphabet of another language [1], while the other use of the term stands for the act of transforming words from one alphabet into another alphabet within the same language [2]. We take the latter case as our goal in this work, and present a method for transforming words between three equally-important alphabets of the low-resource Uzbek language.
Uzbek language (native: O'zbek tili) is a low-resource, highly-agglutinative language with null-subject and null-gender characteristics from the Karluk branch of the Turkic language family. It is an official language of Uzbekistan, with more than 30 million speakers inside and around the country, making it the second most widely spoken language among Turkic languages (right after Turkish language) 1 .
The Cyrillic alphabet had been in use for a long time for Uzbek language, until it was replaced with Latin script in 1993 2 (with a reformation in 1995 3 ), which is still an official alphabet. The use of both Cyrillic and Latin alphabets is equally popular in all areas of written language (law, books, web, media, etc.) even these days. Availability of texts in two writing systems make it harder and costlier for NLP researchers and practitioners to work on the language, such as by limiting the amount of collected data for a specific alphabet, or by creating a need to develop language resources and models for both alphabets. Furthermore, there is a new reformation 4 that has been introduced to change all the existing digraphs and replace them with diacritical signs 5 , so every letter in the alphabet would be written with only a single character. Throughout this paper, we refer to this reformed Latin alphabet as "New Latin" alphabet.
Considering the existence of three distinctive alphabets currently in use in the Uzbek language, we propose a methodology to perform the task of transliteration between those three alphabets, which is a combination of basic rule-based character mapping, more sophisticated cross-alphabet specific rules, as well as fine-tuning approaches. Although there are some available web tools that offer transliteration between the Cyrillic and Latin alphabets for Uzbek, none of them offers either open source code or an Application Programming Interface (API) for integration with other tools. Moreover, the only tool of good quality, from the Savodxon project 6 , is commercial, and the free ones are not practical enough to be used, due to poor implementation. In this paper, we also present a publicly available Python code 7 for research integration, together with a web-based tool 8 that also includes an API, which is, to our knowledge, the first ever transliteration tool between all three alphabets.
Related work
One of the very early mentions of machine transliteration was raised by Nida and Taber [3], stating that a problem of "untranslatability" arises when an exact equivalence of meanings is required in translation, rather than a comparative equivalence, so they referred to transliteration to tackle the issue. In early mentions, transliteration was described as a process of representing words from one language using the alphabet of another language, as part of machine translation [1]. Later on, it also has been used for similar purposes, but with intra-language perspective, describing it as a conversion of words from one written script to another one within the same language [2,4].
Instances of early works on transliteration can be Arabic-English names transliteration using a combination of a rule-based system with neural networks [1], and Japanese-English using finite state transducers [5]. Both approaches dealt with phonetic representations of words, which were replaced by a spelling-based approach to achieve higher results, as in the case of the Arabic-English model of [6]. Later modern approaches to transliteration include models with long short-term memories (LSTM) [4], and recurrent neural networks (RNN) [7], which perform equally well. Combination of old rule-based approaches with recent deep-learning methods improves the quality, according to a comparative study [8].
Transliteration between Cyrillic and Latin alphabets of Uzbek language has been done by Mansurovs [9], who used a data-driven approach, by aligning words and training a decision-tree classifier. Among some other NLP work that has been done on low-resource Uzbek language so far, there are a morphological analyzer [10], WordNet type synsets [11], Uzbek stopwords dataset [12], sentiment analysis and text classification [13,14,15], cross-lingual word-embeddings [16], as well as a pretrained Uzbek language model based on the BERT architecture [17].
Methodology
To check the accuracy of our tool, we collected text from the spelling dictionary of Uzbek language [18]. This dictionary is a printed resource that contains about 14K commonly-used words of Uzbek language in Latin and Cyrillic variants. We did not include multiword expressions, because we used word-level evaluation to check the performance analysis, and using them by splitting into single words would create duplications. After also removing words that could not be successfully digitalized using OCR, we ended up with around 9600 words to use in our experiments.
Although the dictionary size is limited, it includes words that are prone to spelling errors between Cyrillic and Latin. Since there is no publicly available data for the New Latin alphabet yet, we transliterated those words from Latin to New Latin, then manually checked resulting words, correcting them where necessary. Manual correction was only possible within our resources thanks to the fact that the majority of words stayed the same as in Latin, we focused only on words that changed their form.
The methodology used in this work is very similar to the work from Mansurovs [9], but we extend it by adding the New Latin alphabet. Additionally, instead of training a classifier, we rely on string replacement techniques for the sake of simplicity and speed. Following are the steps followed by the tool, and the steps that need more detail are explained separately afterwards:
1. Tokenization: feeding text from the source alphabet as a string buffer and splitting it into tokens;
2. Replacement of exceptional words: checking each token (excluding punctuation, emojis, or unrecognized characters) to see if it is, or contains, a word from the exceptional words dataset; if so, replacing it with its target version;
3. Replacement using rules: going through a set of mapping rules specific to the pair of alphabets and the conversion direction, designed for cases where one-to-one character mapping does not apply. Technically, each rule consists of a simple regular expression that looks for a specific sub-string (usually one to three characters long) and replaces it with the desired sub-string (either empty, or one or more characters long);
4. Character mapping: replacing the remaining characters from the source alphabet with those of the target one using a one-to-one mapping. This can also be done with a very simple regular expression that replaces one character with another in a string;
5. Re-uniting: merging the resulting tokens, now in the target alphabet, back together and returning them as a whole string. A minimal sketch of this pipeline is given below.
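To make the five steps concrete, the following is a minimal, hypothetical Python sketch of such a pipeline for the Latin-to-Cyrillic direction. The exceptional-word entries, rules, and character map shown here are illustrative placeholders, not the actual tables used by the tool.

```python
import re

# Illustrative placeholders only (the real tool ships much larger tables).
EXCEPTIONAL = {"kuryer": "курьер"}                      # step 2: exceptional words
RULES = [(re.compile(r"sh"), "ш"),                      # step 3: pairwise rules
         (re.compile(r"ch"), "ч"),
         (re.compile(r"o'"), "ў")]
CHAR_MAP = str.maketrans({"a": "а", "b": "б", "d": "д", "e": "е", "i": "и",
                          "k": "к", "l": "л", "m": "м", "n": "н", "r": "р",
                          "t": "т", "u": "у", "y": "й"})  # step 4: one-to-one mapping

def transliterate_token(token: str) -> str:
    if token.lower() in EXCEPTIONAL:                    # step 2
        return EXCEPTIONAL[token.lower()]
    for pattern, replacement in RULES:                  # step 3
        token = pattern.sub(replacement, token)
    return token.translate(CHAR_MAP)                    # step 4

def transliterate(text: str) -> str:
    # Step 1: split into word / non-word chunks; step 5: re-unite them at the end.
    chunks = re.split(r"([^\W\d_]+(?:'[^\W\d_]+)*)", text)
    return "".join(transliterate_token(c) if i % 2 else c
                   for i, c in enumerate(chunks))

print(transliterate("kuryer keldi"))                    # -> курьер келди
```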
Replacement of exceptional words
This is the step we came up with after applying a fine-tuning approach to the created tool.
There are words that cannot be transliterated using a rule-based approach. Only one-directional transliteration (like from Cyrillic to Latin) may be possible, but it could fail in the opposite direction (like from Latin to Cyrillic). To solve this issue, we extracted words from the collected data that did not provide the same output when transliterated and back-transliterated between different combinations. So far, there are 233 words, with their forms in all three alphabets, that are stored in the tool as an exceptional words database. Some examples of such words can be seen in Table 1. One interesting insight about those words is that they are mostly loan words from the Russian language, and there is usually a change when converting the Cyrillic letters ц, ь (phonetic glottal stop), and я. Although this process was done after the tool's creation, this step has to be applied before any further conversion steps.
Character-mapping
Steps 3 and 4 of the conversion deal with mapping characters from source alphabet to the target one. Although the majority of letters are replaced in a straightforward manner, the remaining characters require set of pairwise rules based on the alphabets involved, and the direction of the conversion. A general idea of conversion between alphabets is given in Table 2.
Throughout the process, we found out that some conversion rules are not as straightforward as expected. There is a problem with handling a single-character uppercase letter when converting to a digraph letter in the other alphabet. For instance, if we convert the Cyrillic uppercase letters Ш and Ю into SH and YU (respectively) in Latin, errors like these happen: "Шўрва" → "SHo'rva" (soup), or "Юлдуз" → "YUlduz" (star). But if we convert them into Sh and Yu instead, then errors with acronyms occur, like these: "АҚШ" → "AQSh" (USA), or "ЮНЕСКО" → "YuNESKO" (UNESCO). A solution to this kind of problem is to consider the surrounding letters when performing the conversion.
Another complicated situation with mapping rules is the phonetic glottal stop (native: Tutuq belgisi), which is also part of the alphabet in the Uzbek language. There are some words in which a glottal stop appears in the Cyrillic form and is omitted in the Latin form. For instance: "факультет" → "fakultet" (faculty), or "кальций" → "kalsiy" (calcium). The problem with this omission is twofold: the algorithm has to be taught whether to omit it or not, and when these words are transliterated back to Cyrillic, the glottal stop has to reappear out of nowhere. A solution to this kind of problem is to include such words in the exceptional words list.
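As an illustration of the first problem, the following hedged sketch (not the tool's actual code) shows one way the Ш → Sh/SH decision could be made by inspecting the neighbouring letters, so that acronyms stay fully uppercase while ordinary capitalized words receive a proper digraph.

```python
import re

def convert_sha(match: re.Match) -> str:
    text, i = match.string, match.start()
    prev_ch = text[i - 1] if i > 0 else ""
    next_ch = text[i + 1] if i + 1 < len(text) else ""
    # If a neighbouring letter is also uppercase, assume we are inside an acronym.
    return "SH" if prev_ch.isupper() or next_ch.isupper() else "Sh"

def cyr_to_lat_sh(text: str) -> str:
    text = re.sub("Ш", convert_sha, text)    # case-aware handling of the uppercase letter
    return text.replace("ш", "sh")           # the lowercase letter is unambiguous

print(cyr_to_lat_sh("Шўрва"))   # Shўрва (other letters handled by the remaining rules)
print(cyr_to_lat_sh("АҚШ"))     # АҚSH
```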
Table 2
Character-level mapping between alphabets for transliteration. Cyr. stands for Cyrillic alphabet, Lat. stands for Latin alphabet, and NewLat. stands for New Latin alphabet. ∅ denotes an empty string. Highlighted rows indicate a complex mapping, where one character from source alphabet is mapped to either two or zero characters from target alphabet. The character at the very end of the table is called a phonetic glottal stop (native: Tutuq belgisi), and although it is not a real letter, still it is considered a part of the Uzbek alphabet.
Cyr.    Lat.    NewLat.
Ғ ғ     G' g'   Ḡ ḡ
Ж ж     J j     J j
С с     S s     S s
Ш ш     Sh sh   Ş ş
З з     Z z     Z z
Т т     T t     T t
Ч ч     Ch ch   Ç ç
И и     I i     I i
У у     U u     U u
Нг нг   Ng ng   Ñ ň
Й й     Y y     Y y
Ф ф     F f     F f
ъ       '/∅     '/∅
К к     K k     K k
Х х     X x     X x
Results
The created tool has been analysed using the collected parallel text data for all three alphabets, comparing the tool's output for each word with the actual expected output. We have calculated micro-averaged F1 scores for each conversion using the metrics module of scikit-learn 9 . F1-scores are calculated at the word level (i.e., by considering words that the system transliterates correctly or incorrectly). Table 3 shows the results between each pair and each direction.
Table 3: Micro-averaged F1 scores of the word-level transliteration process between alphabets. The direction of the transliteration is from the alphabet shown in the row to the alphabet shown in the column.
Alphabets Latin Cyrillic New Latin
Although the analysis has been done using a very limited amount of data, it gives us some insights about the tool's performance. The best performing pair is the Latin → New Latin conversion (0.94 F1 score), because there are only five letters that change during the conversion, with no exceptional cases (to the best of our knowledge), and the errors that still occur are only due to the problem of handling abbreviations. The worst performing pair is Latin → Cyrillic (0.89 F1 score), likely due to the many conversion rules to consider, plus many exceptional cases. Furthermore, it is also possible to see that transliteration to and from the New Latin alphabet performs better than for the other alphabets, which can be explained by the minimum number of conversion rules required compared to its counterparts. More specifically, transliteration between New Latin and Latin requires only 5 specific conversion rules (and no exceptional cases), and 6 rules (plus exceptional cases) between New Latin and Cyrillic, while the same process requires 11 rules (and exceptional cases) for transliteration between the Latin and Cyrillic alphabets.
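The word-level evaluation described above can be reproduced with a few lines of scikit-learn; the sketch below uses toy data and treats every word as its own class, so the micro-averaged F1 reduces to exact-match accuracy over words.

```python
from sklearn.metrics import f1_score

# Toy example: gold (expected) transliterations and the tool's predictions.
gold = ["maktab", "shahar", "o'qituvchi"]
pred = ["maktab", "shahar", "oqituvchi"]      # one word transliterated incorrectly

# A prediction counts as correct only if the whole word matches exactly.
print(f1_score(gold, pred, average="micro"))  # 0.666...
```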
The Python tool created for this work is openly accessible, and can be easily installed using the following command, familiar to the Python community:
pip install UzTransliterator
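A hypothetical usage example is given below; the actual class and method names exposed by the UzTransliterator package may differ, so the project's GitHub README should be treated as the authoritative reference.

```python
# Hypothetical API sketch -- names and arguments are assumptions, not the documented API.
from UzTransliterator import UzTransliterator

obj = UzTransliterator.UzTransliterator()
print(obj.transliterate("мактаб", from_="cyr", to="lat"))  # expected output: "maktab"
```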
The user interface of the created web tool can be seen in Figure 1. There is also a public API based on this tool, and more detailed information about it can be found at the project's GitHub repository.
Discussion
Although the created tool is practical enough to be used for transliteration, there are some certain cases we still have to consider and improve on the go:
• Our database of exceptional cases (a result of the fine-tuning approach) contains only lemmas of words, and due to the highly agglutinative nature of the Uzbek language, words mostly appear as inflections and derivations. For this reason, we have to either store their root forms or add syntactic knowledge to handle all possible forms of lemmas;
• New loan words and proper nouns adopted from other languages might not produce the expected output, thus we have to keep updating the database of exceptional cases;
• We dealt with legal text properly written in the Uzbek language, which is not always the case with user-generated text. Especially, there is a great deal of inconsistency in writing the o' and g' letters in the currently official alphabet due to the use of the apostrophe, which comes in many forms, such as o',o',o', and g',g',g' respectively;
• Due to the lack of texts created in the New Latin alphabet, we worked only with manually created text, which is very limited and requires more analysis as the coverage starts to enlarge.
Conclusion
In this paper, we presented a Python code, a web tool, and an API created for the low-resource Uzbek language that performs machine transliteration between two popularly used Cyrillic and Latin alphabets, as well as a newly reformed version of the Latin alphabet which, according to the governmental decree, all legal texts will have been completely adapted to by year 2023. We have also shown the cases of alphabet-specific problems related to the transliteration between those three scripts that do not allow for a simple character mapping, including ongoing attempts to tackle user-input related issues.
Our future work will be to strengthen the output quality of the current tool by implementing more mapping rules, user input cleaning techniques, as well as integrating a pretrained neural language model that can handle unseen cases. Furthermore, we hope to be able to make a pipeline that can perform useful NLP tasks for Uzbek language, such as tokenization, POS tagging, morphological analysis, and parsing in a foreseen future.
Figure 1: Web interface of the created transliteration tool.
Table 1: Some examples from the exceptional words database where rule-based transliteration does not apply.
Latin         Cyrillic       New Latin     English
aksent        акцент         aksent        accent
budilnik      будильник      budilnik      alarm clock
batalyon      батальон       batalyon      batalion
feldsher      фельдшер       feldşer       paramedic
fransuz       француз        fransuz       french
intervyu      интервью       intervyu      interview
koeffitsient  коэффициент    koeffitsient  coefficient
korrupsiya    коррупция      korrupsiya    corruption
kuryer        курьер         kuryer        courier
medalyon      медальон       medalyon      medallion
oktabr        октябрь        oktabr        october
pavilyon      павильон       pavilyon      pavilion
porshen       поршень        porşen        piston
shpatel       шпатель        şpatel        scraper (putty knife)
cherepitsa    черепица       çerepitsa     roof tile (shingle)
More about Uzbek language: https://en.wikipedia.org/wiki/Uzbek_language 2 Law of the Republic of Uzbekistan "On the introduction of the Uzbek alphabet based on the Latin script" (September 2, 1993 year, reg. number: 931-XII): https://lex.uz/docs/-112286. 3 On Amendments to the Law of the Republic of Uzbekistan "On Introduction of the Uzbek Alphabet Based on the Latin Script" (May 6, 1995 year, reg. number: 71-I): https://lex.uz/docs/-116158. 4 Resolution of the Cabinet of Ministers of the Republic of Uzbekistan "On measures to ensure a gradual transition to the Uzbek alphabet based on the Latin script." (February 10, 2021 year, reg. number: 61): https: //lex.uz/uz/docs/-5281850.5 More about alphabets used in Uzbek language: https://www.omniglot.com/writing/uzbek.htm. 6 https://savodxon.uz/ 7 https://github.com/UlugbekSalaev/UzTransliterator 8 https://nlp.urdu.uz/?menu=translit
https://scikit-learn.org/0.15/modules/classes.html#module-sklearn.metrics
Acknowledgments
This work has received funding from ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), from Xunta de Galicia (ED431C 2020/11), and from Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union (ERDF - Galicia 2014-2020 Program), by grant ED431G 2019/01. Elmurod Kuriyozov was funded for his PhD by El-Yurt-Umidi Foundation under the Cabinet of Ministers of the Republic of Uzbekistan.
Algorithms for arabic name transliteration. M Arbabi, S M Fischthal, V C Cheng, E Bart, IBM Journal of research and Development. 38M. Arbabi, S. M. Fischthal, V. C. Cheng, E. Bart, Algorithms for arabic name transliteration, IBM Journal of research and Development 38 (1994) 183-194.
The transliteration of ottoman turkish for library and general purposes. E Birnbaum, Journal of the American Oriental Society. 87E. Birnbaum, The transliteration of ottoman turkish for library and general purposes, Journal of the American Oriental Society 87 (1967) 122-156.
The theory and practice of [Biblical] translation. E A Nida, C R Taber, BrillE. A. Nida, C. R. Taber, The theory and practice of [Biblical] translation, Brill, 1969.
Sequence to sequence networks for roman-urdu to urdu transliteration. M Alam, S Hussain, 2017 International Multi-topic Conference (INMIC). IEEEM. Alam, S. ul Hussain, Sequence to sequence networks for roman-urdu to urdu transliter- ation, in: 2017 International Multi-topic Conference (INMIC), IEEE, 2017, pp. 1-7.
K Knight, J Graehl, cmp-lg/9704003Machine transliteration. arXiv preprintK. Knight, J. Graehl, Machine transliteration, arXiv preprint cmp-lg/9704003 (1997).
Machine transliteration of names in arabic texts. Y Al-Onaizan, K Knight, Proceedings of the ACL-02 workshop on Computational approaches to semitic languages. the ACL-02 workshop on Computational approaches to semitic languagesY. Al-Onaizan, K. Knight, Machine transliteration of names in arabic texts, in: Proceedings of the ACL-02 workshop on Computational approaches to semitic languages, 2002.
Low-resource machine transliteration using recurrent neural networks. N T Le, F Sadat, L Menard, D Dinh, ACM Transactions on Asian and Low-Resource Language Information Processing. 18TALLIPN. T. Le, F. Sadat, L. Menard, D. Dinh, Low-resource machine transliteration using recurrent neural networks, ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) 18 (2019) 1-14.
Comparison of assorted models for transliteration. S Najafi, B Hauer, R R Riyadh, L Yu, G Kondrak, Proceedings of the Seventh Named Entities Workshop. the Seventh Named Entities WorkshopS. Najafi, B. Hauer, R. R. Riyadh, L. Yu, G. Kondrak, Comparison of assorted models for transliteration, in: Proceedings of the Seventh Named Entities Workshop, 2018, pp. 84-88.
Uzbek cyrillic-latin-cyrillic machine transliteration. B Mansurov, A Mansurov, arXiv:2101.05162arXiv preprintB. Mansurov, A. Mansurov, Uzbek cyrillic-latin-cyrillic machine transliteration, arXiv preprint arXiv:2101.05162 (2021).
Representation of uzbek morphology in prolog. G Matlatipov, Z Vetulani, Aspects of Natural Language Processing. SpringerG. Matlatipov, Z. Vetulani, Representation of uzbek morphology in prolog, in: Aspects of Natural Language Processing, Springer, 2009, pp. 83-110.
Uzwordnet: A lexical-semantic database for the uzbek language. A Agostini, T Usmanov, U Khamdamov, N Abdurakhmonova, M Mamasaidov, Proceedings of the 11th Global Wordnet conference. the 11th Global Wordnet conferenceA. Agostini, T. Usmanov, U. Khamdamov, N. Abdurakhmonova, M. Mamasaidov, Uzword- net: A lexical-semantic database for the uzbek language, in: Proceedings of the 11th Global Wordnet conference, 2021, pp. 8-19.
Automatic detection of stop words for texts in the uzbek language. K Madatov, S Bekchanov, J Vičič, 10.20944/preprints202204.0234.v1doi:10. 20944/preprints202204.0234.v1K. Madatov, S. Bekchanov, J. Vičič, Automatic detection of stop words for texts in the uzbek language, 2022. URL: https://www.preprints.org/manuscript/202204.0234/v1. doi:10. 20944/preprints202204.0234.v1.
Building a new sentiment analysis dataset for uzbek language and creating baseline models. E Kuriyozov, S Matlatipov, Multidisciplinary Digital Publishing Institute Proceedings2137E. Kuriyozov, S. Matlatipov, Building a new sentiment analysis dataset for uzbek language and creating baseline models, in: Multidisciplinary Digital Publishing Institute Proceedings, volume 21, 2019, p. 37.
Investigating the effect of emoji in opinion classification of uzbek movie review comments. I Rabbimov, I Mporas, V Simaki, S Kobilov, International Conference on Speech and Computer. SpringerI. Rabbimov, I. Mporas, V. Simaki, S. Kobilov, Investigating the effect of emoji in opinion classification of uzbek movie review comments, in: International Conference on Speech and Computer, Springer, 2020, pp. 435-445.
Multi-class text classification of uzbek news articles using machine learning. I Rabbimov, S Kobilov, Journal of Physics: Conference Series. IOP Publishing154612097I. Rabbimov, S. Kobilov, Multi-class text classification of uzbek news articles using machine learning, in: Journal of Physics: Conference Series, volume 1546, IOP Publishing, 2020, p. 012097.
E Kuriyozov, Y Doval, C Gomez-Rodriguez, Proceedings of The 12th Language Resources and Evaluation Conference. The 12th Language Resources and Evaluation ConferenceCross-lingual word embeddings for turkic languagesE. Kuriyozov, Y. Doval, C. Gomez-Rodriguez, Cross-lingual word embeddings for turkic languages, in: Proceedings of The 12th Language Resources and Evaluation Conference, 2020, pp. 4054-4062.
B Mansurov, A Mansurov, arXiv:2108.09814Uzbert: pretraining a bert model for uzbek. arXiv preprintB. Mansurov, A. Mansurov, Uzbert: pretraining a bert model for uzbek, arXiv preprint arXiv:2108.09814 (2021).
Sharq" nashriyot-matbaa aksiyadorlik kompaniyasi bosh tahririyati. T Tog'ayev, G Tavaldiyeva, M Akromova, Taskent, UzbekistanO'zbek tilining kirill va lotin alifbolaridagi imlo lug'atiT. Tog'ayev, G. Tavaldiyeva, M. Akromova, O'zbek tilining kirill va lotin alifbolaridagi imlo lug'ati, "Sharq" nashriyot-matbaa aksiyadorlik kompaniyasi bosh tahririyati, Taskent, Uzbekistan, 1999.
| [
"https://github.com/UlugbekSalaev/UzTransliterator"
] |
[
"InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis",
"InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis"
] | [
"Kevin Scaria kscaria@asu.edu \nArizona State University\n\n",
"Himanshu Gupta hgupta35@asu.edu \nArizona State University\n\n",
"Siddharth Goyal sgoyal41@asu.edu \nArizona State University\n\n",
"Saurabh Arjun Sawant ssawan13@asu.edu \nArizona State University\n\n",
"Swaroop Mishra srmishr1@asu.edu \nArizona State University\n\n",
"Chitta Baral cbaral@asu.edu \nArizona State University\n\n"
] | [
"Arizona State University\n",
"Arizona State University\n",
"Arizona State University\n",
"Arizona State University\n",
"Arizona State University\n",
"Arizona State University\n"
] | [] | In this paper, we present InstructABSA, Aspect Based Sentiment Analysis (ABSA) using the instruction learning paradigm for the ABSA subtasks: Aspect Term Extraction (ATE), Aspect Term Sentiment Classification (ATSC), and Joint Task modeling. Our method introduces positive, negative, and neutral examples to each training sample, and instruction tunes the model (Tk-Instruct) the ABSA subtasks, yielding significant performance improvements. Experimental results on the Sem Eval 2014, 15, and 16 datasets demonstrate that InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on the three ABSA subtasks (ATE, ATSC, and Joint Task) by a significant margin, outperforming 7x larger models. In particular, InstructABSA surpasses the SOTA on the Rest14 ATE subtask by 5.69% points, Rest15 ATSC subtask by 9.59% points, and on the Lapt14 Joint Task by 3.37% points. Our results also suggest a strong generalization ability to new domains across all three subtasks 1 . | 10.48550/arxiv.2302.08624 | [
"https://export.arxiv.org/pdf/2302.08624v5.pdf"
] | 257,020,097 | 2302.08624 | 3669f076502a6da23dad96e7583dfe72addaa2a0 |
InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
Kevin Scaria kscaria@asu.edu
Arizona State University
Himanshu Gupta hgupta35@asu.edu
Arizona State University
Siddharth Goyal sgoyal41@asu.edu
Arizona State University
Saurabh Arjun Sawant ssawan13@asu.edu
Arizona State University
Swaroop Mishra srmishr1@asu.edu
Arizona State University
Chitta Baral cbaral@asu.edu
Arizona State University
InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
In this paper, we present InstructABSA, Aspect Based Sentiment Analysis (ABSA) using the instruction learning paradigm for the ABSA subtasks: Aspect Term Extraction (ATE), Aspect Term Sentiment Classification (ATSC), and Joint Task modeling. Our method introduces positive, negative, and neutral examples to each training sample, and instruction tunes the model (Tk-Instruct) the ABSA subtasks, yielding significant performance improvements. Experimental results on the Sem Eval 2014, 15, and 16 datasets demonstrate that InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on the three ABSA subtasks (ATE, ATSC, and Joint Task) by a significant margin, outperforming 7x larger models. In particular, InstructABSA surpasses the SOTA on the Rest14 ATE subtask by 5.69% points, Rest15 ATSC subtask by 9.59% points, and on the Lapt14 Joint Task by 3.37% points. Our results also suggest a strong generalization ability to new domains across all three subtasks 1 .
Introduction
Aspect Based Sentiment Analysis (ABSA) is an important task in understanding fine-grained sentiments in user expressions (Zhang and Liu, 2012). As shown in Figure 1, ABSA extracts aspects and classifies the aspect's sentiment polarity by understanding the author's opinions. Encoder, decoder approaches (Jiang et al., 2019;Zhang and Qian, 2020), that utilize transformer-based models (He et al., 2020;Radford et al., 2019) have been proposed but have limitations like information loss and ignoring semantic labels (Hosseini-Asl et al., 2022;Kamila et al., 2022;Peper and Wang, 2022).
The instruction learning paradigm (Mishra et al., 2022b; Wei et al., 2022b) has significantly improved the reasoning abilities of large language models and has shown impressive results across various tasks (Zhang and Chai, 2021; Ouyang et al., 2022a; Wang et al., 2022b; Lu et al., 2022). Owing to this previous success, we propose InstructABSA for aspect based sentiment analysis (ABSA). Our approach involves further instruction tuning of the Tk-Instruct model (Wang et al., 2022c) to address the three subtasks of ABSA: ATE, ATSC, and Joint Task. We add instruction prompts specific to the downstream ABSA subtasks in the form of task definitions, followed by positive, negative, and neutral examples. The proposed approach for the ATSC subtask is presented in Figure 2.
Figure 1: Illustration of the three ABSA subtasks, where S i is the i th sentence, a i are the aspect terms, sp i is the sentiment polarity, and o i is the opinion term.
Figure 2: The input consists of the instruction prompt and the sentence; the output label is the sentiment polarity for the corresponding aspect. This is done for each ABSA subtask.
We carried out extensive experiments on the SemEval 2014, 15, and 16 datasets (Pontiki et al., 2014, 2015, 2016), which comprise the laptops and restaurants domains. Across the three subtasks in both domains, InstructABSA outperforms SOTA approaches. Specifically, for the 2014 ATE subtask, InstructABSA obtains F1-scores of 92.3 and 92.76 (Lapt14, Rest14), surpassing SOTA by 4.37% and 5.69% points respectively. For the ATSC subtask, InstructABSA attains an accuracy of 84.50 on the Rest15 dataset, exceeding the previous results by 9.59% points. On the Rest14 dataset ATSC subtask, our approach gets a competitive accuracy score of 86.25 compared to the SOTA of 90.86. For the Joint Task, InstructABSA exhibits strong performance and achieves F1-scores of 79.34
2 InstructABSA: Instruction Learning for ABSA
We describe the mathematical formulation of ABSA subtasks and then the proposed approach.
Aspect Term Extraction (ATE)
Given the i th review sentence in the training sample, S i = {w 1 i , w 2 i ,...w n i }, where n is the number of tokens in the sentence, this subtask aims to extract the set of aspect terms A i = a 1 i , a 2 i .., a m i , where m is the number of aspect terms in the sentence S i . The ATE subtask can be formulated as
A i = LM AT E (S i ).
Here, LM denotes Language Model. S i , which is the i th sentence, is passed as the input to the model during training, and the corresponding aspect terms A i in the sentence are the output labels, respectively.
Aspect Term Sentiment Classification (ATSC)
In this subtask, we extract sentiment polarities SP i = sp 1 i , sp 2 i , .., sp m i , where sp k i ∈ [positive, negative, neutral], for each of the m aspect terms in the review sentence S i . As the polarity sp k i for each aspect term a k i is extracted individually, we get k additional training samples for each sentence S i . This subtask is mathematically represented as:
sp k i = LM ATSC (S i , a k i ).
Joint Task
Joint Task is the task of extracting the aspect terms and their corresponding sentiment polarity pairs simultaneously for a given review sentence S i . This subtask is formulated as:
[A i , SP i ] = LM Joint (S i ).
During training, the language model takes in the sentence S i as input and the aspect term -sentiment polarity pair
[A i , SP i ] = {(a k i , sp k i ); a k i ∈ A i , sp k i ∈ SP i }
are the corresponding output labels.
Proposed Approach
By instruction tuning a language model LM using instruction-equipped data, we get instruction tuned model LM Inst . This model is further finetuned on downstream tasks of ABSA. The task is formulated as follows for the ATE subtask:
A i = LM Inst (Inst, S i ), ATSC subtask: sp k i = LM Inst (Inst, S i , a k i ) and Joint Task: [A i , SP i ] = LM Inst (Inst, S i ).
The instruction prompts Inst comprise the task definition, followed by a combination of positive, negative, and neutral examples, which are described in detail in Appendix C.
This section describes the experimental setup for our proposed IntructABSA approach. We use the Tk-Instruct-base-def-pos as the instructiontuned model LM Inst . We use two configurations of instructions as prompts for our experiments. InstructABSA-1 has the instruction prompt that includes the definition of the ABSA subtasks followed by 2 positive examples for the respective task. InstructABSA-2 has the definition followed by 2 positive, negative, and neutral examples.
Dataset: SemEval 2014,15 and 16 datasets are used for our experimentation. The dataset is used as a benchmark for ABSA tasks and has customer reviews from three domains; laptops (Lapt14), hotels (Hotel15), and restaurants (Rest14, Rest15, and Rest16). More details can be found in Appendix B.
Cross-Domain & Joint-Domain Experiments:
We conduct two additional experiments in our analysis. Firstly, we perform cross-domain experiments; we train the model on the Lapt14 and test it on the Rest14 dataset and vice versa. We also do a similar experiment by training with Rest15 and evaluating with Hotel15 dataset. This experiment is performed to check the domain generalizability of the model. Secondly, we perform a joint-domain experiment where we combine Lapt14 and Rest14 datasets while training. To evaluate the performance of this approach, it is tested on both datasets individually. This analysis is done to check how additional data of a different domain help for each ABSA subtask. All the experiments are performed across both instruction prompt configurations and all three subtasks on the above mentioned dataset domains.
Evaluation Metric: We use the F1-score for ATE and Joint Task, and the accuracy metric for ATSC subtask, following previous approaches Luo et al., 2020).
Analysis
In this subsection, we analyze how InstructABSA performs in non-conventional training approaches. Cross-Domain Evaluation: In this experiment, we evaluated the performance of our models in a cross-domain setting, where the models were trained on a train set from one domain and tested on a test set from another domain. The evaluation was performed on all three subtasks for both instruction-tuned models (InstructABSA-1 & 2).
Conclusion
In this work, we proposed InstructABSA, an instruction-tuned modeling approach for the three subtasks of ABSA. Our findings show that InstructABSA surpassed the previous scores using a significantly smaller model than previous approaches. We further analyzed the performance of the approach in cross-domain and joint-domain settings, revealing several interesting findings. Finally, we release our code and hope that our work will encourage further research in this direction.
Limitations
Our study is limited to the Sem Eval 2014, 15, and 16 datasets, that are widely used in recent works. Future studies should include the extension of this work on other ABSA datasets to test the generalizability of our findings. We conducted our experiments using a 200M model, which may limit the applicability of our findings to smaller models. Future studies could consider using even smaller instruction-tuned models to analyze their performance. Our study was conducted using Tk-Instruct models for the English language. As a result, our findings may not be directly applicable to other languages. Future studies should include a multilingual dataset and a multilingual instructiontuned model to investigate the model's performance across different languages.
Ethical Considerations
We acknowledge that the T5 model used in our experiments may have inherent biases due to the pretraining and instruction-tuning data used. While stress testing was not conducted, we believe that from our research no additional issues arise related to privacy, fairness, bias, and discrimination. We Our work directly contributes to the topic of aspect based sentiment analysis and we believe that our work will have a positive impact on the scientific community. We remain dedicated to advancing the responsible use of AI and will continue to prioritize ethical considerations in all our future research endeavors. Table 8: The overall results that report the precision (P), recall (R), and F1 scores of InstructABSA-1 and 2 for the Joint task. The table also includes the results of a vanilla T5 fine-tuned using instruction-equipped data similar to InstructABSA-2. Table 6, 7, 8 reveal the detailed results of all the models that were trained in the process. Model-1 and Model-2 are InstructABSA-1 and 2, respectively. The T5 model in the Table is the base model finetuned with instruction-equipped data. Similar to InstructABSA-2, T5 uses instructions with positive, negative, and neutral examples. ATE and Joint Task show precision, recall, and F1 scores, whereas ATSC shows accuracy and F1. In all three tables, we see base T5 with instruction-equipped data also surpasses SOTA in many cases. Table 9. To maintain consistency with the previous approaches for the ATSC task, we also ignore conflict labels.
References
B Detailed dataset description
C InstructABSA prompt examples
The instruction prompts for InstructABSA-1, and InstructABSA-2 are presented in detail for all three ABSA subtasks. Table 11, 12, and 13 presents the prompts provided for InstructABSA-2 model for the ATE, ATSC, and Joint Task, respectively. For the InstructABSA-1 model, the instruction prompts are similar, with the difference that negative and neutral examples are not provided in the instruction prompts.
D Related Work
LMs and Deep learning methods have been used for plethora of downstream tasks for ling time (Yin et al., 2018;Li et al., 2017;Das, 2015;Gupta et al., 2020Gupta et al., , 2021bHusain et al., 2019;Feng et al., 2020;Vijayakumar et al., 2018). Several recent works have leveraged NLP methods and simple sampling methods for different downstream results (Xu et al., 2018;Alon et al., 2018;Allamanis et al., 2017;Balog et al., 2016;Ogundokun et al., 2022;Kehinde et al., 2022;Gupta et al., 2019). The study of whether existing LMs can understand instructions by Efrat and Levy (2020) Train 1557 930 354 140 43 10 6 3 1 -1 3045 Test 378 266 105 34 10 6 1 ----800 Rest14 Train 1020 1022 572 269 104 30 15 5 3 1 -3041 Test 194 290 186 80 30 14 3 2 --1 800 Rest15 Train 482 576 174 58 22 2 --1 --1315 Test 284 294 82 18 6 -1 ----685 Hotels15 Test 98 135 23 7 2 1 -----266 Rest16 Train 766 868 258 76 28 2 1 -1 --2000 Test 256 298 87 22 9 3 ----1 676 (2022) showed that adding knowledge with instruction helps LMs understand the context better. Wang et al. (2022a) developed an instruction-based multi-task framework for few-shot Named Entity Recognition (NER) tasks. Furthermore, several approaches have been proposed to improve model performance using instructions, including (Wu et al., 2022;Lin et al., 2021;Wang et al., 2022c;Luo et al., 2022;Kuznia et al., 2022;Patel et al., 2022;Mishra and Nouri, 2022;Puri et al., 2022). Several studies are present that show adding knowledge with instruction helps LMs understand the context better (Gupta et al., 2021d).
Task
Aspect Term Extraction (ATE) Definition Definition: The output will be the aspects (both implicit and explicit) which have an associated opinion that is extracted from the input text. In cases where there are no aspects, the output should be noaspectterm.
Positive
Example Input 1: With the great variety on the menu, I eat here often and never get bored.
Example
Example Output 1: menu Example Input 2: Great food, good size menu, great service, and an unpretentious setting. Example output 2: food, menu, service, setting Negative
Negative input 1: They did not have mayonnaise, forgot our toast, Example left out ingredients (ie, cheese in an omelet), below hot temperatures and the bacon was so overcooked it crumbled on the plate when you touched it. Negative output 1: toast, mayonnaise, bacon, ingredients, plate Negative input 2: The seats are uncomfortable if you are sitting against the wall on wooden benches. Negative output 2: seats Neutral Neutral Input 1: I asked for a seltzer with lime, no ice.
Example
Neutral Output 1: seltzer with lime Neutral Input 2: They wouldn't even let me finish my glass of wine before offering another. Neutral Output 2: glass of wine Input Now complete the following exampleinput: My son and his girlfriend both wanted cheeseburgers and they were huge! output: cheeseburgers
Task
Aspect Term Sentiment Classification (ATSC) Definition The output will be 'positive' if the aspect identified in the sentence contains a positive sentiment. If the sentiment of the identified aspect in the input is negative, the answer will be 'negative.' Otherwise, the output should be 'neutral.' For the aspects which are classified as noaspectterm, the sentiment is none.
Positive
Example Input 1: With the great variety on the menu, I eat here often and never get bored.
Example
Aspect: menu Example Output 1: positive Example Input 2: Great food, good size menu, great service, and an unpretentious setting. Aspect: food. Example Output 2: positive Negative Example Input 1: They did not have mayonnaise, forgot our toast, left out ingredients Example (i.e., cheese in an omelet), below hot temperatures and the bacon was so overcooked it crumbled on the plate when you touched it. Aspect: toast Example Output 1: negative Example Input 2: The seats are uncomfortable if you are sitting against the wall on wooden benches. Aspect: seats Example Output 2: negative Neutral Example Input 1: I asked for a seltzer with lime, no ice. Aspect: seltzer with lime Example Example Output 1: neutral Example Input 2: They wouldn't even let me finish my glass of wine before offering another. Aspect: a glass of wine Example Output 2: neutral Input Now complete the following exampleinput: My son and his girlfriend both wanted cheeseburgers and they were huge! Aspect: cheeseburgers. output: positive
Task
Joint Task Definition Definition: The output will be the aspects (both implicit and explicit), and the aspects sentiment polarity. In cases where there are no aspects, the output should be no aspect-tern: none.
Positive
Example Input 1: With the great variety on the menu, I eat here often and never get bored.
Example
Example Output 1: menu:positive Example Input 2: Great food, good size menu, great service, and an unpretentious setting. Example Output 2: food:positive Negative Example Input 1: They did not have mayonnaise, forgot our toast, left out ingredients Example (i.e., cheese in an omelet), below hot temperatures, and the bacon was so overcooked it crumbled on the plate when you touched it. Example Output 1: toast:negative Example Input 2: The seats are uncomfortable if you are sitting against the wall on wooden benches. Aspect: seats Example Output 2: negative Neutral Example Input 1: I asked for a seltzer with lime, no ice.
Example
Example Output 1: seltzer with lime: neutral Example Input 2: They wouldn't even let me finish my glass of wine before offering another. Example Output 2: glass of wine:neutral Input Now complete the following exampleinput: My son and his girlfriend both wanted cheeseburgers and they were huge! output: cheeseburgers: positive Table 13: Illustrating InstructABSA-2 instruction prompting for the joint task.
Figure 2 :
2Formulation of InstructABSA for ATSC task.
Hyperparameters: Model: Tk-Instruct-base-def-pos 2 , GPU: 1x Nvidia Tesla P40, train batch size for ATE and ATSC was set to 16, whereas for the Joint Task it was 8, gradient accumulation steps: 2, initial learning rate: 5e-5, number of epochs: 4.
4 Results and Analysis
4.1 Sub Task Results
Table 1, 2, and 3 denote the results of ATE, ATSC, and Joint Task, respectively. All the results reported are the average values from 5 runs for each experiment. Both InstructABSA-1 and 2 exhibit strong performance across all three subtasks.
Model           Lapt14  Rest14  Rest15  Rest16
GPT2 med        82.04   75.94   -       -
GRACE           87.93   85.45   -       -
BARTABSA        83.52   87.07   75.48   -
IT-MTL          76.93   -       74.03   79.41
InstructABSA-1  91.40   92.76   75.23   81.48
InstructABSA-2  92.30   92.10   76.64   80.32
Table 1: ATE subtask results denoting F1 scores. GPT2 med , GRACE, BARTABSA and IT-MTL results are from Hosseini-Asl et al. (2022), Luo et al. (2020), Yan et al. (2021) and Varia et al. (2022) respectively.
For ATE subtask (Table 1), InstructABSA sur-
passes SOTA by 4.37, 5.69, 1.16, and 2.07 percent-
age points on Lapt14, Rest14, 15, and 16 datasets
respectively. It is to be noted that InstructABSA
is a 200M parameter model whereas Hosseini-Asl
et al. (2022) uses a model with 1.5B parameters.
Model           Lapt14  Rest14  Rest15  Rest16
ABSA-DeBERTa    82.76   89.46   -       -
LSAT            86.31   90.86   -       -
RACL-BERT       73.91   81.61   74.91   -
Dual-MRC        75.97   82.04   73.59   -
InstructABSA-1  80.62   86.25   83.02   89.10
InstructABSA-2  81.56   85.17   84.50   89.43
Table 2: ATSC subtask results denoting accuracy. ABSA-DeBERTa, LSAT, RACL-BERT and Dual-MRC results are from Marcacini and Silva (2021), Yang and Li (2021), Chen and Qian (2020) and Mao et al. (2021) respectively.
For the ATSC subtask (Table 2), InstructABSA-
1 gets a competitive accuracy of 86.25 on the
Rest14 dataset as compared to the best score of
90.86 by LSAT-XdeBERTa (355M parameters).
InstructABSA-2 also establishes the score of 89.43
for the Rest16 dataset.
For the Joint Task (Table 3), both
InstructABSA-1 and 2 surpass SOTA for
the Lapt14 dataset by a significant margin (78.89
and 79.34 as compared to 75.97 obtained by
GRACE). For Rest14, 15, and 16 datasets, our
model surpasses the previous SOTA scores to
get 79.47, 69.39, and 74.24, respectively. Other
Table 3 :
3Results of the Joint Task denoting F1 scores.GPT2 med , SPAN, GAS, GRACE, BARTABSA and IT-
MTL results are from Hosseini-Asl et al. (2022), Hu
et al. (2019), Zhang et al. (2021), Luo et al. (2020), (Yan
et al., 2021) and (Varia et al., 2022) respectively
approaches performing the Joint Task use models,
such as GPT2 med (1.5B) and SPAN (345M)
which are significantly larger than InstructABSA.
Results for all three subtasks containing precision
and recall are present in Appendix A.
Table 4 :
4Results of the cross-domain evaluation where the model is trained on Lapt14 and the test set is of Rest14 and vice versa. The results of the model trained on Rest15 and evaluated on Hotel15 is also reported.
Table 4
4presents the experiment's results. The F1 scores for both models were close when trained on Rest14 and tested on Lapt14 for ATE and Joint Task, with values of 71.98 and 65.30, respectively. For the setting trained on Rest14 and evaluated on Lapt14, the second model performed better than the second in the ATSC task (80.56 vs. 82.44) and showed comparable performance to the results obtained using the same domain train and test sets. When trained on Lapt14 and tested on Rest14, InstructABSA-1 showed a drop in F1-score for the ATE and Joint Task compared to respectively. For the ATSC task, similar trends were obtained with an accuracy of 75.53 from InstructABSA-1 and 80.56 from InstructABSA-2.Task
Model
ATE ATSC Joint
Lapt14
InstructABSA-1 90.35 81.09 80.07
InstructABSA-2 93.28 83.60 80.47
Rest14
InstructABSA-1 88.88 86.42 80.81
InstructABSA-2 93.55 88.03 79.70
Table 5 :
5Results of joint-domain evaluation where the model is trained on both Lapt14 and Rest14 datasets and evaluated on the respective test set.Joint-Domain Evaluation:In this setting, the train data of the domains (laptops and restaurants) are combined to train the model, and it is evaluated on both test sets. The experiment is performed on all three subtasks and for both instruction-tuned models, and the results are presented inTable 5. The availability of additional training data for ATE subtask helps the language models as the proposed model surpasses the previously achieved SOTA. For Lapt14 and Rest14 datasets, the F1-scores of 93.28 and 93.55(Table 5)surpass the SOTA achieved by InstructABSA when modeled as individual ATE subtasks 92.3 and 92.76 (Table 1) respectively. A similar trend is observed in the Joint Task as well. The F1 scores obtained on the Lapt14 and Rest14 data via joint-domain training are 80.47 and 80.81 (Table 5) which surpasses the individual Joint Task scores of 79.34 and 79.47 (Table 3) respectively. For the ATSC subtask accuracies of 83.60 and 88.03 (Table 5) are obtained for the Lapt14 and Rest14 datasets, which are higher as compared to individual ATSC subtask training 81.56 and 86.25 (Table 2) respectively.
Hamel Husain, Hongqi Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Codesearchnet challenge: Evaluating the state of semantic code search. ArXiv, abs/1909.09436. Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A joint training dual-mrc framework for aspect based sentiment analysis. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 13543-13551.Miltiadis Allamanis, Marc Brockschmidt, and Mah-
moud Khademi. 2017.
Learning to repre-
sent programs with graphs.
arXiv preprint
arXiv:1711.00740.
Uri Alon, Meital Zilberstein, Omer Levy, and Eran Ya-
hav. 2018. code2vec: learning distributed representa-
tions of code. Proceedings of the ACM on Program-
ming Languages, 3:1 -29.
Matej Balog, Alexander L Gaunt, Marc Brockschmidt,
Sebastian Nowozin, and Daniel Tarlow. 2016. Deep-
coder: Learning to write programs. arXiv preprint
arXiv:1611.01989.
Zhuang Chen and Tieyun Qian. 2020. Relation-aware
collaborative learning for unified aspect-based sen-
timent analysis. In Proceedings of the 58th Annual
Meeting of the Association for Computational Lin-
guistics, pages 3685-3694, Online. Association for
Computational Linguistics.
Subhasis Das. 2015. Contextual code completion using
machine learning.
Avia Efrat and Omer Levy. 2020. The turking test: Can
language models understand instructions? arXiv
preprint arXiv:2010.11982.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xi-
aocheng Feng, Ming Gong, Linjun Shou, Bing Qin,
Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Code-
BERT: A pre-trained model for programming and
natural languages. In Findings of the Association
for Computational Linguistics: EMNLP 2020, pages
1536-1547, Online. Association for Computational
Linguistics.
Himanshu Gupta, Abhiram Anand Gulanikar, Lov Ku-
mar, and Lalita Bhanu Murthy Neti. 2021a. Em-
pirical analysis on effectiveness of nlp methods for
predicting code smell. In Computational Science and
Its Applications -ICCSA 2021, pages 43-53, Cham.
Springer International Publishing.
Himanshu Gupta, Tanmay Girish Kulkarni, Lov Kumar,
and Neti Lalita Bhanu Murthy. 2020. A novel ap-
proach towards analysis of attacker behavior in ddos
attacks. In Machine Learning for Networking, pages
392-402, Cham. Springer International Publishing.
Himanshu Gupta, Tanmay Girish Kulkarni, Lov Ku-
mar, Lalita Bhanu Murthy Neti, and Aneesh Krishna.
2021b. An empirical study on predictability of soft-
ware code smell using deep learning models. In
Advanced Information Networking and Applications,
pages 120-132, Cham. Springer International Pub-
lishing.
Himanshu Gupta, Lov Kumar, and Lalita Bhanu Murthy
Neti. 2019. An empirical framework for code smell
prediction using extreme learning machine. In 2019
9th Annual Information Technology, Electromechan-
ical Engineering and Microelectronics Conference
(IEMECON), pages 189-195.
Himanshu Gupta, Sanjay Misra, Lov Kumar, and
N. L. Bhanu Murthy. 2021c. An empirical study
to investigate data sampling techniques for improv-
ing code-smell prediction using imbalanced data. In
Information and Communication Technology and Ap-
plications, pages 220-233, Cham. Springer Interna-
tional Publishing.
Himanshu Gupta, Neeraj Varshney, Swaroop Mishra,
Kuntal Kumar Pal, Saurabh Arjun Sawant, Kevin
Scaria, Siddharth Goyal, and Chitta Baral. 2022. "
john is 50 years old, can his son be 65?" evaluat-
ing nlp models' understanding of feasibility. arXiv
preprint arXiv:2210.07471.
Himanshu Gupta, Shreyas Verma, Tarun Kumar, Swa-
roop Mishra, Tamanna Agrawal, Amogh Badugu,
and Himanshu Sharad Bhatt. 2021d. Context-ner:
Contextual phrase generation at scale. arXiv preprint
arXiv:2109.08079.
Peter Hase and Mohit Bansal. 2022. When can mod-
els learn from explanations? a formal framework
for understanding the roles of explanation data. In
Proceedings of the First Workshop on Learning with
Natural Language Supervision, pages 29-39, Dublin,
Ireland. Association for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and
Weizhu Chen. 2020.
Deberta: Decoding-
enhanced bert with disentangled attention. ArXiv,
abs/2006.03654.
Ehsan Hosseini-Asl, Wenhao Liu, and Caiming Xiong.
2022. A generative language model for few-shot
aspect-based sentiment analysis. In Findings of the
Association for Computational Linguistics: NAACL
2022, pages 770-787, Seattle, United States. Associ-
ation for Computational Linguistics.
Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng
Li, and Yiwei Lv. 2019. Open-domain targeted senti-
ment analysis via span-based extraction and classifi-
cation. In Proceedings of the 57th Annual Meeting of
the Association for Computational Linguistics, pages
537-546, Florence, Italy. Association for Computa-
tional Linguistics.
Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and
Min Yang. 2019. A challenge dataset and effec-
tive models for aspect-based sentiment analysis. In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 6280-
6285, Hong Kong, China. Association for Computa-
tional Linguistics.
Sabyasachi Kamila, Walid Magdy, Sourav Dutta, and
MingXue Wang. 2022. AX-MABSA: A framework
for extremely weakly supervised multi-label aspect
based sentiment analysis. In Proceedings of the 2022
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 6136-6147, Abu Dhabi,
United Arab Emirates. Association for Computa-
tional Linguistics.
Adeniyi Jide Kehinde, Abidemi Emmanuel Adeniyi,
Roseline Oluwaseun Ogundokun, Himanshu Gupta,
and Sanjay Misra. 2022. Prediction of students' per-
formance with artificial neural network using demo-
graphic traits. In Recent Innovations in Computing,
pages 613-624, Singapore. Springer Singapore.
Kirby Kuznia, Swaroop Mishra, Mihir Parmar, and
Chitta Baral. 2022. Less is more: Summary of long
instructions is better for program synthesis. In Pro-
ceedings of the 2022 Conference on Empirical Meth-
ods in Natural Language Processing, pages 4532-
4552, Abu Dhabi, United Arab Emirates. Association
for Computational Linguistics.
Jian Li, Yue Wang, Michael R Lyu, and Irwin King.
2017. Code completion with neural attention and
pointer networks. arXiv preprint arXiv:1711.09573.
Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu
Wang, Shuohui Chen, Daniel Simig, Myle Ott, Na-
man Goyal, Shruti Bhosale, Jingfei Du, et al. 2021.
Few-shot learning with multilingual language models.
arXiv preprint arXiv:2112.10668.
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-
Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. 2022. Learn to explain:
Multimodal reasoning via thought chains for science
question answering. In Advances in Neural Informa-
tion Processing Systems.
Huaishao Luo, Lei Ji, Tianrui Li, Daxin Jiang, and Nan
Duan. 2020. GRACE: Gradient harmonized and cas-
caded labeling for aspect-based sentiment analysis.
In Findings of the Association for Computational
Linguistics: EMNLP 2020, pages 54-64, Online. As-
sociation for Computational Linguistics.
Man Luo, Sharad Saxena, Swaroop Mishra, Mihir Par-
mar, and Chitta Baral. 2022. Biotabqa: Instruction
learning for biomedical table question answering.
arXiv preprint arXiv:2207.02419.
Ricardo Marcondes Marcacini and Emanuel Silva. 2021.
Aspect-based sentiment analysis using bert with dis-
entangled attention. LatinX in AI at International
Conference on Machine Learning 2021.
Sewon Min, Mike Lewis, Luke Zettlemoyer, and Han-
naneh Hajishirzi. 2022. MetaICL: Learning to learn
in context. In Proceedings of the 2022 Conference of
the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, pages 2791-2809, Seattle, United States.
Association for Computational Linguistics.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin
Choi, and Hannaneh Hajishirzi. 2022a. Reframing
instructional prompts to GPTk's language. In Find-
ings of the Association for Computational Linguistics:
ACL 2022, pages 589-612, Dublin, Ireland. Associa-
tion for Computational Linguistics.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and
Hannaneh Hajishirzi. 2022b. Cross-task generaliza-
tion via natural language crowdsourcing instructions.
In Proceedings of the 60th Annual Meeting of the
Association for Computational Linguistics (Volume
1: Long Papers), pages 3470-3487, Dublin, Ireland.
Association for Computational Linguistics.
Swaroop Mishra and Elnaz Nouri. 2022. Help me think:
A simple prompting strategy for non-experts to cre-
ate customized content with models. arXiv preprint
arXiv:2208.08232.
A Additional results
Table 6: The overall results that report the precision (P), recall (R), and F1 scores of InstructABSA-1 and 2 for the ATE subtask. The table also includes the results of a vanilla T5 fine-tuned using instruction-equipped data similar to InstructABSA-2.

Model | Lapt14 P | Lapt14 R | Lapt14 F1 | Rest14 P | Rest14 R | Rest14 F1
Vanilla T5 | 93.17 | 88.56 | 90.80 | 95.39 | 90.30 | 92.78
Model-1 | 93.92 | 89.01 | 91.40 | 94.60 | 90.98 | 92.76
Model-2 | 92.84 | 91.76 | 92.30 | 93.49 | 90.75 | 92.10
Table 7: The overall results that report the Accuracy and F1 scores of InstructABSA-1 and 2 for the ATSC subtask. The table also includes the results of a vanilla T5 fine-tuned using instruction-equipped data similar to InstructABSA-2.

Model | Lapt14 Accuracy | Lapt14 F1 | Rest14 Accuracy | Rest14 F1
Vanilla T5 | 80.04 | 79.98 | 81.38 | 80.39
Model-1 | 80.62 | 79.37 | 86.25 | 85.15
Model-2 | 81.56 | 80.84 | 85.17 | 84.47
Joint Task

Model | Lapt14 P | Lapt14 R | Lapt14 F1 | Rest14 P | Rest14 R | Rest14 F1
Vanilla T5 | 79.73 | 75.09 | 77.34 | 79.71 | 74.98 | 77.27
Model-1 | 80.11 | 77.71 | 78.89 | 77.86 | 74.53 | 76.16
Model-2 | 80.42 | 78.29 | 79.34 | 82.45 | 76.7 | 79.47
Table 10 displays the dataset description for the ATE and Joint Task. For the training set, 1557 reviews in Lapt14 and 1020 reviews in Rest14 have no aspect terms and their corresponding polarities. Similarly, in the test set, 378 reviews in Lapt14 and 194 reviews in Rest14 have no aspect terms and corresponding polarities. The dataset description for the ATSC subtask is presented in Table 9.

Table 9: Dataset statistics for the ATSC subtask, denoting the number of samples. Pos., Neg., and Neut. represent Positive, Negative, and Neutral, respectively.

Dataset | Split | Pos. | Neg. | Neut.
Lapt14 | Train | 987 | 866 | 460
Lapt14 | Test | 341 | 128 | 169
Rest14 | Train | 2164 | 805 | 633
Rest14 | Test | 728 | 196 | 196
Rest15 | Train | 912 | 256 | 36
Rest15 | Test | 326 | 182 | 34
Hotel15 | Test | 163 | 45 | 7
Rest16 | Train | 1240 | 439 | 69
Rest16 | Test | 468 | 117 | 30
Table 10: Count of aspects/aspect-sentiment polarity pairs for the Aspect Term Extraction & Joint Task. #k is the count of samples that have k aspects/aspect-sentiment polarity pairs in them. #NO is the number of samples that have no aspect/aspect-sentiment polarity pairs in them.

demonstrate that language models can follow instructions. Weller et al. (2020) developed a framework that focuses on developing NLP systems that solve new tasks after reading their descriptions. Mishra et al. (2022b) proposed natural language instructions for cross-task generalization of LMs. PromptSource and FLAN (Wei et al., 2022a; Sanh et al., 2022) were built to leverage instructions and achieve zero-shot generalization on unseen tasks. Moreover, Parmar et al. (2022) showed the effectiveness of instructions in multi-task settings for the biomedical domain. Mishra et al. (2022a) discussed the impact of task instruction reframing on model response, while Min et al. (2022) introduced a framework to better understand in-context learning. Additionally, Ouyang et al. (2022b) proposed the InstructGPT model, which is fine-tuned with human feedback to follow instructions. Gupta et al.
Table 11: Illustrating InstructABSA-2 instruction prompting for the ATE subtask.
Table 12: Illustrating InstructABSA-2 instruction prompting for the ATSC subtask.
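Tables 11 and 12 illustrate instruction prompting, but the prompt text itself is not reproduced in this section. The sketch below shows, in generic terms, how such an instruction-plus-example prompt could be assembled and passed to a seq2seq model; the instruction wording, in-context example, and decoding settings are illustrative placeholders, not the exact InstructABSA configuration. The base checkpoint name is taken from the footnote above.

```python
# Hypothetical sketch of instruction-style prompting for ATE with a seq2seq model.
# The instruction text and in-context example below are illustrative placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

INSTRUCTION = (
    "Definition: Extract all aspect terms from the review. "
    "Output the terms separated by commas, or 'noaspectterm' if none.\n"
)
EXAMPLE = "Positive example - input: The battery life is great. output: battery life\n"

def build_prompt(review: str) -> str:
    return f"{INSTRUCTION}{EXAMPLE}Now complete the following: input: {review} output:"

tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-base-def-pos")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-base-def-pos")

inputs = tokenizer(build_prompt("The food was bland but the staff were friendly."),
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```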
Experiments and results are available at https://github.com/kevinscaria/InstructABSA
https://huggingface.co/allenai/tk-instruct-base-def-pos
Appendix
Roseline Oluwaseun Ogundokun, Sanjay Misra, Peter Ogirima Sadiku, Himanshu Gupta, Robertas Damasevicius, and Rytis Maskeliunas. 2022. Computational intelligence approaches for heart disease detection. In Recent Innovations in Computing, pages 385-395, Singapore. Springer Singapore.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022a. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022b. Training language models to follow instructions with human feedback. Preprint.
Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, Murad Mohammad, and Chitta Baral. 2022. In-BoXBART: Get instructions into biomedical multi-task learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 112-128, Seattle, United States. Association for Computational Linguistics.
Pruthvi Patel, Swaroop Mishra, Mihir Parmar, and Chitta Baral. 2022. Is a question decomposition unit all we need? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4553-4569, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Joseph J. Peper and Lu Wang. 2022. Generative aspect-based sentiment analysis with contrastive learning and expressive structure. CoRR, abs/2211.07743.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In International Workshop on Semantic Evaluation.
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, Ion Androutsopoulos, Núria Bel, and Gülşen Eryigit. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In International Workshop on Semantic Evaluation.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. Association for Computational Linguistics.
Ravsehaj Singh Puri, Swaroop Mishra, Mihir Parmar, and Chitta Baral. 2022. How many data samples is an additional instruction worth? arXiv preprint arXiv:2203.09161.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Ann John, Rishita Anubhai, Smaranda Muresan, and Dan Roth. 2022. Instruction tuning for few-shot aspect-based sentiment analysis. ArXiv, abs/2210.06629.
Ashwin J. Vijayakumar, Abhishek Mohta, Oleksandr Polozov, Dhruv Batra, Prateek Jain, and Sumit Gulwani. 2018. Neural-guided deductive search for real-time program synthesis from examples. ArXiv, abs/1804.01186.
Liwen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, and Weiran Xu. 2022a. InstructionNER: A multi-task instruction-based generative framework for few-shot NER. arXiv preprint arXiv:2203.03903.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022b. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560.
Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1-22.
Mengwei Xu, Feng Qian, Qiaozhu Mei, Kang Huang, and Xuanzhe Liu. 2018. DeepType: On-device deep learning for input personalization service with minimal privacy concern. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2(4).
Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416-2429, Online. Association for Computational Linguistics.
Heng Yang and Ke Li. 2021. Improving implicit sentiment learning via local sentiment aggregation.
Qinyuan Ye and Xiang Ren. 2021. Learning to generate task-specific adapters from task description. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 646-653, Online. Association for Computational Linguistics.
Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L. Gaunt. 2018. Learning to represent edits. ArXiv, abs/1810.13337.
Lei Zhang and B. Liu. 2012. Sentiment analysis and opinion mining. In Encyclopedia of Machine Learning and Data Mining.
Mi Zhang and Tieyun Qian. 2020. Convolution over hierarchical syntactic and lexical graphs for aspect level sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3540-3549, Online. Association for Computational Linguistics.
Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021. Towards generative aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 504-510, Online. Association for Computational Linguistics.
Yichi Zhang and Joyce Chai. 2021. Hierarchical task learning from language instructions with unified transformers and self-monitoring. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4202-4213, Online. Association for Computational Linguistics.
Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2856-2878, Punta Cana, Dominican Republic. Association for Computational Linguistics.
| [] |
[
"Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference",
"Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference",
"Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference",
"Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference"
] | [
"Galen Weld \nUniversity of Washington\n\n",
"Peter West \nUniversity of Washington\n\n",
"Maria Glenski \nPacific Northwest National Laboratory\n\n",
"David Arbour \nAdobe Research\n\n",
"Ryan A Rossi \nAdobe Research\n\n",
"Tim Althoff \nUniversity of Washington\n\n",
"Galen Weld \nUniversity of Washington\n\n",
"Peter West \nUniversity of Washington\n\n",
"Maria Glenski \nPacific Northwest National Laboratory\n\n",
"David Arbour \nAdobe Research\n\n",
"Ryan A Rossi \nAdobe Research\n\n",
"Tim Althoff \nUniversity of Washington\n\n"
] | [
"University of Washington\n",
"University of Washington\n",
"Pacific Northwest National Laboratory\n",
"Adobe Research\n",
"Adobe Research\n",
"University of Washington\n",
"University of Washington\n",
"University of Washington\n",
"Pacific Northwest National Laboratory\n",
"Adobe Research\n",
"Adobe Research\n",
"University of Washington\n"
] | [] | Causal inference studies using textual social media data can provide actionable insights on human behavior. Making accurate causal inferences with text requires controlling for confounding which could otherwise impart bias. Recently, many different methods for adjusting for confounders have been proposed, and we show that these existing methods disagree with one another on two datasets inspired by previous social media studies. Evaluating causal methods is challenging, as ground truth counterfactuals are almost never available. Presently, no empirical evaluation framework for causal methods using text exists, and as such, practitioners must select their methods without guidance. We contribute the first such framework, which consists of five tasks drawn from real world studies. Our framework enables the evaluation of any casual inference method using text. Across 648 experiments and two datasets, we evaluate every commonly used causal inference method and identify their strengths and weaknesses to inform social media researchers seeking to use such methods, and guide future improvements. We make all tasks, data, and models public to inform applications and encourage additional research. | 10.1609/icwsm.v16i1.19362 | [
"https://arxiv.org/pdf/2009.09961v4.pdf"
] | 221,819,019 | 2009.09961 | bc19a9e75fc6718e3e85b1328e64234b8f3dd691 |
Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference
Galen Weld
University of Washington
Peter West
University of Washington
Maria Glenski
Pacific Northwest National Laboratory
David Arbour
Adobe Research
Ryan A Rossi
Adobe Research
Tim Althoff
University of Washington
Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference
Causal inference studies using textual social media data can provide actionable insights on human behavior. Making accurate causal inferences with text requires controlling for confounding which could otherwise impart bias. Recently, many different methods for adjusting for confounders have been proposed, and we show that these existing methods disagree with one another on two datasets inspired by previous social media studies. Evaluating causal methods is challenging, as ground truth counterfactuals are almost never available. Presently, no empirical evaluation framework for causal methods using text exists, and as such, practitioners must select their methods without guidance. We contribute the first such framework, which consists of five tasks drawn from real world studies. Our framework enables the evaluation of any casual inference method using text. Across 648 experiments and two datasets, we evaluate every commonly used causal inference method and identify their strengths and weaknesses to inform social media researchers seeking to use such methods, and guide future improvements. We make all tasks, data, and models public to inform applications and encourage additional research.
Introduction
The massive volume of social media data offers significant potential to help researchers better understand human behavior by making causal inferences. Researchers often formalize causal inference as the estimation of the average treatment effect (ATE) of a specific treatment variable (e.g. therapy) on a specific outcome (e.g. suicide) (Rubin 2005; Rosenbaum 2010; Keith, Jensen, and O'Connor 2020). A major challenge is adjusting for confounders (e.g. comments mentioning depression) that affect both the treatment and outcome (depression affects both an individual's propensity to receive therapy and their risk of suicide) (Keith, Jensen, and O'Connor 2020). Without adjusting for depression as a confounder, we might look at suicide rates among therapy patients and those not receiving therapy, and wrongly conclude that therapy causes suicide.
The gold standard for avoiding confounders is to assign treatment via a randomized controlled trial (RCT). Unfortunately, in many domains, assigning treatments in this manner is not feasible (e.g. due to ethical or practical concerns). Instead, researchers conduct observational studies (Rosenbaum 2010), using alternate methods to adjust for confounders.
Galen Weld and Peter West contributed equally to this work.
Figure 1: Causal graph representing the context of our evaluation framework. All edges have known probabilities. While our framework naturally generalizes to more complex scenarios, we chose binary treatments (T) and outcomes (Y), and a binary latent confounder (class), as even in this simple scenario, current methods struggle.
Text (e.g. users' social media histories) can be used to adjust for confounding by training an NLP model to recognize confounders (or proxies for confounders) in the text, so that similar treated and untreated observations can be compared. However, a recent review (Keith, Jensen, and O'Connor 2020) finds that evaluating the performance of such methods is "a difficult and open research question" as true AT Es are almost never known, and so, unlike in other NLP tasks, we cannot know the correct answer. We find that this challenge is amplified, as methods disagree with one another on real world tasks ( §3) -how do we know which is correct?
Theoretical bounds on the performance of methods are almost never tight enough to be informative. We derive such bounds for the methods included here (Appendix A 1 ) and find that our empirical evaluation framework produces tighter bounds more than 99% of the time. As ground truth is almost never available, the only 2 practical method to evaluate causal inference methods is with semi-synthetic data, where synthetic treatments and/or outcomes are assigned to real observations, as in Fig. 1 (Dorie et al. 2019;Jensen 2019;Gentzel, Garant, and Jensen 2019). While widelyused semi-synthetic benchmarks have produced positive results in the medical domain (Dorie et al. 2019), no such benchmark exists for causal inference methods using text (Keith, Jensen, and O'Connor 2020).
In this work, we contribute the first evaluation framework for causal inference with text ( §5). Our framework is simple, principled, and can be applied to any method that produces an AT E estimate given observed treatment assignments, outcomes, and text covariates. It includes a broad range of tasks challenging enough to identify where current methods fail and the most promising avenues for improvement. However, no single benchmark can adequately evaluate methods for every application. As such, our framework can be easily extended to include additional tasks relevant to any application and potential confounder ( §5.3).
Inspired by challenges from a wide range of studies (Johansson, Shalit, and Sontag 2016;De Choudhury et al. 2016;Choudhury and Kiciman 2017;Falavarjani et al. 2017;Olteanu, Varol, and Kiciman 2017;Kiciman, Counts, and Gasser 2018;Sridhar et al. 2018;Saha et al. 2019;Veitch, Sridhar, and Blei 2019;Roberts, Stewart, and Nielsen 2020), our framework consists of five tasks ( §5.2): Linguistic Complexity, Signal Intensity, Strength of Selection Effect, Sample Size, and Placebo Test. Each semi-synthetic task is generated from public social media users' profiles, perturbed with synthetic posts to create increasing levels of difficulty. To increase the robustness of our approach, we evaluate methods on these tasks using real data from both Reddit and Twitter. Not all of these tasks are exclusive to causal inference with text, yet all are important to a great deal of textual causal inference studies. As such, their evaluation in this context is important. Using these tasks, we evaluate the specific strengths and weakness of 9 widely-used methods and 3 common estimators, conducting 648 experiments.
Concerningly, we find that almost every method predicts a false significant treatment effect when none is present, which could be greatly misleading to unwary practitioners ( §7). While we find that each method struggles with at least one challenge, methods leveraging recent, hierarchical, transformer-based architectures perform best, although such methods are not yet widely used (Keith, Jensen, and O'Connor 2020). These limitations and findings highlight the importance of continued research on the evaluation of causal inference methods for text.
The ICWSM community consists of researchers who work both on the development of casual inference methods, and practitioners who solve real world problems using causal inference. For methods developers: We make our framework publicly available 3 to enable the evaluation of any causal inference method which uses text, and encourage the development of more robust methods. Our framework can be easily extended to include additional tasks. For practitioners: We identify strengths and weaknesses of commonly used methods, identifying those best suited for 2 Background and Related Work Causal Inference with Social Media Data We formalize causal inference using notation from Pearl (1995). Given a series of n observations (in our context, a social media user), each observation is a tuple O i = (Y i , T i , X i ), where Y i is the outcome (e.g. did user i develop a suicidal ideation?), T i is the treatment (e.g. did user i receive therapy?), and X i is the vector of observed covariates (e.g. user i's textual social media history).
The Fundamental Problem of Causal Inference is that each user is either treated or untreated, and so we can never observe both outcomes. Thus, we cannot compute the
AT E = 1 n n i=1 Y i [T i = 1]−Y i [T i = 0]
directly, and must estimate it by finding comparable treated and untreated observations. To do so, it is common practice to use a model to estimate the propensity score,p(X i ) ≈ p(T i = 1|X i ), for each observation i. As treatments are typically known, propensity score models are effectively supervised classifiers, predicting T i , given X i . Matching, stratifying, or weighting using these propensity scores will produce an unbiased AT E estimate if four assumptions hold: all confounders must be observed, outcomes for each user must not be affected by treatment assignments to other users (SUTVA), propensity scores must be accurate, and there must be overlap in the distribution of covariates in the treated and untreated groups (common support assumption) (Rosenbaum 2010; Hill and Su 2013). In practice, verifying these assumptions is difficult. In particular, ensuring that all confounding factors are observed is practically impossible in real world applications, hence the need for empirical evaluation.
Causal Inference and NLP Until recently, there has been little engagement between causal inference researchers and the NLP research community (Keith, Jensen, and O'Connor 2020). There are many ways to consider text in a causal context, such as text as a mediator (Veitch, Sridhar, and Blei 2019;Landeiro and Culotta 2016), text as treatment (Wood-Doughty, Shpitser, and Dredze 2018;Egami et al. 2018;Fong and Grimmer 2016;Tan, Lee, and Pang 2014;Zhang, Mullainathan, and Danescu-Niculescu-Mizil 2020), text as outcome (Egami et al. 2018;Zhang et al. 2018), and causal discovery from text (Mani and Cooper 2000;Mirza and Tonelli 2016). However, we narrow our focus to text as a confounder. This is an important area of research because the challenge of adjusting for confounding underlies most causal contexts, such as text as treatment or outcome (Keith, Jensen, and O'Connor 2020). Effective adjusting for confounding with text enables causal inference in any situation where observations can be represented with text -e.g. social media, news articles, and dialogue.
Adjusting for Confounding with Text A recent review (Keith, Jensen, and O'Connor 2020, Table 1) summarizes common practices across a diverse range of studies. Almost every method used in practice consists of two parts: a propensity score model, which uses some text represen- Figure 2: Treatment accuracy and AT E for both real world experiments, with bootstrapped 95% confidence intervals. Note that for the Gender Experiment, the models with the highest accuracy have the lowest AT E.
tation to estimate propensity scores, and an ATE estimator. Since such propensity-score based methods are by far the most widely used, in this work, we focus on these methods. While we do not evaluate non-propensity score methods such as doubly-robust methods (Kang and Schafer 2007), TMLE (Schuler and Rose 2017), and matching on values other than propensity scores (Roberts, Stewart, and Nielsen 2020;Mozer et al. 2020), our framework's structure enables evaluation of any AT E estimation method that produces an AT E estimate given observed treatment assignments, outcomes, and text covariates. We believe disentangling the challenges of widely-used methods is a key contribution before moving to more complex and less common methods. Evaluating these other methods is an important area of future work.
Text representations used in propensity score models generally do not yet leverage recent breakthroughs in NLP, and roughly fall into three groups: those using uni-and bigram representations (De Choudhury et al. 2016;Johansson, Shalit, and Sontag 2016;Olteanu, Varol, and Kiciman 2017), those using LDA or topic modeling (Falavarjani et al. 2017;Roberts, Stewart, and Nielsen 2020;Sridhar et al. 2018), and those using neural word embeddings such as GLoVe (Pham and Shen 2017), fastText (Joulin et al. 2017;Chen, Montano-Campos, and Zadrozny 2020), or BERT (Veitch, Sridhar, and Blei 2019), (Pryzant et al. 2018). Three classes of estimators are commonly used to compute the AT E: inverse probability of treatment weighting (IPTW), propensity score stratification, and matching, either using propensity scores or, less frequently, some other distance metric. In our evaluation, we separate the propensity score models from the ATE estimators to better understand each component's individual impact.
Evaluation of Causal Inference In rare specialized cases, researchers can use the unbiased outcomes of a parallel RCT to evaluate those of an observational study, as in Eckles and Bakshy (2017). This practice is known as a constructed observational study, and, while useful, is only possible where parallel RCTs can be conducted. Outside these limited cases, proposed models are typically evaluated on synthetic data generated by their authors. These synthetic datasets often favor the proposed model, and do not reflect the challenges faced by real applications (Keith, Jensen, and O'Connor 2020).
Theoretical evaluation of causal inference methods is generally unhelpful, as theoretically derived performance bounds are mostly much less informative than those derived empirically (Arbour and Dimmery 2019). In this work, we compute the theoretical bounds and find that they are so loose as to not effectively guide practitioners in selecting methods. Our empirical evaluation framework based on realistic tasks produces tighter bounds in more than 99% of cases (Appendix A).
Outside of the text domain, widely used empirical evaluation datasets have been successful, most notably the 2016 Atlantic Causal Inference Competition (Dorie et al. 2019), and a strong case has been made for the empirical evaluation of causal inference models (Gentzel, Garant, and Jensen 2019;Jensen 2019;Lin et al. 2019). In the text domain, matching approaches have been evaluated empirically (Mozer et al. 2020), but this approach evaluates only the quality of matches, not the causal effect estimates. In contrast, our work applies to all estimators, not just matching, and evaluates the entire causal inference pipeline.
Current Models Disagree
Recent causal inference papers (Veitch, Sridhar, and Blei 2019;Roberts, Stewart, and Nielsen 2020;De Choudhury et al. 2016;Chandrasekharan et al. 2017;Bhattacharya and Mehrotra 2016) have used social media histories to adjust for confounding. Each of these papers uses a different propensity score model: BERT in Veitch, Sridhar, and Blei (2019), topic modeling in Roberts, Stewart, and Nielsen (2020), logistic regression in De Choudhury et al. (2016), Mahalanobis distance matching in Chandrasekharan et al. (2017), and Granger Causality in Bhattacharya and Mehrotra (2016). For all of these studies, ground truth causal effects are unavailable, and so we cannot tell if the chosen model was correct. However, we can compute the accuracy of their propensity scores (accuracy of a binary classifier predicting treatment assignment), and see if their AT E estimates agree-if they don't, then at most one disagreeing model can be correct.
Methods We conducted two experiments using real world data from Reddit, inspired by these recent papers. In the Moderation Experiment, we test if having a post removed by a moderator impacts the amount a user later posts to the same community again. In the Gender Experiment, we use data from Veitch, Sridhar, and Blei (2019) to study the impact of the author's gender on the score of their posts. For details on data collection, see Appendix B.
Results Comparing the performance of nine different methods (Fig. 2), we find that all models have similar treatment accuracy in the Moderation Experiment. However, the models using 1,2-gram features perform better in the Gender Experiment than the LDA and SHERBERT models. Most importantly, we see that while many confidence intervals 4 overlap, there are notable differences between ATE estimates for different models, even when treatment accuracy is nearly identical (Fig. 2a,b). That some ATE estimates' confidence intervals overlap 0 while others do not (Fig. 2b) indicates that some models find nonzero treatment effects at the common p = 0.05 threshold while others do not. Lastly, we note that models with the highest treatment accuracy tend to have the lowest ATE estimates (Fig. 2c,d).
Implications This should come as a great concern to the computational social science research community. We do not know which model may be correct, and we do not know whether there may be a more accurate model that would even further decrease the estimated treatment effect. We derive theoretical bounds and compute them (Appendix A), finding that in 99+% of cases, these bounds are looser than those computed empirically using our framework, making them less useful for model selection. This concern underlines the importance and urgency of empirical evaluation for causal inference with text, and motivates our contribution. Next, we describe key challenges in adjusting for confounding with text and present a principled evaluation framework that highlights these challenges and generates actionable insights for future research.
Challenges for Causal Inference with Text
Using the common setting of real social media histories (De Choudhury et al. 2016;Olteanu, Varol, and Kiciman 2017;Veitch, Sridhar, and Blei 2019;Choudhury and Kiciman 2017;Falavarjani et al. 2017;Kiciman, Counts, and Gasser 2018;Saha et al. 2019;Roberts, Stewart, and Nielsen 2020), we identify five challenges consistently present when representing natural language for causal inference:
1. Linguistic Complexity: Natural language uses a diverse set of tokens to express related underlying meaning. Someone who struggles with mental health might write "I feel depressed" or "I am isolated from my peers," which have distinct tokens but both may be indicative of depression. Can models recognize a range of expressions which are correlated with treatment? 2. Signal Intensity: Some users only have a few posts that contain a specific signal (such as poor mental health) whereas others may have many posts with this signal. Signals are especially weak when posts containing the signal constitute only a small fraction of a user's posts. Can models detect weak signals?
3. Strength of Selection Effect: Many studies have few comparable treated and untreated users (Li, Thomas, and Li 2018;Crump et al. 2009). Can models adjust for strong selection effects? 4. Sample Size: Observational studies often face data collection limitations. 5 Can models perform well with limited data samples? 5. Placebo Test: Oftentimes, no causal effect is present between a given treatment and an outcome. Do models falsely predict causality when none is present? While natural language is far more complex than any finite set of challenges can capture, the five we have chosen to highlight are challenges that regularly need to be addressed in many causal inference applications that use natural language. This set of challenges was developed by reviewing a broad set of existing studies (Johansson, Shalit, and Sontag 2016;De Choudhury et al. 2016;Choudhury and Kiciman 2017;Falavarjani et al. 2017;Olteanu, Varol, and Kiciman 2017;Kiciman, Counts, and Gasser 2018;Sridhar et al. 2018;Saha et al. 2019;Veitch, Sridhar, and Blei 2019;Roberts, Stewart, and Nielsen 2020) and identifying commonalities. While the strength of selection effect, sample size, and placebo test challenges are not exclusive to causal inference with text, these challenges are present in most real world studies, and as such, a holistic evaluation framework must consider them. These five challenges also cover three key concepts of model performance: generalizability (linguistic complexity), sensitivity (signal intensity, strength of selection effect), and feasibility (sample size, placebo test) that are critical for comprehensive evaluation. To produce our evaluation framework, we derive a concrete task from each challenge.
Framework for Evaluation
We generate five tasks, each with discrete levels of difficulty, and corresponding semi-synthetic task datasets based on real social media histories. Without the semi-synthetic component, it would not be possible to empirically evaluate a model, as we would not know the true AT E or propensity scores. By basing our user histories on real data, we are able to include much of the realism of unstructured text found 'in the wild.' This semi-synthetic approach to evaluation preserves the best of both worlds: the empiricism of synthetic data with the realism of natural data (Jensen 2019; Gentzel, Garant, and Jensen 2019; Jensen 2019).
Semi-Synthetic Dataset Generation
While the method for generating a semi-synthetic dataset can be arbitrarily complex, we seek the simplest approach which is able to identify and explain where existing methods fail. We generate our datasets according to a simplified model of the universe; where all confounding is present in the text, and where there are only two types of people, class 1 and class 2 (Fig. 1). In the context of mental health, for example, these two classes could simply be people who struggle with depression (class 1), and those Figure 3: Users are first randomly divided into two latent (unobserved) classes with a 50/50 split, and their text histories have synthetic posts inserted specific to each task. Observed binary treatments and outcomes are assigned with conditional probabilities such that Class 1 has a true AT E of .8, and Class 2 has a true AT E of 0. Since the classes are balanced, the overall true AT E is .4. who don't (class 2). If models struggle on even this simple two-class universe, as we find, then it is highly unlikely they will perform better in the more complex real world. In this universe, the user's (latent) class determines the probability of treatment and outcome conditioned on treatment. Dependent on class, but independent of treatment and outcome is the user's comment history, which contains both synthetic and real posts that are input to the model to produce propensity scores. As such, the comment history is an observed proxy for the class confounder.
We produce each dataset using a generative process, as shown in Fig. 3. For each task, we start with the same collection of real world user histories from public Reddit or Twitter profiles. We randomly assign an equal number of users to class 1 and class 2. Into each profile, we insert synthetic posts using a function f n for class n specific to each task, described in §5.2. We assign binary treatments (conditioned on class) and binary outcomes (conditioned on class and treatment) according to a known probability distribution (Fig. 3). These outcomes and treatments could represent anything of interest, and they need not be binary.
To estimate the AT E, there must be overlap between the treated and untreated groups (common support), so we cannot make all users in class 1 treated and all users in class 2 untreated. Instead, users in class 1 are predominantly but not always assigned to treatment (with a .9/.1 split), and users in class 2 are predominantly but not always assigned to control (also with a .9/.1 split), in order to ensure overlap of covariates between the treated and control groups. This overlap provides common support, and thus our observations do not necessitate trimming (Crump et al. 2009;Lee, Lessler, and Stuart 2011;Yang and Ding 2018).
Once a treatment has been assigned according to the class' probabilities, a positive outcome is assigned with probability .9 (treated) and .1 (untreated) for class 1, and .9 regardless of treatment for class 2. Thus, class 1 has a true AT E of .8, and class 2 has a true AT E of 0. Since the the two classes are balanced, the overall true AT E is .4. The objective for propensity score models is to recover the treatment probabilities for each class, which are then used to estimate the true AT E.
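The generative process just described can be summarized in a few lines of code. The sketch below follows the probabilities in Fig. 3 (50/50 class split, .9/.1 treatment assignment, and the stated outcome probabilities), with the post-insertion functions f1 and f2 left abstract; it is a sketch of the described procedure, not the released implementation.

```python
# Sketch of the semi-synthetic generative process (probabilities follow Fig. 3).
import random

def generate_observation(history, f1, f2):
    cls = random.choice([1, 2])                      # latent class, 50/50 split
    text = f1(history) if cls == 1 else f2(history)  # insert synthetic posts
    p_treat = 0.9 if cls == 1 else 0.1               # class-dependent selection effect
    t = int(random.random() < p_treat)
    if cls == 1:
        p_outcome = 0.9 if t == 1 else 0.1           # true ATE of .8 for class 1
    else:
        p_outcome = 0.9                              # true ATE of 0 for class 2
    y = int(random.random() < p_outcome)
    return text, t, y                                # the class itself stays unobserved
```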
Real World User Histories In order to maximize generalizability, we experiment with both Reddit and Twitter user histories as the real world component of our semi-synthetic datasets. Reddit and Twitter are natural data sources as they are both publicly accessible, and widely used for relevant research (Medvedev, Lambiotte, and Delvenne 2019;Yu and Muñoz-Justicia 2020).
We downloaded all Reddit comments for the 2014 and 2015 calendar years from the Pushshift archives (Baumgartner et al. 2020) and grouped comments by user. After filtering out users with fewer than 10 comments, we randomly sampled 8,000 users and truncated users' histories to a maximum length of 60 posts for computational practicality. 6 These users were randomly partitioned into three sets: a 3,200 user training set, an 800 user validation set, and a 4,000 user test set used to compute Treatment Accuracy and AT E Bias.
To gather Twitter data, we used a similar method as that used for Reddit. We used the Streaming API to produce a random sample of all public tweets posted in December 2020, then randomly sampled users from this sample and used the Twitter API to gather their complete Tweet histories. As with Reddit histories, we filtered out users with fewer than 10 Tweets, and truncated the histories of users with more than 60 Tweets, then randomly partitioned the resulting users into a training set, a validation set, and a test set, each of the same size as their Reddit data counterparts. 7
Synthetic Posts When generating semi-synthetic tasks, we insert three types of synthetic posts, representative of major life events that could impact mental health, into real users' Reddit and Twitter histories. Examples are given here, and are listed completely in Appendix C:
• Sickness Posts describe being ill (e.g. 'The doctor told me I have AIDS', 'How do I tell my parents I have leukemia?'). We vary both the illness, as well as way the it is expressed. • Social Isolation Posts indicate a sense of isolation or exclusion. ('I feel so alone, my last friend said they needed to stop seeing me.', 'My wife just left me.') • Death Posts describe the death of companion (e.g. 'I just found out my Mom died', 'I am in shock. My son is gone.'). We vary the phrasing as well as the companion. While these synthetic posts are drawn from the mental health domain, which is commonly represented among previous studies (De Choudhury et al. 2016;Choudhury and Kiciman 2017;Saha et al. 2019), the applicability of our framework is not limited to this specific context. These synthetic posts test a model's ability to recognize different tokens, and the specific tokens used are less critical. Furthermore, the concrete choice of mental health language has the benefit of making the user histories human-readable. Our framework can be easily extended by modifying the synthetic posts to include tokens from different domains ( §5.3).
Tasks
We consider five tasks focused around the previously described common challenges for text-based causal inference methods.
Linguistic Complexity This task tests a model's ability to recognize a diverse set of tokens as being correlated with treatment. We increase the difficulty in four steps by increasing the diversity of synthetic sentences inserted into user histories assigned to class 1 (i.e. the linguistic complexity of the dataset): f 1 initially appends the same Sickness Post to the end of each class 1 user's history; At the second level of difficulty, f 1 selects a Sickness Post uniformly at random; At the third level, f 1 selects either a Sickness or Social Isolation Post; and at the fourth level, f 1 selects a Sickness, Social Isolation, or Death Post. For each level of difficulty, f 2 is the identity function, i.e. user histories assigned to class 2 are unchanged.
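A sketch of how f1 could be implemented at the four difficulty levels is shown below; the post pools are abbreviated stand-ins for the full lists in Appendix C.

```python
# Sketch of f1 for the four Linguistic Complexity levels (post pools abbreviated).
import random

SICKNESS = ["The doctor told me I have AIDS", "How do I tell my parents I have leukemia?"]
ISOLATION = ["I feel so alone, my last friend said they needed to stop seeing me."]
DEATH = ["I just found out my Mom died"]

def f1(history, level):
    if level == 1:
        post = SICKNESS[0]                            # always the same post
    elif level == 2:
        post = random.choice(SICKNESS)                # random sickness post
    elif level == 3:
        post = random.choice(SICKNESS + ISOLATION)    # two post types
    else:
        post = random.choice(SICKNESS + ISOLATION + DEATH)  # all three types
    return history + [post]
```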
Signal Intensity This task tests a model's ability to distinguish between the number of similar posts in a history. There are two levels of difficulty. At the easier level, f 1 appends 10 randomly sampled (with replacement) Sickness Posts, while f 2 is the identity function. At the harder level, f 1 appends only three Sickness Posts, while f 2 appends one.
Strength of Selection Effect
In this and the following tasks, we do not vary f 1 or f 2 . For Strength of Selection Effect, we make causal inference more challenging by increasing the strength of the selection effect, decreasing the overlap between treated and untreated users. We test two levels of difficulty: a weaker selection effect (easier) with the same .9/.1 split to assign the majority of class 1 to the treated group and class 2 to the control group. For the stronger selection effect (harder), we modify the generation framework to increase this split for class 1 to .95/.05. For both the weak and strong selection effects, we use f 1 to append a single random Sickness Post and f 2 as the identity function. Outcome probabilities, conditioned on treatment, are identical to previous tasks. Sample Size In this task, we test how the models' performance drops off as the amount of available training data is reduced. 8 As before, we use f 1 to append a single random Sickness Post and f 2 as the identity function. For the easiest case, we train on all 3,200 users' histories in the training set. We then create smaller training sets by randomly sampling subsets with 1,600 and 800 users.
Placebo Test
The final task assesses a model's tendency to predict a treatment effect when none is present. To do so, we must have asymmetric treatment probabilities between class 1 and class 2. Without this asymmetry, the unadjusted estimate would be equal to the true AT E of zero. We use the same asymmetric class 1 treatment split as in the Strength of Selection Effect task.
We set P (Y = 1|T = 0, class=1) = .05, P (Y = 1|T = 1, class=2) = .95, and the opposite for Y = 0. This gives a treatment effect of +.9 to class 1 and a treatment effect of -.9 to class 2, making the true AT E for the entire task equal 0. As in previous tasks, f 1 appends one random Sickness Post and f 2 is the identity function.
A potential limitation of these tasks may be the placement of synthetic posts at the end of histories, or differences in the length of histories. However, in §7.1 we show that these potential limitations are very unlikely to affect the validity of the evaluation. Furthermore, the framework may be extended (§5.3) to further evaluate methods on these aspects.
Generalizability of Evaluation Framework
A key tenet of our evaluation framework is its extensibility. Here, we provide a summary of how the framework can be extended, with step-by-step instructions for adding continuous outcome values and new tasks. Additional details and resources are available on our website. 9
Non-binary Treatments and Outcomes The evaluation framework presented here tests methods' ability to handle five distinct challenges which are relevant to many real world studies. It can be applied to any causal inference method which uses text to adjust for confounding. As our framework is the first such framework, we focus on the simplest and most broadly applicable cases: binary treatments and outcomes, as these are the most commonly used in practice. Out of 14 recent causal inference studies, 13 used binary treatments (Keith, Jensen, and O'Connor 2020). While the methods evaluated here have been generalized to handle observations with continuous treatment values (i.e., dose-response observations) (Hirano and Imbens 2004), these methods are rarely used in practice; we are not aware of a single such application which uses text data. However, our framework trivially generalizes to cases with continuous outcome values, and can be easily modified to include continuous treatments and multiple confounders, including unobserved confounders.
To modify the framework to use continuous outcomes:
1. Select a random distribution to use to assign outcome values, conditioned on treatment. A straightforward option would be to use normal distributions with σ² = 0.3 and mean = .1 for (class 1, T = 0), and mean = .9 for (class 1, T = 1), (class 2, T = 0), and (class 2, T = 1). Any distribution may be used that provides reasonable common support as well as sufficient confusion and selection effects (a minimal sketch of this step is shown after this list).
2. Train propensity score models as with binary outcomes.
3. Compute ATE estimates using IPTW, matching, or stratification estimators, all of which can be applied to continuous outcomes without modification.
4. Compute bias by taking the difference from the true ATE, which can be derived from the known treatment and outcome probability distributions.
Steps 2-4 are identical to the existing framework, as previously described.
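As a concrete illustration, here is a minimal sketch of step 1, assuming the normal distributions suggested above; with equally sized classes this specification implies a true ATE of (0.8 + 0)/2 = 0.4, matching the binary tasks. Names and structure are illustrative, not the framework's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = np.sqrt(0.3)                     # sigma^2 = 0.3 for every cell
MEANS = {("class1", 0): 0.1,             # only (class 1, T=0) has the low mean
         ("class1", 1): 0.9,
         ("class2", 0): 0.9,
         ("class2", 1): 0.9}

def sample_outcome(user_class, treatment):
    """Draw a continuous outcome Y conditioned on class and treatment."""
    return rng.normal(MEANS[(user_class, treatment)], SIGMA)
```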
Types of Confounding It is difficult or impossible to know exactly what types of confounders threaten the accuracy of real world causal inferences, and even simply making a reasonable guess requires substantial domain expertise (Rosenbaum 2010). As such, our evaluation framework simulates confounders which are common to many topics of interest to the ICWSM community, where specific 'indicator' passages or posts in a longer text document are correlated with a treatment and/or outcome of interest. These 'indicator' confounders have been shown to be common in the mental health (Choudhury and De 2014), personal identity (Haimson 2018), and fake news (Talwar et al. 2019) domains. While our five tasks focus on these common types of confounders, tasks can be easily added to cover other types, for example by adding a task which applies some rewriting transformation (e.g. desirability (Wang and Culotta 2019; Pryzant et al. 2018) or gender (Wang and Culotta 2019)) to the histories of class 1 and/or class 2. To add an additional task which applies a rewriting transformation:
1. Write a new f 1 function which, for example, replaces all occurrences of the token 'happy' with 'sad.'
2. Write a new f 2 function which, for example, replaces all occurrences of the token 'sad' with 'happy' (a sketch of both rewriting functions follows this list).
3. Use the same treatment and outcome probabilities as used for other tasks, or, optionally, modify these probabilities to decrease or increase the overlap between the treated and untreated groups, as in the Strength of Selection Effect task.
4. Train and evaluate methods' treatment accuracy and bias of the ATE using the known true ATE.
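A minimal sketch of steps 1 and 2, assuming the 'happy'/'sad' example above; any other rewriting transformation (e.g. for desirability or gender) could be substituted in the same way.

```python
import re

def f1(history):
    """Class 1 rewriting: replace every occurrence of 'happy' with 'sad'."""
    return [re.sub(r"\bhappy\b", "sad", post) for post in history]

def f2(history):
    """Class 2 rewriting: replace every occurrence of 'sad' with 'happy'."""
    return [re.sub(r"\bsad\b", "happy", post) for post in history]
```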
Causal Inference Pipeline
So many methods for adjusting for confounding with text have been proposed recently that it is not possible to evaluate every one in a single paper. Instead, we focus on the most commonly used methods, those based upon propensity scores. A recent review found that propensity score-based methods are used in 12/13 recent studies (Keith, Jensen, and O'Connor 2020, Table 1). Evaluating less commonly used methods (such as doubly robust methods (Mayer et al. 2019)) is an important area of future work, and our evaluation framework can be applied to any method for adjusting for confounding with text. The methods we evaluate here consist of three parts: a text representation, a propensity score model, and an ATE estimator.
Text Representations & Propensity Score Models
The Oracle uses the true propensity scores, which are known in our semi-synthetic evaluation framework (Fig. 3).
The Oracle provides an upper bound on model performance, only differing from the theoretical optimum due to finite sample effects.
We include an Unadjusted Estimator, which uses the naive method of not adjusting for selection effects, producing an estimated treatment effect of $\bar{Y}_{T=1} - \bar{Y}_{T=0}$, and as such is a lower bound for models that attempt to correct for selection effects.
We train a Simple Neural Net (with one fully connected hidden layer) in four variants with different text representations: 1-grams with a binary encoding, 1,2-grams with a binary encoding, 1,2-grams with counts, and Latent Dirichlet Allocation (LDA) features (Blei, Ng, and Jordan 2003) based on 1,2-grams, counted. We also train Logistic Regression models on the same four text representations. Vocabulary sizes for n-gram methods are included in Appendix D.
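For illustration, a minimal sketch of one cell of this grid (binary 1,2-gram features with logistic regression) might look as follows; the feature settings (minimum count of 10, word-level tokenization) follow Appendix E.3, while the function and variable names are ours.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def fit_propensity_model(histories, treatments):
    """histories: one concatenated text string per user; treatments: 0/1 labels."""
    vectorizer = CountVectorizer(ngram_range=(1, 2), binary=True, min_df=10)
    X = vectorizer.fit_transform(histories)
    model = LogisticRegression(max_iter=1000).fit(X, treatments)
    propensities = model.predict_proba(X)[:, 1]   # estimated P(T = 1 | text)
    return vectorizer, model, propensities
```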
Finally, we propose and evaluate a novel cauSal HiERarchical variant of BERT, which we call SHERBERT. SHERBERT expands upon Causal BERT, proposed by Veitch, Sridhar, and Blei (2019), which is too computationally intensive to scale to user histories containing more than 250 tokens, let alone ones orders of magnitude longer, such as in our tasks. In SHERBERT, we use one pretrained BERT model per post to produce a post-embedding (Appendix E.1 Fig. 5), followed by two hierarchical attention layers to produce a single embedding for the entire history, with a final linear layer to estimate the propensity score. This architecture is similar to HIBERT (Zhang, Wei, and Zhou 2019), but is faster to train on long textual histories, as SHERBERT fixes the pretrained BERT components. More details on SHERBERT are given in Appendix E.
Average Treatment Effect Estimators
We consider three commonly used ATE estimators: IPTW, stratification, and matching. All three estimators use propensity scores but differ in how they weight or group relevant samples.
Inverse Propensity of Treatment Weighting estimates the ATE by weighting each user by their relevance to selection effects:

$$ATE_{\text{IPTW}} = \sum_{i=1}^{n} \frac{(2T_i - 1)\, Y_i}{\hat{p}_{T_i}(X_i) \cdot \sum_{j=1}^{n} \frac{1}{\hat{p}_{T_j}(X_j)}}$$

where $T_i$, $Y_i$, and $X_i$ are the treatment, outcome, and features for sample $i$, and $\hat{p}_T(X)$ is the estimated propensity for treatment $T$ on features $X$. Use of the Hájek (1970) estimator adjustment improves stability compared to simple inverse propensity weighting.
Stratification divides users into strata based on their propensity score, and the ATE for each is averaged:

$$ATE_{\text{strat}} = \frac{1}{n} \sum_{k} n_k \cdot ATE_k$$

where $n$ is the total number of users, $n_k$ is the number of users in the $k$-th stratum, and $ATE_k$ is the unadjusted ATE within the $k$-th stratum. We report results on 10 strata divided evenly by percentile, but results are qualitatively similar for other numbers of strata.
Matching can be considered a special case of stratification, where each stratum contains only one treated user. We find that matching produces extremely similar results to stratification, and therefore we include details of our matching approach and results in Appendix F.1.
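The following sketch shows how the two main estimators can be computed from estimated propensity scores. The IPTW function uses the standard Hájek (self-normalized) form, which may differ slightly in normalization from the exact implementation used in our experiments; the stratified function follows the percentile-based strata described above. Function names and structure are illustrative.

```python
import numpy as np

def ate_iptw(y, t, p_hat):
    """Hajek-style IPTW estimate; y, t, p_hat are numpy arrays, p_hat = P(T=1|X)."""
    w1 = t / p_hat                              # weights for treated users
    w0 = (1 - t) / (1 - p_hat)                  # weights for control users
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

def ate_stratified(y, t, p_hat, n_strata=10):
    """Average the unadjusted ATE within propensity-score strata, weighted by size."""
    edges = np.percentile(p_hat, np.linspace(0, 100, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, p_hat, side="right") - 1, 0, n_strata - 1)
    total, n = 0.0, len(y)
    for k in range(n_strata):
        mask = strata == k
        if mask.sum() == 0 or t[mask].sum() in (0, mask.sum()):
            continue                            # stratum has no treated or no control users
        ate_k = y[mask][t[mask] == 1].mean() - y[mask][t[mask] == 0].mean()
        total += mask.sum() * ate_k
    return total / n
```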
Metrics for Evaluation
Our semi-synthetic tasks are generated such that we know the true ATE and thus can compute the Bias of ATE. A bias of zero is optimal, indicating a correct estimated ATE. The greater the bias, positive or negative, the worse the model performance. This is the primary metric we use in evaluation, and we compute it for both $ATE_{\text{strat}}$ and $ATE_{\text{IPTW}}$. We also consider Treatment Accuracy, the accuracy of the propensity score model's predictions of binary treatment assignment. While higher accuracy is often better, high accuracy does not guarantee low bias and is often instead indicative of strong selection effects. Furthermore, we include two additional metrics in Appendix F.2: first, the Mean Squared Error of IPTW weights, which captures the calibration of propensity score probabilities and resulting weights, and second, the Spearman Rank Correlation. In some cases, even if absolute propensity scores are incorrect, their relative rank may still contain useful information that could be exploited in stratification-based methods. The Spearman Correlation measures the correlation between the true and estimated propensity scores for each model, with a value of 1 indicating a perfect ranking.
Results of Evaluation
We apply all five tasks of our evaluation framework to both Reddit and Twitter social media data. We found virtually identical results between the two datasets (e.g. Treatment Accuracies within 1.9% of one another), and so for brevity we report primarily on Reddit results here. Complete Twitter results and discussion are included in Appendix G.
Transformers better model relevant linguistic variation
The Linguistic Complexity task exhibits many trends that also manifest in the other tasks, including treatment accuracy clustering by text representation (Fig. 4a). SHERBERT performs well, with uni- and bi-gram methods somewhere in between it and the LDA methods. Accuracy correlates fairly well with bias (Fig. 4b,c). As in nearly all tasks, LDA methods perform worst, not even outperforming the unadjusted estimator. This is likely because LDA uses an unsupervised (treatment-agnostic) method to generate a compressed feature set that can miss key features when they comprise only a small part of the overall user history.
Transformer models struggle with counting and ordering The Signal Intensity task requires methods to effectively 'count' the number of posts to distinguish between classes. Here, n-gram methods outperform SHERBERT (Fig. 4e,f). This suggests that order embeddings are an important inclusion for future transformer-based methods. LDA methods perform slightly better than unadjusted, due to the stronger presence of tokens correlated with treatment.
High accuracy often reflects strong selection effects, not low ATE bias In the Strength of Selection Effect task, we decrease the overlap in propensity scores between treated and untreated users, which makes it easier to distinguish between the two groups. We see corresponding increases in Treatment Accuracy (Fig. 4g); however, bias worsens (Fig. 4h,i). In Appendix F.2, we also consider the Spearman Correlation, which evaluates propensity scores not on their absolute accuracy but on their relative ranking, as in theory it is possible for inaccurate propensity scores to carry useful information. We expect the Stratified and Matching Estimators to perform better than IPTW in these cases, e.g. in the Signal Intensity task, where the stratified estimate has lower bias than IPTW (Fig. 4e,f) and the Spearman Correlation is higher than Treatment Accuracy at the harder difficulty level (Appendix F.2 Fig. 7d,f).
In the context of observational studies, methods with high treatment accuracy should be used with extreme caution: high accuracy likely reflects treated and control groups that are too disjoint for any meaningful comparison to be drawn. In this case, the common support assumption is violated, preventing causal inference. This highlights the importance of empirical evaluation of the complete causal inference pipeline.
Transformer models fail with limited data The Sample Size task explores methods' performance on small datasets, a common occurrence in real world applications. Generally, SHERBERT performs quite well. In this task, SHERBERT outperforms other methods when trained on the full 3,200 observation training set, but its bias and accuracy quickly deteriorate to worse than n-gram features when training data is reduced (Fig. 4j,k,l). When data is especially scarce, practitioners should carefully consider the data-hungry nature of modern transformer architectures even when they are pretrained. A more sophisticated model is not always the best choice. Furthermore, transfer learning and other means to reduce training data requirements are an important area of future work for causal inference method developers.
Methods estimate non-zero ATEs when the true ATE is zero Alarmingly, in the Placebo Test, every method except SHERBERT's stratified estimate failed to include the (correct) null hypothesis (ATE = 0) in its 95% confidence interval across both datasets (Fig. 4n,o), including high-accuracy methods using bigram features (Fig. 4m). This corresponds to incorrectly rejecting the null hypothesis of no causal treatment effect at the common p-value threshold of 0.05. This result is of greatest concern, as 8/9 methods falsely claim a non-zero effect.
Text representations and propensity score models have greater impact than estimators Each estimator evaluated produced overall similar results (Fig. 6), with the quality of the propensity scores being far more impactful. Methods often cluster based on their text representations, with bigram representations generally performing better than unigrams, which generally perform better than LDA representations. There was generally little difference between Logistic Regression and Simple NN methods when trained on the same text representation. However, the choice of an ATE estimator is still important. IPTW is more sensitive to extreme or miscalibrated propensity scores. This is visible in the Strength of Selection Effect task, where the confidence intervals for IPTW are much larger than for the stratified estimator (Fig. 4h,i).
Potential Threats to Evaluation Framework Validity
Many of our evaluation tasks (§5.2) append more synthetic posts to class 1 histories than class 2 histories. As a result, class 1 histories will be 1-3 posts longer, in expectation, than those from class 2. Relative to the lengths of these histories (mean 41.1, standard deviation 19.3), this is a small difference, but in principle, a causal inference method may be able to pick up on the length of the history, an artifact of our framework, rather than the textual clues provided by the synthetic posts themselves. In practice, however, evidence from the Signal Intensity task suggests that this is very unlikely, as every method tested, including SHERBERT, struggles to differentiate between histories by counting the number of synthetic posts (Fig. 4e,f). Thus, if we replaced a random post with a synthetic post, instead of appending, we would find very similar results. As methods improve and are better able to differentiate sequence lengths, such a random replacement strategy would be a reasonable and straightforward extension to our framework. Additionally, our evaluation tasks (§5.2) always append synthetic posts to users' histories. We conducted additional experiments using an 'Order of Text' task (Appendix H) to evaluate widely used methods' ability to represent the order of posts in a user's history, and find that every method evaluated, including SHERBERT, completely fails to represent order. This is mostly a result of currently used text representations, primarily n-grams, which aggregate across histories by counting occurrences of tokens. For brevity, details of this task are included in Appendix H. We make this task public, along with all other tasks, to assist in the evaluation of future methods. We invite extensions and adaptations of our framework.
Implications & Conclusions
Causal inferences are difficult to evaluate in the absence of ground truth causal effects -a limitation of virtually all real world observational studies. Despite this absence, we can compare different methods' estimates and demonstrate that different methods regularly disagree with one another.
Empirical evaluation requires knowledge of the true treatment effects. Our proposed evaluation framework is reflective of five key challenges for causal inference in natural language, and is easily extensible to include different forms of confounders, different synthetic text content, and non-binary treatments and outcomes ( §5.3).
Our goal with this work is not to unilaterally pronounce one method as superior to another. Instead, we hope that methods developers and practitioners will comprehensively consider the challenges of making valid causal inferences we have described here, as well as assumptions they may be relying upon, and will use our framework to evaluate their methods empirically. To this end, we evaluate every commonly used propensity score method to produce key insights:
For methods developers, we find that continued development of transformer-based models offers a promising path towards rectifying deficiencies of existing models. Models are needed that can effectively represent the order of text, variability in expression, and the counts of key tokens. Given the limited availability of training data in many causal inference applications, more research is needed in adapting pretrained transformers to small data settings (Gururangan et al. 2020). We hope our public framework 10 will provide a principled method for evaluating future NLP models for causal inference. For practitioners, we find that transformer-based methods such as SHERBERT, which we make publicly available, 9 perform the best in most cases except those with very limited data. Propensity score models with high accuracy should be applied with great care, as this is likely indicative of a strong and unadjustable selection effect. Many methods failed our placebo test by making false causal discoveries, a major problem (Aarts et al. 2015;Freedman, Cockburn, and Simcoe 2015).
Adjusting for Confounders with Text: Challenges and an Empirical Evaluation Framework for Causal Inference Appendices
A Theoretical Bounds
We leverage recent results of Arbour and Dimmery (2019) to bound the expected bias of the ATE, $\bar{Y}[T=1] - \bar{Y}[T=0]$, by considering the weighted risk of the propensity score:

$$\left|\mathbb{E}[\hat{Y}(T)] - \mathbb{E}[Y(T)]\right| \leq \mathbb{E}\left[\frac{Y}{p(T|X)}\sqrt{\frac{S(p(T|X), \hat{p}(T|X))}{\hat{p}(T|X)^{2}}}\right]$$

where $\hat{p}$ and $p$ are the estimated and true propensity scores, and $S$ is the Brier score (Brier 1950). Conceptually, this bound suggests that the bias grows as a function of the Brier score between estimated and true propensity score (numerator), and the inverse of the squared estimate of the propensity score, significantly penalizing very small scores.
Findings We compute these bounds using the estimated propensity score and find that they are largely uninformative in practice. In 250/252 cases, the empirical confidence interval (Fig. 4) provides a tighter bound than the theoretical bound, and in 230/252 cases the Unadjusted Estimator also provides a tighter bound than the theoretical bound. These results again highlight the importance of the principled empirical evaluation framework presented here.
Details of Derivation
The central challenge is estimating the error of the counterfactual quantities, $Y(1)$ and $Y(0)$. Recall that in the case of weighting estimators, when the true propensity score ($p(\cdot)$) is available, these are estimated as

$$\mathbb{E}[y(T)] = \mathbb{E}\left[\frac{Y}{p(T)}\right],$$
where y is the observed outcome. For the problem addressed in this paper, the propensity must be estimated. Estimating the error for each potential outcome under an estimated propensity score results in a bias of
$$\mathbb{E}[\hat{Y}(T)] - \mathbb{E}[Y(T)] = \mathbb{E}\left[\frac{Y}{\hat{p}(T|X)}\right] - \mathbb{E}\left[\frac{Y}{p(T|X)}\right]$$
following Proposition 1 of Arbour and Dimmery (2019).
More concretely, an empirical upper bound can be obtained for Equation 1 given a lower bound on the true propensity score. Specifically, replacing the p with the lower bound and using the weighted cross-validated Brier score will provide a conservative bound on the bias of the counterfactual. This bound can be tightened with further assumptions, for example by assuming instance level bounds on p instead of a global bound. Balancing weights may also be used to estimate the bias directly using only empirical quantities (Arbour and Dimmery 2019).
Note that due to the evaluation framework in this paper, the true propensity score p is known, and therefore we do not need to apply loose bounds.
$$
\begin{aligned}
\mathbb{E}[\hat{Y}(T)] - \mathbb{E}[Y(T)]
&= \mathbb{E}\left[\frac{y}{\hat{p}(T)} - \frac{y}{p(T)}\right] \\
&= \mathbb{E}\left[\frac{y}{p(T) + (\hat{p}(T) - p(T))} - \frac{y}{p(T)}\right] \\
&= \mathbb{E}\left[\frac{y}{p(T) + (\hat{p}(T) - p(T))} - \frac{y\left(1 + \tfrac{1}{p}(\hat{p}(T) - p(T))\right)}{p(T)\left(1 + \tfrac{1}{p}(\hat{p}(T) - p(T))\right)}\right] \\
&= \mathbb{E}\left[\frac{y}{p(T) + (\hat{p}(T) - p(T))} - \frac{y + \tfrac{y}{p}(\hat{p}(T) - p(T))}{p(T) + (\hat{p}(T) - p(T))}\right] \\
&= \mathbb{E}\left[\frac{\tfrac{y}{p}\,(p(T) - \hat{p}(T))}{\hat{p}(T)}\right] \\
&\leq \mathbb{E}\left[\frac{y}{p}\sqrt{\frac{(\hat{p}(T) - p(T))^{2}}{\hat{p}(T)^{2}}}\right] \\
&\leq \mathbb{E}\left[\frac{y}{p(T)}\sqrt{\frac{S(\hat{p}(T), p(T))}{\hat{p}(T)^{2}}}\right] \qquad (1)
\end{aligned}
$$
After obtaining the bounds on the individual counterfactual quantities, the corresponding lower and upper bias bounds on the average treatment effect can be constructed by considering

$$\hat{Y}(0) + \mathbb{E}\left[\frac{y}{p(T=0|X)}\sqrt{\frac{S(p(T=0|X), \hat{p}(T=0|X))}{\hat{p}(T=0|X)^{2}}}\right] \qquad (2)$$
$$\hat{Y}(1) - \mathbb{E}\left[\frac{y}{p(T=1|X)}\sqrt{\frac{S(p(T=1|X), \hat{p}(T=1|X))}{\hat{p}(T=1|X)^{2}}}\right] \qquad (3)$$

and

$$\hat{Y}(0) - \mathbb{E}\left[\frac{y}{p(T=0|X)}\sqrt{\frac{S(p(T=0|X), \hat{p}(T=0|X))}{\hat{p}(T=0|X)^{2}}}\right] \qquad (4)$$
$$\hat{Y}(1) + \mathbb{E}\left[\frac{y}{p(T=1|X)}\sqrt{\frac{S(p(T=1|X), \hat{p}(T=1|X))}{\hat{p}(T=1|X)^{2}}}\right] \qquad (5)$$
respectively.
B Moderation and Gender Experiments -Data Collection Details
B.1 Moderation Experiment
In the Moderation Experiment, we test whether having a post removed by a moderator impacts the amount a user later posts to the same community. For this experiment, we use 13,786 public Reddit histories (all of which contain more than 500 tokens) from users in /r/science from 2015-2017 who had not had a post removed prior to 2018. Our treated users are those who had a post removed in 2018. Our untreated users are those who had not had a post removed in 2018 (nor before). The outcome of interest is the number of posts they made in 2019.
To determine which users have had posts removed, we utilize the Pushshift Reddit API (Baumgartner et al. 2020). The data accessible via this API, in combination with publicly available Pushshift dump archives, allow us to compare two snapshots of each Reddit post: one snapshot made within a few seconds of posting, and one made approximately 2 months later. By comparing these two versions, we can tell a) which user made the post, and b) if it was removed. This approach is similar to that of Chandrasekharan et al. (2018).
This experiment mimics the setup in De Choudhury et al. (2016), where each user is represented by their entire Reddit comment history within specific subreddits. While De Choudhury et al. (2016) has been influential in our work, their dataset is not public, and publicly available comparable data contains only a relatively small set of Reddit users, leading to underpowered experiments with large, uninformative confidence intervals that fail to reproduce the findings in the original paper.
B.2 Gender Experiment
In the Gender Experiment, we use the dataset made public by Veitch, Sridhar, and Blei (2019), which consists of single posts from three subreddits: /r/okcupid, /r/childfree, and /r/keto. Each post is annotated with the gender (male or female) of the poster, which is considered the treatment. The outcome is the score of the post (number of 'upvotes' minus number of 'downvotes').
D Vocabulary Sizes for N-Gram Methods
We report the sizes of the vocabularies used by the uni- and bi-gram text representations across the five tasks in the proposed evaluation framework. The vocabulary size varies slightly for each task due to different synthetic posts used; the maximum is reported here.

Table 1: Vocabulary sizes for each n-gram type and real world data source.

Real Data Source   N-Gram Type   Maximum Vocabulary Size
Reddit             unigram       24,164
Reddit             bigram        125,319
Twitter            unigram       23,791
Twitter            bigram        95,934
E Model Implementation, Tuning, and Parameters

E.1 SHERBERT Architecture

Figure 5: The complete ATE estimation pipeline, with tokens input at the bottom and an estimated propensity at the top. ATE estimates are computed with IPTW, Stratification, and Matching based on models' propensity scores. This example is instantiated using SHERBERT, detailing its hierarchical architecture. In this pipeline, other propensity score models could replace the 'Representation Learning' box (e.g., Bag-of-n-grams with Logistic Regression).
Our work attempts to expand the success of large pretrained transformers to long histories using hierarchical attention, a problem also explored by the HIBERT model of Zhang, Wei, and Zhou (2019). Essentially, SHERBERT differs from HIBERT in that SHERBERT trains a light-weight hierarchical attention on top of the pretrained BERT model (Devlin et al. 2019), whereas HIBERT is trained from scratch. This results in a relatively simple training procedure for SHERBERT, and lighter limitations on history length, both at the local (50 words for HIBERT vs. 512 wordpiece tokens for SHERBERT) and global (30 sentences for HIBERT vs. 60 for SHERBERT) scales. This reflects differing tradeoffs; where HIBERT has a more sophisticated attention mechanism for combining local and global information, SHERBERT sacrifices some complexity for fast and simple training and longer text histories.
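To make the architecture concrete, the following is a rough sketch of the design described above: a frozen pretrained BERT produces one embedding per post, and a light-weight attention network pools posts into a single history embedding that feeds a linear propensity head. For brevity, the two hierarchical attention layers are collapsed into a single learned-query attention pool here, and details such as batching and the handling of very long posts are omitted; the hidden size of 1000 follows Appendix E.3.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AttentionPool(nn.Module):
    """Dot-product attention pooling with a learned query vector."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))

    def forward(self, x):                          # x: (num_posts, dim)
        scores = torch.softmax(x @ self.query, dim=0)
        return scores @ x                          # pooled history embedding: (dim,)

class SHERBERTSketch(nn.Module):
    def __init__(self, hidden=1000, bert_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(bert_name)
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():           # pretrained BERT stays fixed
            p.requires_grad = False
        dim = self.bert.config.hidden_size
        self.proj = nn.Linear(dim, hidden)
        self.pool = AttentionPool(hidden)          # posts -> history embedding
        self.head = nn.Linear(hidden, 1)           # propensity score logit

    def forward(self, posts):                      # posts: list of strings for one user
        embs = []
        for post in posts[:60]:                    # cap at 60 posts per history
            enc = self.tokenizer(post, truncation=True, max_length=512,
                                 return_tensors="pt")
            with torch.no_grad():
                out = self.bert(**enc).last_hidden_state[:, 0]   # [CLS] embedding
            embs.append(out.squeeze(0))
        h = torch.relu(self.proj(torch.stack(embs)))             # (num_posts, hidden)
        return torch.sigmoid(self.head(self.pool(h)))            # estimated propensity
```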
E.2 Practicality of Models
SHERBERT trades off practicality for performance in comparison to simpler models. For instance, in most experiments we found SHERBERT takes 10-12 hours to train, sometimes requiring multiple restarts to converge to a reasonable model. In contrast, training all other models collectively requires less than 1 hour. Further, the performance of SHERBERT sharply suffered as the number of users was reduced (Fig. 4j). While effectively training SHERBERT on 1 GPU (Tesla V100) in under 24 hours is quite practical compared to contemporary text pretraining regimes (Devlin et al. 2019), these issues should be considered when deciding on a causal text model.
E.3 Hyperparameters
A complete description of parameters and hyperparameters is available at the paper website. 11 Basic details are included here.
In producing n-gram features, a count threshold of 10 is used to filter out low frequency words, and word tokenization is done using the NLTK word tokenizer. In producing LDA features, we use the Scikit Learn implementation, with 20 topics. To produce BERT word embedding features, we use the uncased model of the 'base' size.
All models use the Adam optimizer (Kingma and Ba 2014), with various learning rates decided empirically depending on model and task to maximize treatment accuracy on the validation set.
For the simple neural network model, we use a hidden size of 10. For SHERBERT, we use hidden sizes of 1000 and dot-product attention.
F Additional Estimators and Metrics
In order to further detail our findings, we include several additional ATE estimators and metrics for evaluation ( §6.3).
F.1 Matching Estimator
Matching can be considered as a special case of stratification, where each stratum contains only one treated user. As our treated and untreated groups are approximately balanced, we implement 1:1 matching, where each treated user is matched to exactly one untreated user.
While there are many implementations of matching, we implement matching with replacement, as in Abadie and Imbens (2016, pg. 784):
$$ATE_{\text{match}} = \frac{1}{n} \sum_{i=1}^{n} (2T_i - 1)\,(Y_i - Y_{j(i)})$$

where $j(i)$ is the matched observation, i.e. $j(i) = \arg\min_{j \in \{1 \ldots n\},\, T_j \neq T_i} |\hat{p}(X_i) - \hat{p}(X_j)|$.
A recent evaluation of matching techniques for text found no significant difference in match quality between matches produced with and without replacement (Mozer et al. 2020). We use a caliper value of .2× the standard deviation of propensity scores in the population, as was found to perform the best by Wang et al. (2013) and recommended by Rosenbaum (2010, pg. 251).
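A brute-force sketch of this matching procedure, assuming the caliper of .2 standard deviations described above; pairs with no opposite-treatment match inside the caliper are simply dropped, which is one of several reasonable choices, and the function names are ours.

```python
import numpy as np

def ate_matching(y, t, p_hat, caliper_sd=0.2):
    """1:1 propensity-score matching with replacement; y, t, p_hat are numpy arrays."""
    caliper = caliper_sd * p_hat.std()
    effects = []
    for i in range(len(y)):
        candidates = np.where(t != t[i])[0]                        # opposite-treatment users
        j = candidates[np.argmin(np.abs(p_hat[candidates] - p_hat[i]))]
        if abs(p_hat[j] - p_hat[i]) > caliper:
            continue                                               # no match within the caliper
        effects.append((2 * t[i] - 1) * (y[i] - y[j]))
    return np.mean(effects)
```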
For each of the five tasks, the matching estimator produces results extremely similar to those of the stratified estimator (Fig. 6).
F.2 Mean Squared Error of IPTW and Spearman Correlation
In addition to Treatment Accuracy and Bias, we computed the Mean Squared Error (MSE) of the Inverse Probability of Treatment Weights, and the Spearman Correlation of propensity scores.
Mean Squared Error of IPTW shows the absolute error in the calibration of a model's causal weights:

$$MSE_{\text{IPTW}} = \sum_{i=1}^{n} \left( \frac{\left(\sum_{j=1}^{n} \frac{1}{\hat{p}_{T_j}(X_j)}\right)^{-1}}{\hat{p}_{T_i}(X_i)} - \frac{\left(\sum_{j=1}^{n} \frac{1}{p_{T_j}(X_j)}\right)^{-1}}{p_{T_i}(X_i)} \right)^{2}$$

Notation is the same as in the 'Average Treatment Effect Estimators' section of the main paper, with the addition of $p$ as the true propensity, which is known in our semi-synthetic tasks. The MSE is fairly correlated with the Treatment Accuracy, with MSE increasing as accuracy decreases on the more difficult tasks. This is especially evident in Fig. 7b,k.
Spearman Correlation instead shows the relative calibration of a model's propensity scores. Propensity scores may have poor absolute calibration, but still have meaningful relative ordering, in which case the Spearman Rank Correlation is close to its maximum value of 1. The Spearman Correlation coefficient is simply the Pearson correlation coefficient between the rank variables for the estimated and actual propensity scores. We find that Spearman Correlation is also quite correlated with the Treatment Accuracy (Fig. 7c,i), and as such, is no more useful than Treatment Accuracy at predicting bias.
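A small sketch of both diagnostics, assuming access to the true propensities from the semi-synthetic generation; the MSE follows the (sum-form) expression above, and the Spearman correlation is computed with SciPy. Names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def normalized_weights(t, p):
    """Normalized inverse-propensity weights; p = P(T=1|X), and the weight uses
    the propensity of the treatment each user actually received."""
    p_received = np.where(t == 1, p, 1 - p)
    inv = 1.0 / p_received
    return inv / inv.sum()

def mse_iptw(t, p_hat, p_true):
    # Sum of squared differences between estimated and true normalized weights.
    return np.sum((normalized_weights(t, p_hat) - normalized_weights(t, p_true)) ** 2)

def spearman_propensity(p_hat, p_true):
    return spearmanr(p_hat, p_true).correlation    # 1.0 indicates a perfect ranking
```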
G Twitter Results
We evaluate each text representation and propensity score model on two versions of each of the five tasks, one version generated using Reddit data, and one version generated using Twitter data (for details on data collection, see §5.1). Language used on Twitter differs from language used on Reddit, and as a result, the difficulty of our tasks varies slightly from platform to platform. However, the semi-synthetic nature of our tasks leads us to expect the differences in downstream evaluation metrics to be negligible, a hypothesis supported by our findings. Overall, the results are extremely similar across the two datasets, with Treatment Accuracies within 1.9 percentage points of one another. Therefore, for brevity, we focus on the Reddit results in the main body of the paper. Here, we report Twitter results and compare them to the Reddit results.

Table 2: Comparison of Reddit and Twitter results. Note that results are highly similar across the two datasets, with mean absolute differences of 0.019 for Treatment Accuracy (values between 0 and 1), 0.032 for the ATE estimate using stratification, and 0.052 for the ATE estimate using IPTW (a higher variance estimator, as noted in §6.2), relative to a true ATE of 0.4 for most tasks. We consider additional means of quantifying similarity, the mean ratio of Twitter:Reddit results, and Pearson's r correlation. Mean absolute difference provides the clearest comparison, as ratios are strongly influenced by small but similar results (e.g. Placebo Test). Pearson's r miscommunicates the similarity of results when performance estimates cluster so closely that there is no consistent trend (but performance is highly similar between estimates and datasets).

                   Mean Absolute Difference   Mean Twitter:Reddit Ratio   Pearson's r
Test Accuracy      0.019                      1.006                       0.957
ATE (Stratified)   0.032                      1.042                       0.946
ATE (IPTW)         0.052                      1.157                       0.887
To compute overall similarity, we calculate the mean absolute difference between the results on the Twitter and Reddit versions of each task (Table 2). We do this separately for each evaluation metric (e.g. Treatment Accuracy and estimated ATE). When considering all five tasks, the mean absolute difference between Twitter and Reddit results is 0.019 across Treatment Accuracy, for values between 0 and 1. This similarity holds when comparing ATE estimates, with a mean absolute difference of 0.032 when comparing Stratified ATE estimates, relative to a true ATE of 0.4 for most tasks. As the IPTW estimator is more sensitive given the reweighting scheme and small probabilities (§6.2), the mean absolute difference between IPTW ATE estimates is slightly larger, at 0.052.
We also compute additional similarity metrics, the mean ratio of Twitter:Reddit results and the Pearson correlation between Twitter and Reddit results (Table 2). These metrics further emphasize the similarity between results computed on the two datasets, but are also strongly influenced by small but similar results, such as the true ATE of 0 in the Placebo Test, in the case of the ratio between Twitter and Reddit, and by closely clustered results without consistent trends (e.g. Fig. 8o), in the case of Pearson's r.
Overall, we find that the model performance estimates are highly similar across both data sets with nearly identical performance statistics and relative trends across models and tasks.
In this Appendix, on the following page, we first report complete Twitter results (Fig. 8), along with the Reddit results (Fig. 9) for ease of comparison. We then provide additional details and analysis of the differences between Twitter and Reddit results (Appendix G.1).
G.1 Comparison of Results across Twitter and Reddit Datasets
Figure 10: Comparison of Reddit and Twitter results, broken down by task and by text representation and model. Note that mean absolute differences are highly similar across the tasks and methods. Differences are slightly larger for ATE estimates using IPTW, a higher variance estimator (as noted in §6.2), than for estimates using stratification. This further emphasizes the importance of evaluation metrics beyond accuracy.
We compare Reddit and Twitter results separately for each task and for each text representation and propensity score model. Fig. 10 displays the mean absolute difference between Reddit and Twitter results for each task (left) and for each text representation and model (right). The number of experiments used to compute each mean difference is given by n. As with the overall results, prediction performance is highly similar across both datasets, consistently across tasks and models. As expected, the increased sensitivity and variance of the IPTW ATE estimate results in larger differences between Reddit and Twitter results than the more stable Stratified estimate (see §6.2 for details).

Figure 11: Treatment Accuracy and Bias computed with IPTW and Stratification for the Order of Text task, generated with Reddit data. No model was able to distinguish between posts at the beginning and posts at the end of users' histories.
H Order of Text Task

The order of social media posts can significantly influence their meaning and implications. For example, a person who posts about having good mental health and then posts a year later that they've lost a loved one might be at higher risk of developing suicidal ideation than a different person who first posts about losing a loved one and then immediately posts that their mental health is still good. Can models recognize order?
This task tests a model's ability to differentiate between the order of posts in a user's history. There are two levels of difficulty. For the baseline level, f 1 appends a random Sickness Post to the end of the user's history, and f 2 is simply the identity function. At the harder level, order becomes critical: f 1 still appends a random Sickness Post to the end, but f 2 prepends a random Sickness Post to the beginning of a user's history. The difference in performance between the two prediction tasks reflects the model's ability to differentiate temporal order. The Order of Text task requires models to recognize posts and reason about their order. The models tested incorporate information from long histories using simple aggregation across comments. Most aggregate by n-gram counting (including LDA, which compresses n-gram features), while SHERBERT uses a simple hierarchical attention network. Neither approach captures this notion of order, so all models end up with effectively the same accuracy as the unadjusted estimator (Fig. 11). Developing causal inference methods that capture these temporal dynamics is an important area of future work.
Figure 4: Results for tasks computed with Reddit data, with bootstrapped 95% confidence intervals, perturbed along the x-axis for readability. Columns represent metrics, and rows correspond to tasks. Within each plot, difficulty increases from left to right. SHERBERT generally does well, especially on Strength of Selection Effect and Placebo Test, but struggles on Signal Intensity.
Fig. 5 depicts the architecture of SHERBERT as part of the broader ATE estimation pipeline.
Figure 6: Comparison of bias computed using IPTW, Stratification, and Propensity Score Matching, for each task, computed using Reddit data. Note that matching produces extremely similar results to stratification.
Figure 7: Treatment Accuracy, Mean Squared Error, and Spearman Correlation for each task, computed using Reddit data. Spearman Correlation varies directly with Treatment Accuracy, whereas Mean Squared Error increases as accuracy falls.
Figure 9: Results for tasks evaluated with Reddit data, repeated from Fig. 4 for ease of comparison.
Figure 8: Results for tasks evaluated with Twitter data, with bootstrapped 95% confidence intervals. Twitter results are extremely similar to those computed with Reddit data, with a mean absolute difference between Treatment Accuracies of 1.9 percentage points.
Link to Extended Paper with Appendix 2 With the extremely rare exception of constructed observational studies, conducted with a parallel RCT.
https://behavioral-data.github.io/CausalInferenceChallenges/ specific applications, and make these publicly available 3 .
In these experiments, as well as all following experiments, results are reported with bootstrapped 95% confidence intervals computed by resampling from the population and recomputing the estimators ( §6.2) and evaluation metrics.
In Keith, Jensen, and O'Connor (2020, Table 1), 8/12 studies had fewer than 5,000 observations, and 4/12 had fewer than 1,000.
The resulting set of Reddit users had a mean of 41.07 posts per user, mean of 37.37 tokens per post, and a mean of 1523.28 tokens per user. 7 The resulting set of Twitter users had a mean of 57.76 posts per user, mean of 19.90 tokens per post, and a mean of 1149.59 tokens per user.
In Keith, Jensen, and O'Connor (2020, Table 1), 8/12 studies had fewer than 5,000 observations, and 4/12 had fewer than 1,000.
https://behavioral-data.github.io/CausalInferenceChallenges/
https://behavioral-data.github.io/CausalInferenceChallenges/
https://behavioral-data.github.io/CausalInferenceChallenges/
Acknowledgments

C Templates for Synthetic Posts

As described in the 'Synthetic Posts' section, synthetic sickness, social isolation, and death posts are used to generate our evaluation tasks. These synthetic posts are selected and inserted into social media histories of real world users by randomly sampling a template and word pair, or, in the case of Social Isolation Posts, by randomly sampling a complete post.

C.1 Sickness Posts

C.2 Social Isolation Posts

Social Isolation Posts are randomly sampled from the following set of complete synthetic posts:
{My friends stopped talking to me., My wife just left me., My parents kicked me out of the house today., I feel so alone, my last friend said they needed to stop seeing me., My partner decided that we shouldn't talk anymore last night., My folks just cut me off, they won't talk to me anymore., I just got a message from my brother that said he can't talk to me anymore. He was my last contact in my family., My last friend at work quit, now there's no one I talk to regularly., I tried calling my Mom but she didn't pick up the phone. I think my parents may be done with me., I got home today and my partner was packing up to leave. Our apartment feels so empty now.}

C.3 Death Posts
References

Aarts, A.; Anderson, J.; Anderson, C.; Attridge, P.; Attwood, A.; Axt, J.; Babel, M.; Bahnik, S.; Baranski, E.; Barnett-Cowan, M.; Bartmess, E.; Beer, J.; Bell, R.; Bentley, H.; Beyan, L.; Binion, G.; Borsboom, D.; Bosch, A.; Bosco, F.; and Penuliar, M. 2015. Estimating the Reproducibility of Psychological Science. Science 349.

Abadie, A.; and Imbens, G. W. 2016. Matching on the Estimated Propensity Score. Econometrica 84(2): 781-807.

Arbour, D.; and Dimmery, D. 2019. Permutation Weighting. arXiv:1901.01230.

Baumgartner, J.; Zannettou, S.; Keegan, B.; Squire, M.; and Blackburn, J. 2020. The Pushshift Reddit Dataset. arXiv:2001.08435.

Bhattacharya, P.; and Mehrotra, R. 2016. The Information Network: Exploiting Causal Dependencies in Online Information Seeking. In CHIIR '16.

Blei, D.; Ng, A.; and Jordan, M. I. 2003. Latent Dirichlet Allocation. J. Mach. Learn. Res. 3: 993-1022.

Brier, G. W. 1950. Verification of Forecasts Expressed in Terms of Probability. Monthly Weather Review 78(1): 1-3.

Chandrasekharan, E.; Pavalanathan, U.; Srinivasan, A.; Glynn, A.; Eisenstein, J.; and Gilbert, E. 2017. You Can't Stay Here: The Efficacy of Reddit's 2015 Ban Examined Through Hate Speech. CSCW 1.

Chandrasekharan, E.; Samory, M.; Jhaver, S.; Charvat, H.; Bruckman, A.; Lampe, C.; Eisenstein, J.; and Gilbert, E. 2018. The Internet's Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales. CSCW 2.

Chen, V. Z.; Montano-Campos, F.; and Zadrozny, W. 2020. Causal Knowledge Extraction from Scholarly Papers in Social Sciences. arXiv:2006.08904.

Choudhury, M. D.; and De, S. 2014. Mental Health Discourse on reddit: Self-Disclosure, Social Support, and Anonymity. In ICWSM.

Choudhury, M. D.; and Kiciman, E. 2017. The Language of Social Support in Social Media and Its Effect on Suicidal Ideation Risk. ICWSM 2017: 32-41.

Crump, R. K.; Hotz, V. J.; Imbens, G. W.; and Mitnik, O. A. 2009. Dealing with limited overlap in estimation of average treatment effects. Biometrika 96(1): 187-199.

De Choudhury, M.; Kiciman, E.; Dredze, M.; Coppersmith, G.; and Kumar, M. 2016. Discovering Shifts to Suicidal Ideation from Mental Health Content in Social Media. In CHI '16.

Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL.

Dorie, V.; Hill, J.; Shalit, U.; Scott, M.; and Cervone, D. 2019. Automated versus Do-It-Yourself Methods for Causal Inference: Lessons Learned from a Data Analysis Competition. Statist. Sci. 34(1): 43-68.

Eckles, D.; and Bakshy, E. 2017. Bias and high-dimensional adjustment in observational studies of peer effects. arXiv:1706.04692.

Egami, N.; Fong, C. J.; Grimmer, J.; Roberts, M. E.; and Stewart, B. M. 2018. How to Make Causal Inferences Using Texts. arXiv:1802.02163.

Falavarjani, S. A. M.; Hosseini, H.; Noorian, Z.; and Bagheri, E. 2017. Estimating the Effect Of Exercising On Users' Online Behavior. In AAAI 2017.

Fong, C.; and Grimmer, J. 2016. Discovery of Treatments from Text Corpora. In ACL '16. Berlin, Germany: Association for Computational Linguistics.

Freedman, L. P.; Cockburn, I. M.; and Simcoe, T. S. 2015. The Economics of Reproducibility in Preclinical Research. PLOS Biology 13(6): 1-9.

Gentzel, A.; Garant, D.; and Jensen, D. 2019. The Case for Evaluating Causal Models Using Interventional Measures and Empirical Data. In NeurIPS '19.

Gururangan, S.; Marasović, A.; Swayamdipta, S.; Lo, K.; Beltagy, I.; Downey, D.; and Smith, N. A. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks.

Haimson, O. L. 2018. The Social Complexities of Transgender Identity Disclosure on Social Media. Ph.D. thesis, UC Irvine.

Hájek, J. 1970. A characterization of limiting distributions of regular estimates. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete.

Hill, J.; and Su, Y.-S. 2013. Assessing lack of common support in causal inference using Bayesian nonparametrics: Implications for evaluating the effect of breastfeeding on children's cognitive outcomes. Ann. Appl. Stat. 7(3).

Hirano, K.; and Imbens, G. W. 2004. The Propensity Score with Continuous Treatments, chapter 7, 73-84. John Wiley & Sons, Ltd.

Jensen, D. 2019. Comment: Strengthening Empirical Evaluation of Causal Inference Methods. Statist. Sci. 34(1).

Johansson, F.; Shalit, U.; and Sontag, D. 2016. Learning representations for counterfactual inference. In ICML.

Joulin, A.; Grave, E.; Bojanowski, P.; and Mikolov, T. 2017. Bag of Tricks for Efficient Text Classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 427-431. Valencia, Spain: Association for Computational Linguistics. URL https://www.aclweb.org/anthology/E17-2068.

Kang, J. D. Y.; and Schafer, J. L. 2007. Demystifying Double Robustness: A Comparison of Alternative Strategies for Estimating a Population Mean from Incomplete Data. Statist. Sci. 22(4): 523-539.

Keith, K. A.; Jensen, D.; and O'Connor, B. 2020. Text and Causal Inference: A Review of Using Text to Remove Confounding from Causal Estimates. In ACL.

Kiciman, E.; Counts, S.; and Gasser, M. 2018. Using Longitudinal Social Media Analysis to Understand the Effects of Early College Alcohol Use. In ICWSM.

Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980.

Landeiro, V.; and Culotta, A. 2016. Robust Text Classification in the Presence of Confounding Bias. In AAAI.

Lee, B. K.; Lessler, J.; and Stuart, E. A. 2011. Weight Trimming and Propensity Score Weighting. PLOS ONE 6(3).

Li, F.; Thomas, L. E.; and Li, F. 2018. Addressing Extreme Propensity Scores via the Overlap Weights. Am. J. Epidemiol. 188(1): 250-257.

Lin, A.; Merchant, A.; Sarkar, S. K.; and D'Amour, A. 2019. Universal Causal Evaluation Engine: An API for empirically evaluating causal inference models. In Le, T. D.; Li, J.; Zhang, K.; Cui, E. K. P.; and Hyvärinen, A., eds., Proceedings of Machine Learning Research, volume 104, 50-58. Anchorage, Alaska, USA: PMLR. URL http://proceedings.mlr.press/v104/lin19a.html.

Mani, S.; and Cooper, G. F. 2000. Causal discovery from medical textual data. Proceedings. AMIA Symposium.

Mayer, I.; Wager, S.; Gauss, T.; Moyer, J.-D.; and Josse, J. 2019. Doubly robust treatment effect estimation with missing attributes. arXiv:1910.10624.

Medvedev, A. N.; Lambiotte, R.; and Delvenne, J.-C. 2019. The Anatomy of Reddit: An Overview of Academic Research. Springer Proceedings in Complexity 183-204.

Mirza, P.; and Tonelli, S. 2016. CATENA: CAusal and TEmporal relation extraction from NAtural language texts. In COLING '16. Osaka, Japan.

Mozer, R.; Miratrix, L.; Kaufman, A. R.; and Jason Anastasopoulos, L. 2020. Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality. Political Analysis 1-24.

Olteanu, A.; Varol, O.; and Kiciman, E. 2017. Distilling the Outcomes of Personal Experiences: A Propensity-Scored Analysis of Social Media. In CSCW.

Pearl, J. 1995. Causal Diagrams for Empirical Research. Biometrika 82(4).

Pham, T. T.; and Shen, Y. 2017. A Deep Causal Inference Approach to Measuring the Effects of Forming Group Loans in Online Nonprofit Microfinance Platform.

Pryzant, R.; Shen, K.; Jurafsky, D.; and Wagner, S. 2018. Deconfounded Lexicon Induction for Interpretable Social Science. In NAACL.

Roberts, M. E.; Stewart, B. M.; and Nielsen, R. A. 2020. Adjusting for confounding with text matching. American Journal of Political Science.

Rosenbaum, P. R. 2010. Design of Observational Studies. Springer.

Rubin, D. B. 2005. Causal Inference Using Potential Outcomes. Journal of the American Statistical Association 100(469): 322-331.

Saha, K.; Sugar, B.; Torous, J. B.; Abrahao, B. D.; Kiciman, E.; and Choudhury, M. D. 2019. A Social Media Study on the Effects of Psychiatric Medication Use. ICWSM 13.

Schuler, M.; and Rose, S. 2017. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies. American Journal of Epidemiology 185: 65-73.

Sridhar, D.; Springer, A.; Hollis, V.; Whittaker, S.; and Getoor, L. Estimating Causal Effects of Exercise from Mood Logging Data. FAIM'18 Workshop on CausalML.

Talwar, S.; Dhir, A.; Kaur, P.; Zafar, N.; and Alrasheedy, M. 2019. Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. Journal of Retailing and Consumer Services 51: 72-82.

Tan, C.; Lee, L.; and Pang, B. 2014. The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter. In ACL.

Veitch, V.; Sridhar, D.; and Blei, D. M. 2019. Using Text Embeddings for Causal Inference. arXiv:1905.12741.
Optimal Caliper Width for Propensity Score Matching of Three Treatment Groups: A Monte Carlo Study. Y Wang, H Cai, C Li, Z Jiang, L Wang, J Song, J Xia, PLOS ONE. 8Wang, Y.; Cai, H.; Li, C.; Jiang, Z.; Wang, L.; Song, J.; and Xia, J. 2013. Optimal Caliper Width for Propensity Score Matching of Three Treatment Groups: A Monte Carlo Study. PLOS ONE 8: 1-7.
When Do Words Matter? Understanding the Impact of Lexical Choice on Audience Perception Using Individual Treatment Effect Estimation. Z Wang, A Culotta, AAAI. Wang, Z.; and Culotta, A. 2019. When Do Words Matter? Un- derstanding the Impact of Lexical Choice on Audience Perception Using Individual Treatment Effect Estimation. In AAAI.
Challenges of Using Text Classifiers for Causal Inference. Z Wood-Doughty, I Shpitser, M Dredze, EMNLP. Wood-Doughty, Z.; Shpitser, I.; and Dredze, M. 2018. Challenges of Using Text Classifiers for Causal Inference. In EMNLP.
Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores. S Yang, P Ding, Biometrika. Yang, S.; and Ding, P. 2018. Asymptotic inference of causal ef- fects with observational studies trimmed by the estimated propen- sity scores. Biometrika .
A Bibliometric Overview of Twitter-Related Studies Indexed in Web of Science. J Yu, J Muñoz-Justicia, Future Internet. 1291Yu, J.; and Muñoz-Justicia, J. 2020. A Bibliometric Overview of Twitter-Related Studies Indexed in Web of Science. Future Internet 12: 91.
Conversations Gone Awry: Detecting Early Signs of Conversational Failure. J Zhang, J Chang, C Danescu-Niculescu-Mizil, L Dixon, Y Hua, D Taraborelli, N Thain, ACL. Zhang, J.; Chang, J.; Danescu-Niculescu-Mizil, C.; Dixon, L.; Hua, Y.; Taraborelli, D.; and Thain, N. 2018. Conversations Gone Awry: Detecting Early Signs of Conversational Failure. In ACL.
Quantifying the Causal Effects of Conversational Tendencies. J Zhang, S Mullainathan, C Danescu-Niculescu-Mizil, 10.1145/3415202Proc. ACM Hum.-Comput. ACM Hum.-ComputInteract. 4(CSCW2Zhang, J.; Mullainathan, S.; and Danescu-Niculescu-Mizil, C. 2020. Quantifying the Causal Effects of Conversational Ten- dencies. Proc. ACM Hum.-Comput. Interact. 4(CSCW2). doi: 10.1145/3415202. URL https://doi.org/10.1145/3415202.
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization. X Zhang, F Wei, M Zhou, ACL. Zhang, X.; Wei, F.; and Zhou, M. 2019. HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Docu- ment Summarization. In ACL.
| [] |
[
"SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization",
"SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization"
] | [
"Yang Gao yang.gao@rhul.ac.uk ",
"Wei Zhao \nComputer Science Department\nTechnische Universität Darmstadt\nGermany\n",
"Steffen Eger eger@aiphes.tu-darmstadt.de \nComputer Science Department\nTechnische Universität Darmstadt\nGermany\n",
"\nDept. of Computer Science\nRoyal Holloway\nUniversity of London\nUK\n"
] | [
"Computer Science Department\nTechnische Universität Darmstadt\nGermany",
"Computer Science Department\nTechnische Universität Darmstadt\nGermany",
"Dept. of Computer Science\nRoyal Holloway\nUniversity of London\nUK"
] | [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"
] | We study unsupervised multi-document summarization evaluation metrics, which require neither human-written reference summaries nor human annotations (e.g. preferences, ratings, etc.). We propose SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques. Compared to the state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a neural-based reinforcement learning summarizer, yielding favorable performance compared to the state-of-the-art unsupervised summarizers. All source code is available at https://github.com/yg211/acl20-ref-free-eval. | 10.18653/v1/2020.acl-main.124 | [
"https://www.aclweb.org/anthology/2020.acl-main.124.pdf"
] | 218,571,152 | 2005.03724 | cd85127e31b0a930cf3cacaa3b2de4f6d85bb605 |
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
Association for Computational Linguistics. Copyright 2020 Association for Computational Linguistics. July 5-10, 2020.
Yang Gao yang.gao@rhul.ac.uk
Wei Zhao
Computer Science Department
Technische Universität Darmstadt
Germany
Steffen Eger eger@aiphes.tu-darmstadt.de
Computer Science Department
Technische Universität Darmstadt
Germany
Dept. of Computer Science
Royal Holloway
University of London
UK
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, July 5-10, 2020, page 1347.
We study unsupervised multi-document summarization evaluation metrics, which require neither human-written reference summaries nor human annotations (e.g. preferences, ratings, etc.). We propose SUPERT, which rates the quality of a summary by measuring its semantic similarity with a pseudo reference summary, i.e. selected salient sentences from the source documents, using contextualized embeddings and soft token alignment techniques. Compared to the state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a neural-based reinforcement learning summarizer, yielding favorable performance compared to the state-of-the-art unsupervised summarizers. All source code is available at https://github.com/yg211/acl20-ref-free-eval.
Introduction
Evaluating the quality of machine-generated summaries is a highly laborious and hence expensive task. Most existing evaluation methods require certain forms of human involvement, thus are supervised: they either directly let humans rate the generated summaries (e.g. Pyramid (Nenkova and Passonneau, 2004)), elicit human-written reference summaries and measure their overlap with the generated summaries (e.g. using ROUGE (Lin, 2004a) or MoverScore (Zhao et al., 2019)), or collect some human annotations (e.g. preferences over pairs of summaries (Gao et al., 2019a)) to learn a summary evaluation function. Evaluation in multi-document summarization is particularly expensive: Lin (2004b) reports that it requires 3,000 hours of human effort to evaluate the summaries from the Document Understanding Conferences (DUC) 1 . To reduce the expenses for evaluating multi-document summaries, we investigate unsupervised evaluation methods, which require neither human annotations nor reference summaries. In particular, we focus on evaluating the relevance (Peyrard, 2019) of multi-document summaries, i.e. measuring how much salient information from the source documents is covered by the summaries. There exist a few unsupervised evaluation methods (Louis and Nenkova, 2013; Sun and Nenkova, 2019), but they have low correlation with human relevance ratings at summary level: given multiple summaries for the same source documents, these methods can hardly distinguish summaries with high relevance from those with low relevance (see §3).
Contributions. First, to better measure the semantic overlap between source documents and machine-generated summaries, we propose to use state-of-the-art contextualized text encoders, e.g. BERT (Devlin et al., 2019) and its variant Sentence-BERT (SBERT) (Reimers and Gurevych, 2019), which is optimized for measuring semantic similarity between sentences, to develop unsupervised evaluation methods. We measure the relevance of a summary in two steps: (i) identifying the salient information in the input documents, to build a pseudo reference summary, and (ii) measuring the semantic overlap between the pseudo reference and the summary to be evaluated. The resulting evaluation method is called SUPERT (SUmmarization evaluation with Pseudo references and bERT). Fig. 1 illustrates the major steps of SUPERT. We show that compared to state-of-the-art unsupervised metrics, the best SUPERT correlates better with the human ratings by 18-39% (in Kendall's τ ).
Second, we use SUPERT as reward functions to guide Reinforcement Learning (RL) based extractive summarizers. We show it outperforms the state-of-the-art unsupervised summarization methods (in multiple ROUGE metrics).
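To make the two-step recipe above concrete, here is a minimal sketch of a SUPERT-style scorer in Python. It is not the released implementation: the pseudo reference is built with the simple position heuristic, the paper's token-level soft alignment is approximated by a greedy sentence-level alignment, and the helper names are ours. The SBERT model name follows the one mentioned later in the paper.

```python
# Minimal sketch of a SUPERT-style scorer (illustrative, not the released code).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-large-nli-stsb-mean-tokens")

def build_pseudo_reference(docs, n_sents=10):
    """docs: list of documents, each a list of sentences (strings)."""
    return [s for doc in docs for s in doc[:n_sents]]

def supert_style_score(summary_sents, pseudo_ref_sents):
    """Greedily align each summary sentence to its most similar
    pseudo-reference sentence and average the cosine similarities."""
    S = model.encode(summary_sents)      # (m, d) summary sentence embeddings
    R = model.encode(pseudo_ref_sents)   # (n, d) pseudo-reference embeddings
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    R = R / np.linalg.norm(R, axis=1, keepdims=True)
    return float((S @ R.T).max(axis=1).mean())
```

Given `docs` as sentence-split source documents and `summary_sents` as the sentences of a system summary, `supert_style_score(summary_sents, build_pseudo_reference(docs))` yields a relevance estimate; higher is better.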
Related Work
Reference-based Evaluation. Popular metrics like ROUGE (Lin, 2004a), BLEU (Papineni et al., 2002) and METEOR (Lavie and Denkowski, 2009) fall into this category. They require (preferably, multiple) human-written references and measure the relevance of a summary by comparing its overlapping word sequences with the references. More recent work extends ROUGE with WordNet (ShafieiBavani et al., 2018a) or word embeddings (Ng and Abrecht, 2015), or uses contextualized-embedding-based methods (Zhang et al., 2019; Zhao et al., 2019) to measure the semantic similarity between references and summaries.
Annotation-based Evaluation. Some methods directly ask human annotators to rate summaries following some guidelines, e.g. Responsiveness, which measures the overall quality (relevance, fluency and readability) of summaries, and Pyramid (Nenkova and Passonneau, 2004), which measures summaries' relevance. Recently, systems have been developed to ease the construction of Pyramid scores (e.g. Hirao et al., 2018; Yang et al., 2016; Gao et al., 2019b), but they still require human-annotated Summary Content Units (SCUs) to produce reliable scores. Besides SCUs, recent work has explored eliciting preferences over summaries (Zopf, 2018; Gao et al., 2018, 2019a) and annotations of important bi-grams (P.V.S and Meyer, 2017) to derive summary ratings.
Some methods collect human ratings on a small number of summaries to train an evaluation function. Peyrard et al. (2017) and Peyrard and Gurevych (2018) propose to learn an evaluation function from Pyramid and Responsiveness scores, by using classic supervised learning methods with hand-crafted features. ShafieiBavani et al. (2018b) use the same idea but design corpus-based and lexical-resource-based word embeddings to build the features. Böhm et al. (2019) train a BERT-based evaluation function with 2,500 human ratings for 500 machine-generated summaries from the CNN/DailyMail dataset; their method correlates better with human ratings than ROUGE and BLEU. However, as their method is designed for evaluating single-document summaries, it correlates poorly with the Pyramid scores for multi-document summaries (see §3).
Unsupervised Evaluation. Louis and Nenkova (2013) measure the relevance of a summary using multiple heuristics, for example by computing the Jensen-Shannon (JS) divergence between the word distributions in the summary and in the source documents. Ryang and Abekawa (2012); Rioux et al. (2014) develop evaluation heuristics inspired by the maximal marginal relevance metrics (Goldstein et al., 2000). But these methods have low correlation with human ratings at summary level (see §3). Scialom et al. (2019) propose to generate questions from source documents and evaluate the relevance of summaries by counting how many questions the summaries can answer. However, they do not detail how to generate questions from source documents; also, it remains unclear whether their method works for evaluating multi-document summaries. Sun and Nenkova (2019) propose a single-document summary evaluation method, which measures the cosine similarity of the ELMo embeddings (Peters et al., 2018) of the source document and the summary. In §3, we show that their method performs poorly in evaluating multi-document summaries. SUPERT extends their method by using more advanced contextualized embeddings and more effective text alignment/matching methods ( §4), and by introducing pseudo references ( §5).
Datasets, Baselines and Upper Bounds
Datasets. We use two multi-document summarization datasets from the Text Analysis Conference (TAC) 2 shared tasks: TAC'08 and TAC'09. In line with Louis and Nenkova (2013), we only use the initial summaries (the A part) in these datasets. TAC'08 includes 48 topics and TAC'09 includes 44. Each topic has ten news articles, four reference summaries and 57 (TAC'08) and 55 (TAC'09) machine-generated summaries. Each news article on average has 611 words in 24 sentences. Each summary has at most 100 words and receives a Pyramid score, which is used as the ground-truth human rating in our experiments.
Baselines & Upper Bounds. For baselines, we consider TF-IDF, which computes the cosine similarity of the tf-idf vectors of source and summaries; JS, which computes the JS divergence between the word distributions in source documents and summaries; and the REAPER heuristics proposed by Rioux et al. (2014). In addition, we use the learned metric from Böhm et al. (2019) (Böhm19) and the ELMo-based metric by Sun and Nenkova (2019) (C_ELMo, which stands for cosine-ELMo; see §2). In all these methods, we remove stop-words and use the stemmed words, as we find these operations improve the performance. For C_ELMo, we vectorize the documents/summaries by averaging their sentences' ELMo embeddings. As for upper bounds, we consider three strong reference-based evaluation metrics: ROUGE-1/2 and MoverScore (Zhao et al., 2019); note that references are not available for unsupervised evaluation metrics. We measure the performance of the baselines and upper bounds by their average summary-level correlation with Pyramid, in terms of Pearson's (r), Spearman's (ρ) and Kendall's (τ) correlation coefficients (we have also considered the percentage of significantly correlated topics; results can be found in the GitHub repository). Table 1 presents the results. All baseline methods fall far behind the upper bounds. Among baselines, the embedding-based methods (Böhm19 and C_ELMo) perform worse than the other lexical-based baselines. This observation suggests that to rate multi-document summaries, using existing single-document summary evaluation metrics (Böhm19) or computing source-summary embeddings' cosine similarity (C_ELMo) is ineffective.
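As a point of reference, the JS baseline and the summary-level correlation protocol can be implemented roughly as follows; tokenization, stop-word removal and stemming are omitted, and the helper names are ours.

```python
# Illustrative JS-divergence baseline and summary-level correlation with Pyramid.
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import pearsonr, spearmanr, kendalltau

def word_dist(tokens, vocab):
    counts = Counter(tokens)
    p = np.array([counts[w] for w in vocab], dtype=float)
    return p / p.sum()

def js_relevance(source_tokens, summary_tokens):
    vocab = sorted(set(source_tokens) | set(summary_tokens))
    p = word_dist(source_tokens, vocab)
    q = word_dist(summary_tokens, vocab)
    # scipy returns the JS distance (sqrt of the divergence); negate so that
    # higher scores mean more relevant summaries.
    return -jensenshannon(p, q) ** 2

def summary_level_correlation(predicted, pyramid):
    """predicted, pyramid: scores of all summaries for one topic."""
    return (pearsonr(predicted, pyramid)[0],
            spearmanr(predicted, pyramid)[0],
            kendalltau(predicted, pyramid)[0])
```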
Measuring Similarity with Contextualized Embeddings
In this section, we explore the use of more advanced contextualized embeddings and more sophisticated embedding alignment/matching methods (rather than cosine similarity) to measure summaries' relevance. We first extend C_ELMo by considering more contextualized text encoders: BERT, RoBERTa, ALBERT (Lan et al., 2019) and SBERT 4 . We use these encoders to produce embeddings for each sentence in the documents/summaries, and perform average pooling to obtain the vector representations for the documents/summaries. We measure the relevance of a summary by computing the cosine similarity between its embedding and the embedding of the source documents. The upper part in Table 2 presents the results. C_SBERT outperforms the other cosine-embedding-based metrics by a large margin, but compared to the lexical-based metrics (see Table 1) its performance still falls short. Zhao et al. (2019) recently show that, to measure the semantic similarity between two documents, instead of computing their document embeddings' cosine similarity, minimizing their token embeddings' word mover's distances (WMDs) (Kusner et al., 2015) yields stronger performance. By minimizing WMDs, tokens from different documents are soft-aligned, i.e. a token from one document can be aligned to multiple relevant tokens from the other document. We adopt the same idea to measure the semantic similarity between summaries and source documents, using RoBERTa and SBERT (denoted by M_RoBERTa and M_SBERT, respectively). The bottom part in Table 2 presents the results. The WMD-based scores substantially outperform their cosine-embedding counterparts; in particular, M_SBERT outperforms all lexical-based baselines in Table 1. This finding suggests that, to rate multi-document summaries, soft word alignment methods should be used on top of contextualized embeddings to achieve good performance.
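The two scoring variants above can be sketched as follows, assuming precomputed SBERT embeddings. The WMD-style score uses the POT library's exact transport solver with uniform token weights, so it is only an approximation of the paper's M_SBERT.

```python
# Pooled-cosine (C_SBERT-style) vs. soft-alignment (M_SBERT-style) scoring,
# given embedding matrices whose rows are sentence/token embeddings.
import numpy as np
import ot  # Python Optimal Transport (POT)

def pooled_cosine(src_embs, sum_embs):
    a, b = src_embs.mean(axis=0), sum_embs.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def wmd_similarity(src_embs, sum_embs):
    sa = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    sb = sum_embs / np.linalg.norm(sum_embs, axis=1, keepdims=True)
    cost = 1.0 - sa @ sb.T                  # pairwise cosine distances
    p = np.full(len(sa), 1.0 / len(sa))     # uniform weights
    q = np.full(len(sb), 1.0 / len(sb))
    return -ot.emd2(p, q, cost)             # higher = more similar
```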
Building Pseudo References
WMD-based metrics yield the highest correlation in both reference-based (bottom row in Table 1) and reference-free (bottom row in Table 2) settings, but there exists a large gap between their correlation scores. This observation highlights the need for reference summaries. In this section, we explore multiple heuristics to build pseudo references.
Simple heuristics
We first consider two simple strategies to build pseudo references: randomly extracting N sentences or extracting the first N sentences from each source document. Results, presented in Table 3, suggest that extracting the top 10-15 sentences as the pseudo references yields strong performance: it outperforms the lexical-based baselines (upper part in Table 1) by over 16% and M_SBERT (Table 2) by over 4%. These findings confirm the position bias in news articles (cf. Jung et al., 2019).
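The two heuristics compared in Table 3 amount to the following, where each document is represented as a list of its sentences (helper names are ours):

```python
import random

def first_n_pseudo_ref(docs, n=10):
    """Take the first n sentences of each source document."""
    return [s for doc in docs for s in doc[:n]]

def random_n_pseudo_ref(docs, n=10, seed=0):
    """Take n randomly chosen sentences from each source document."""
    rng = random.Random(seed)
    return [s for doc in docs for s in rng.sample(doc, min(n, len(doc)))]
```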
Graph-based heuristics
Graph-based methods have long been used to select salient information from documents, e.g. (Erkan and Radev, 2004; Zheng and Lapata, 2019). These methods build graphs to represent the source documents, in which each vertex represents a sentence and the weight of each edge is decided by the similarity of the corresponding sentence pair. Below, we explore two families of graph-based methods to build pseudo references: position-agnostic and position-aware graphs, which ignore and consider the sentences' positional information, respectively.
Position-Agnostic Graphs. The first graph we consider is SBERT-based LexRank (SLR), which extends the classic LexRank (Erkan and Radev, 2004) method by measuring the similarity of sentences using SBERT embeddings cosine similarity. In addition, we propose an SBERT-based clustering (SC) method to build graphs, which first measures the similarity of sentence pairs using SBERT, and then clusters sentences by using the affinity propagation (Frey and Dueck, 2007) clustering algorithm; the center of each cluster is selected to build the pseudo reference. We choose affinity propagation because it does not require a preset cluster number (unlike K-Means) and it automatically finds the center point of each cluster. For each method (SLR or SC), we consider two variants: the individual-graph version, which builds a graph for each source document and selects top-K sentences (SLR) or the centers (SC) from each graph; and the global-graph version, which builds a graph considering all sentences across all source documents for the same topic, and selects the top-M sentences (SLR) or all the centers (SC) in this large graph. According to our preliminary experiments on 20 randomly sampled topics, we set K = 10 and M = 90.
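A sketch of the SC heuristic, clustering sentences with scikit-learn's affinity propagation on an SBERT cosine-similarity matrix and keeping the cluster centres (the SLR variant would instead rank sentences over the same matrix with a PageRank-style centrality; parameters here are illustrative):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def cluster_centre_sentences(sentences, sent_embs):
    """sentences: list of strings; sent_embs: (n, d) SBERT embeddings."""
    embs = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    sim = embs @ embs.T                          # cosine similarity matrix
    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
    return [sentences[i] for i in ap.cluster_centers_indices_]
```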
Position-Aware Graphs. PacSum is a recently proposed graph-based method to select salient sentences from multiple documents (Zheng and Lapata, 2019). In PacSum, a sentence is more likely to be selected if it has higher average similarity with its succeeding sentences and lower average similarity with its preceding sentences. This strategy allows PacSum to prioritize the selection of early-position and "semantically central" sentences. We further extend PacSum by using SBERT to measure sentence similarity (the resulting method is denoted as SPS) and consider both the individual- and global-graph versions of SPS.
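The position-aware scoring behind SPS can be sketched as follows; the actual PacSum formulation additionally tunes the relative weights and subtracts a similarity threshold, which this sketch omits:

```python
import numpy as np

def position_aware_scores(sent_embs, w_pre=-1.0, w_post=1.0):
    """Score each sentence by similarity to succeeding sentences minus
    similarity to preceding ones; keep the top-scoring sentences."""
    embs = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    sim = embs @ embs.T
    scores = np.zeros(len(embs))
    for i in range(len(embs)):
        scores[i] = w_pre * sim[i, :i].sum() + w_post * sim[i, i + 1:].sum()
    return scores
```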
Furthermore, we propose a method called Top+Clique (TC), which selects the top-N sentences and the semantically central non-top-N sentences to build the pseudo references. TC adopts the following steps: (i) Label the top-N sentences from each document as salient. (ii) With the remaining (non-top-N) sentences, build a graph such that only "highly similar" sentences have an edge between them. (iii) Obtain the cliques from the graph and select the semantically central sentence (i.e. the sentence with the highest average similarity with the other sentences in the clique) from each clique as a potentially salient sentence. (iv) For each potentially salient sentence, label it as salient if it is not highly similar to any top-N sentence. Based on preliminary experiments on 20 topics, we let N = 10 and the threshold value be 0.75 for "highly similar". Table 4 presents the graph-based methods' performance. Except for SC_G, all other graph-based methods outperform the baselines in Table 1. Position-agnostic graph-based methods perform worse not only than the position-aware ones, but even than the best method in Table 2, which simply uses the full source documents as pseudo references. In addition, we find that the position-aware graph-based sentence extraction methods perform worse than simply extracting top sentences (Table 3). These findings indicate that the position bias remains the most effective heuristic in selecting salient information from news articles; when position information is unavailable (e.g. sentences in source documents are randomly shuffled), it might be better to use all sentences rather than selecting a subset of sentences from the source to build pseudo references.
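A rough per-document sketch of the TC heuristic, using networkx for clique enumeration; the helper names and minor details (e.g. tie-breaking) are our assumptions:

```python
import numpy as np
import networkx as nx

def top_plus_clique(sentences, sent_embs, n_top=10, thr=0.75):
    embs = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    sim = embs @ embs.T
    top = list(range(min(n_top, len(sentences))))
    rest = list(range(len(top), len(sentences)))
    salient = set(top)                                    # step (i)
    g = nx.Graph()
    g.add_nodes_from(rest)
    g.add_edges_from((i, j) for i in rest for j in rest   # step (ii)
                     if i < j and sim[i, j] > thr)
    for clique in nx.find_cliques(g):                     # step (iii)
        centre = max(clique, key=lambda i: sim[i, clique].mean())
        if all(sim[centre, t] <= thr for t in top):       # step (iv)
            salient.add(centre)
    return [sentences[i] for i in sorted(salient)]
```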
Guiding Reinforcement Learning
We explore the use of different rewards to guide Neural Temporal Difference (NTD), a RL-based multi-document summarizer (Gao et al., 2019a). We consider three unsupervised reward functions: two baseline methods REAPER and JS (see §3 and Table 1), and the best version of SUPERT, which
selects the top 10 (TAC'08) or 15 (TAC'09) sentences from each source document to build pseudo references and uses SBERT to measure the similarity between summaries and pseudo references. In addition, we consider a non-RL-based state-of-the-art unsupervised summarizer proposed by Yogatama et al. (2015) (YLS15). We use ROUGE to measure the quality of the generated summaries and leave human evaluations for future work. Table 5 presents the results. We find SUPERT is the strongest reward among the considered rewards: it helps NTD perform on par with YLS15 on TAC'08 and perform significantly better on TAC'09.
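For illustration, a SUPERT-style scorer can serve as the terminal reward of an episodic extractive RL summarizer roughly as follows; this is not the NTD implementation, and the length-budget handling is our assumption:

```python
def episode_reward(selected_ids, all_sentences, pseudo_ref_sents, scorer,
                   budget=100):
    """Return the SUPERT-style score of the drafted summary, or 0 if the
    100-word budget is exceeded (scorer: e.g. supert_style_score above)."""
    summary_sents = [all_sentences[i] for i in selected_ids]
    if sum(len(s.split()) for s in summary_sents) > budget:
        return 0.0
    return scorer(summary_sents, pseudo_ref_sents)
```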
Conclusion
We explored unsupervised multi-document summary evaluation methods, which require neither reference summaries nor human annotations. We find that vectorizing the summary and the top sentences in the source documents using contextualized embeddings, and measuring their semantic overlap with soft token alignment techniques is a simple yet effective method to rate the summary's quality. The resulting method, SUPERT, correlates with human ratings substantially better than the state-of-the-art unsupervised metrics.
Furthermore, we use SUPERT as rewards to train a neural-RL-based summarizer, which leads to up to 17% quality improvement (in ROUGE-2) compared to the state-of-the-art unsupervised summarizers. This result not only shows the effectiveness of SUPERT in a downstream task, but also promises a new way to train RL-based summarizers: an infinite number of summary-reward pairs can be created from infinitely many documents, and their SUPERT scores can be used as rewards to train RL-based summarizers, fundamentally relieving the data-hungriness problem faced by existing RL-based summarization systems.
Figure 1: Workflow of SUPERT. (Diagram: the source documents Doc 1 ... Doc N are fed to a salient-sentences extractor, which produces a pseudo reference summary; a semantic similarity measurement between this pseudo reference and the summary to evaluate yields the summary relevance score.)

1 http://duc.nist.gov/
Table 2: Performance of contextual-embedding-based metrics. Soft aligning the embeddings of the source documents and the summaries (the bottom part) yields higher correlation than simply computing the embeddings' cosine similarity (the upper part).
Table 3: Building pseudo references by extracting randomly selected sentences (upper) or the first few sentences (bottom). Results of the random extraction methods are averaged over ten independent runs.
Table 4: Building pseudo references by position-agnostic (upper) and position-aware (bottom) graphs.
Table 5: Training NTD, a RL-based summarizer, with different rewards (RP: REAPER, SP: SUPERT). NTD performance is averaged over ten runs. R1/2/L stands for ROUGE-1/2/L. *: significant advantage (p < 0.01, two-tailed t-tests) over the non-asterisked scores.

            TAC'08                 TAC'09
          R1     R2     RL       R1     R2     RL
NTD_RP   .348   .087   .276     .360   .090   .187
NTD_JS   .353   .090   .281     .368   .095   .192
NTD_SP   .376*  .102*  .296*    .380*  .103*  .194
YLS15    .375*  .096   N/A      .344   .088   N/A
2 https://tac.nist.gov/
4 Model: bert-large-nli-stsb-mean-tokens.
Better rewards yield better summaries: Learning to summarise without references. Florian Böhm, Yang Gao, Christian M Meyer, Ori Shapira, Ido Dagan, Iryna Gurevych, 10.18653/v1/D19-1307Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsFlorian Böhm, Yang Gao, Christian M. Meyer, Ori Shapira, Ido Dagan, and Iryna Gurevych. 2019. Bet- ter rewards yield better summaries: Learning to sum- marise without references. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3108-3118, Hong Kong, China. Association for Computational Linguistics.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, Minnesota1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.
Lexrank: Graph-based lexical centrality as salience in text summarization. Günes Erkan, Dragomir R Radev, 10.1613/jair.1523J. Artif. Intell. Res. 22Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res., 22:457-479.
Clustering by passing messages between data points. science. J Brendan, Delbert Frey, Dueck, 315Brendan J Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. science, 315(5814):972-976.
APRIL: interactively learning to summarise by combining active preference learning and reinforcement learning. Yang Gao, Christian M Meyer, Iryna Gurevych, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumYang Gao, Christian M. Meyer, and Iryna Gurevych. 2018. APRIL: interactively learning to summarise by combining active preference learning and rein- forcement learning. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 4120-4130, Brussels, Belgium.
Preference-based interactive multidocument summarisation. Yang Gao, Christian M Meyer, Iryna Gurevych, 10.1007/s10791-019-09367-8Information Retrieval Journal. Yang Gao, Christian M. Meyer, and Iryna Gurevych. 2019a. Preference-based interactive multi- document summarisation. Information Retrieval Journal.
Automated pyramid summarization evaluation. Yanjun Gao, Chen Sun, Rebecca J Passonneau, 10.18653/v1/K19-1038Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). the 23rd Conference on Computational Natural Language Learning (CoNLL)Hong Kong, ChinaAssociation for Computational LinguisticsYanjun Gao, Chen Sun, and Rebecca J. Passonneau. 2019b. Automated pyramid summarization evalua- tion. In Proceedings of the 23rd Conference on Com- putational Natural Language Learning (CoNLL), pages 404-418, Hong Kong, China. Association for Computational Linguistics.
Multi-document summarization by sentence extraction. Jade Goldstein, Vibhu Mittal, Jaime Carbonell, Mark Kantrowitz, NAACL-ANLP 2000 Workshop: Automatic Summarization. Jade Goldstein, Vibhu Mittal, Jaime Carbonell, and Mark Kantrowitz. 2000. Multi-document summa- rization by sentence extraction. In NAACL-ANLP 2000 Workshop: Automatic Summarization.
Automatic pyramid evaluation exploiting EDU-based extractive reference summaries. Tsutomu Hirao, Hidetaka Kamigaito, Masaaki Nagata, 10.18653/v1/D18-1450Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsTsutomu Hirao, Hidetaka Kamigaito, and Masaaki Nagata. 2018. Automatic pyramid evaluation ex- ploiting EDU-based extractive reference summaries. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 4177-4186, Brussels, Belgium. Association for Computational Linguistics.
Earlier isn't always better: Subaspect analysis on corpus and system biases in summarization. Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy, 10.18653/v1/D19-1327Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsTaehee Jung, Dongyeop Kang, Lucas Mentch, and Ed- uard Hovy. 2019. Earlier isn't always better: Sub- aspect analysis on corpus and system biases in sum- marization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3322-3333, Hong Kong, China. Association for Computational Linguistics.
From word embeddings to document distances. Matt J Kusner, Yu Sun, Nicholas I Kolkin, Kilian Q Weinberger, Proceedings of the 32nd International Conference on Machine Learning. the 32nd International Conference on Machine LearningLille, FranceMatt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd In- ternational Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 957-966.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, arXiv:1909.11942Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv e- prints, page arXiv:1909.11942.
The meteor metric for automatic evaluation of machine translation. Alon Lavie, Michael J Denkowski, 10.1007/s10590-009-9059-4Machine Translation. 232-3Alon Lavie and Michael J. Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine Translation, 23(2-3):105-115.
Looking for a few good metrics: Rouge and its evaluation. Chin-Yew Lin, NTCIR Workshop. Chin-Yew Lin. 2004a. Looking for a few good metrics: Rouge and its evaluation. In NTCIR Workshop.
ROUGE: A package for automatic evaluation of summaries. Chin-Yew Lin, ACL Workshop "Text Summarization Branches Out. Chin-Yew Lin. 2004b. ROUGE: A package for auto- matic evaluation of summaries. In ACL Workshop "Text Summarization Branches Out".
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1907.11692RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv e-prints. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv e-prints, page arXiv:1907.11692.
Automatically assessing machine summary content without a gold standard. Annie Louis, Ani Nenkova, 10.1162/COLI_a_00123Computational Linguistics. 392Annie Louis and Ani Nenkova. 2013. Automatically assessing machine summary content without a gold standard. Computational Linguistics, 39(2):267- 300.
Evaluating content selection in summarization: The pyramid method. Ani Nenkova, Rebecca J Passonneau, Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL. Boston, Massachusetts, USAAni Nenkova and Rebecca J. Passonneau. 2004. Evalu- ating content selection in summarization: The pyra- mid method. In Human Language Technology Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics, HLT-NAACL 2004, Boston, Massachusetts, USA, May 2-7, 2004, pages 145-152.
Better summarization evaluation with word embeddings for ROUGE. Jun-Ping Ng, Viktoria Abrecht, 10.18653/v1/D15-1222Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsJun-Ping Ng and Viktoria Abrecht. 2015. Better sum- marization evaluation with word embeddings for ROUGE. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1925-1930, Lisbon, Portugal. Association for Computational Linguistics.
BLEU: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsPhiladelphia, PA, USAKishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, PA, USA.
Deep contextualized word representations. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, 10.18653/v1/N18-1202Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
A simple theoretical model of importance for summarization. Maxime Peyrard, 10.18653/v1/P19-1101Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsMaxime Peyrard. 2019. A simple theoretical model of importance for summarization. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1059-1073, Florence, Italy. Association for Computational Linguistics.
Learning to score system summaries for better content selection evaluation. Maxime Peyrard, Teresa Botschen, Iryna Gurevych, Proceedings of the Workshop on New Frontiers in Summarization. the Workshop on New Frontiers in SummarizationCopenhagen, DenmarkMaxime Peyrard, Teresa Botschen, and Iryna Gurevych. 2017. Learning to score system summaries for better content selection evaluation. In Proceedings of the Workshop on New Frontiers in Summarization, pages 74-84, Copenhagen, Denmark.
Objective function learning to match human judgements for optimization-based summarization. Maxime Peyrard, Iryna Gurevych, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana, USA2Short PapersMaxime Peyrard and Iryna Gurevych. 2018. Objective function learning to match human judgements for optimization-based summarization. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 654-660, New Orleans, Louisiana, USA.
Joint optimization of user-desired content in multidocument summaries by learning from user feedback. P V Avinesh, Christian M Meyer, 10.18653/v1/P17-1124Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational LinguisticsLong Papers)Avinesh P.V.S and Christian M. Meyer. 2017. Joint optimization of user-desired content in multi- document summaries by learning from user feed- back. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1353-1363, Vancouver, Canada. Association for Computational Linguistics.
Sentence-BERT: Sentence embeddings using Siamese BERTnetworks. Nils Reimers, Iryna Gurevych, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsNils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3980-3990, Hong Kong, China. Association for Computational Linguistics.
Fear the REAPER: A system for automatic multidocument summarization with reinforcement learning. Cody Rioux, A Sadid, Yllias Hasan, Chali, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. the 2014 Conference on Empirical Methods in Natural Language ProcessingDoha, QatarCody Rioux, Sadid A. Hasan, and Yllias Chali. 2014. Fear the REAPER: A system for automatic multi- document summarization with reinforcement learn- ing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pages 681-690, Doha, Qatar.
Framework of automatic text summarization using reinforcement learning. Seonggi Ryang, Takeshi Abekawa. Seonggi Ryang and Takeshi Abekawa. 2012. Framework of automatic text summarization using reinforcement learning. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 256-265, Jeju Island, Korea.
Answers unite! unsupervised metrics for reinforced summarization models. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, 10.18653/v1/D19-1320Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsThomas Scialom, Sylvain Lamprier, Benjamin Pi- wowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summa- rization models. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3244-3254, Hong Kong, China. As- sociation for Computational Linguistics.
A graphtheoretic summary evaluation for ROUGE. Elaheh Shafieibavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen, 10.18653/v1/D18-1085Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsElaheh ShafieiBavani, Mohammad Ebrahimi, Ray- mond Wong, and Fang Chen. 2018a. A graph- theoretic summary evaluation for ROUGE. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 762- 767, Brussels, Belgium. Association for Computa- tional Linguistics.
Summarization evaluation in the absence of human model summaries using the compositionality of word embeddings. Elaheh Shafieibavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsElaheh ShafieiBavani, Mohammad Ebrahimi, Ray- mond Wong, and Fang Chen. 2018b. Summariza- tion evaluation in the absence of human model sum- maries using the compositionality of word embed- dings. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 905- 914, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Crowdsourcing lightweight pyramids for manual summary evaluation. Ori Shapira, David Gabay, Yang Gao, Hadar Ronen, Ramakanth Pasunuru, Mohit Bansal, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USA1Yael Amsterdamer, and Ido DaganOri Shapira, David Gabay, Yang Gao, Hadar Ro- nen, Ramakanth Pasunuru, Mohit Bansal, Yael Am- sterdamer, and Ido Dagan. 2019. Crowdsourcing lightweight pyramids for manual summary evalua- tion. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 682-687.
The feasibility of embedding based automatic evaluation for single document summarization. Simeng Sun, Ani Nenkova, 10.18653/v1/D19-1116Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsSimeng Sun and Ani Nenkova. 2019. The feasibility of embedding based automatic evaluation for sin- gle document summarization. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1216-1221, Hong Kong, China. Association for Computational Linguistics.
PEAK: pyramid evaluation via automated knowledge extraction. Qian Yang, Rebecca J Passonneau, Gerard De Melo, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. the Thirtieth AAAI Conference on Artificial IntelligencePhoenix, Arizona, USAQian Yang, Rebecca J. Passonneau, and Gerard de Melo. 2016. PEAK: pyramid evaluation via au- tomated knowledge extraction. In Proceedings of the Thirtieth AAAI Conference on Artificial Intel- ligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2673-2680.
Extractive summarization by maximizing semantic volume. Dani Yogatama, Fei Liu, Noah A Smith, 10.18653/v1/D15-1228Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsDani Yogatama, Fei Liu, and Noah A. Smith. 2015. Extractive summarization by maximizing semantic volume. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1961-1966, Lisbon, Portugal. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, Yoav Artzi, arXiv:1904.09675BERTScore: Evaluating Text Generation with BERT. arXiv eprints. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating Text Generation with BERT. arXiv e- prints, page arXiv:1904.09675.
MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M Meyer, Steffen Eger, 10.18653/v1/D19-1053Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsWei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. Association for Computational Lin- guistics.
Sentence centrality revisited for unsupervised summarization. Hao Zheng, Mirella Lapata, 10.18653/v1/P19-1628Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsHao Zheng and Mirella Lapata. 2019. Sentence cen- trality revisited for unsupervised summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6236-6247, Florence, Italy. Association for Compu- tational Linguistics.
Estimating summary quality with pairwise preferences. Markus Zopf, 10.18653/v1/N18-1152Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Markus Zopf. 2018. Estimating summary quality with pairwise preferences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1687-1696, New Orleans, Louisiana. Associ- ation for Computational Linguistics.
| [
"https://github.com/yg211/"
] |
[
"Gromov-Wasserstein Alignment of Word Embedding Spaces",
"Gromov-Wasserstein Alignment of Word Embedding Spaces"
] | [
"David Alvarez-Melis ",
"MITTommi S Jaakkola Csail "
] | [] | [
"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing"
] | Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-the-art methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the Gromov-Wasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks. | 10.18653/v1/d18-1214 | [
"https://www.aclweb.org/anthology/D18-1214.pdf"
] | 52,156,206 | 1809.00013 | 3777dae81535d0d922aa51ec6831451cc1ebd0cc |
Gromov-Wasserstein Alignment of Word Embedding Spaces
Association for Computational Linguistics. Copyright 2018 Association for Computational Linguistics. October 31 - November 4, 2018.
David Alvarez-Melis
Tommi S Jaakkola
CSAIL, MIT
Gromov-Wasserstein Alignment of Word Embedding Spaces
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics, October 31 - November 4, 2018, page 1881.
Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-the-art methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the Gromov-Wasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.
Introduction
Many key linguistic tasks, within and across languages or domains, including machine translation, rely on learning cross-lingual correspondences between words or other semantic units. While the associated alignment problem could be solved with access to large amounts of parallel data, broader applicability relies on the ability to do so with largely mono-lingual data, from Part-of-Speech (POS) tagging (Zhang et al., 2016) and dependency parsing (Guo et al., 2015) to machine translation. The key subtask of bilingual lexical induction, for example, while long-standing as a problem (Fung, 1995; Rapp, 1995, 1999), has been actively pursued recently (Artetxe et al., 2016; Zhang et al., 2017a; Conneau et al., 2018).
Current methods for learning cross-domain correspondences at the word level rely on distributed representations of words, building on the observation that mono-lingual word embeddings exhibit similar geometric properties across languages (Mikolov et al., 2013). While most early work assumed some, albeit minimal, amount of parallel data (Mikolov et al., 2013; Dinu et al., 2014; Zhang et al., 2016), recently fully-unsupervised methods have been shown to perform on par with their supervised counterparts (Conneau et al., 2018; Artetxe et al., 2018). While successful, the mappings arise from multiple steps of processing, requiring either careful initial guesses or post-mapping refinements, including mitigating the effect of frequent words on neighborhoods. The associated adversarial training schemes can also be challenging to tune properly (Artetxe et al., 2018).
In this paper, we propose a direct optimization approach to solving correspondences based on recent generalizations of optimal transport (OT). OT is a general mathematical toolbox used to evaluate correspondence-based distances and establish mappings between probability distributions, including discrete distributions such as point-sets. However, the nature of mono-lingual word embeddings renders the classic formulation of OT inapplicable to our setting. Indeed, word embeddings are estimated primarily in a relational manner to the extent that the algorithms are naturally interpreted as metric recovery methods (Hashimoto et al., 2016). In such settings, previous work has sought to bypass this lack of registration by jointly optimizing over a matching and an orthogonal mapping (Rangarajan et al., 1997;Zhang et al., 2017b). Due to the focus on distances rather than points, we instead adopt a relational OT formulation based on the Gromov-Wasserstein distance that measures how distances between pairs of words are mapped across languages. We show that the resulting mapping admits an efficient solution and requires little or no tuning.
In summary, we make the following contributions:
• We propose the use of the Gromov-Wasserstein distance to learn correspondences between word embedding spaces in a fully-unsupervised manner, leading to a theoretically-motivated optimization problem that can be solved efficiently, robustly, and in a single step, requiring no post-processing or heuristic adjustments.
• To scale up to large vocabularies, we derive an extension of the resulting mapping to words that were not part of the original optimization problem.
• We show that the proposed approach performs on par with state-of-the-art neural network based methods on benchmark word translation tasks, while requiring a fraction of the computational cost and/or hyperparameter tuning.
Problem Formulation
In the unsupervised bilingual lexical induction problem we consider two languages with vocabularies V x and V y , represented by word embeddings
$\mathbf{X} = \{x^{(i)}\}_{i=1}^{n}$ and $\mathbf{Y} = \{y^{(j)}\}_{j=1}^{m}$, respectively, where $x^{(i)} \in \mathcal{X} \subset \mathbb{R}^{d_x}$ corresponds to $w_i^x \in V_x$ and $y^{(j)} \in \mathcal{Y} \subset \mathbb{R}^{d_y}$ to $w_j^y \in V_y$.
For simplicity, we let $m = n$ and $d_x = d_y$, although our methods carry over to the general case with little or no modification. Our goal is to learn an alignment between these two sets of words without any parallel data, i.e., we learn to relate $x^{(i)} \leftrightarrow y^{(j)}$ with the implication that $w_i^x$ translates to $w_j^y$. As background, we begin by discussing the problem of learning an explicit map between embeddings in the supervised scenario. The associated training procedure will later be used for extending unsupervised alignments (Section 3.2).
Supervised Maps: Procrustes
In the supervised setting, we learn a map $T: \mathcal{X} \rightarrow \mathcal{Y}$ such that $T(x^{(i)}) \approx y^{(j)}$ whenever $w_j^y$ is a translation of $w_i^x$. Let $\mathbf{X}$ and $\mathbf{Y}$ be the matrices whose columns are the vectors $x^{(i)}$ and $y^{(j)}$, respectively. Then we can find $T$ by solving
$$\min_{T \in \mathcal{F}} \|\mathbf{X} - T(\mathbf{Y})\|_F^2 \qquad (1)$$
where $\|\cdot\|_F$ is the Frobenius norm $\|\mathbf{A}\|_F = \sqrt{\sum_{i,j} |a_{ij}|^2}$. Naturally, both the difficulty of finding $T$ and the quality of the resulting alignment depend on the choice of space $\mathcal{F}$. A classic approach constrains $T$ to the orthonormal matrices, i.e., rotations and reflections, resulting in the orthogonal Procrustes problem
$$\min_{\mathbf{P} \in \mathcal{O}(n)} \|\mathbf{X} - \mathbf{P}\mathbf{Y}\|_F^2 \qquad (2)$$
where $\mathcal{O}(n) = \{\mathbf{P} \in \mathbb{R}^{n \times n} \mid \mathbf{P}^\top\mathbf{P} = \mathbf{I}\}$.
One key advantage of this formulation is that it has a closed-form solution in terms of a singular value decomposition (SVD), whereas for most other choices of constraint set $\mathcal{F}$ it does not. Given the SVD $\mathbf{U}\Sigma\mathbf{V}^\top$ of $\mathbf{X}\mathbf{Y}^\top$, the solution to problem (2) is $\mathbf{P}^* = \mathbf{U}\mathbf{V}^\top$ (Schönemann, 1966). Besides the obvious computational advantage, constraining the mapping between spaces to be orthonormal is justified in the context of word embedding alignment because orthogonal maps preserve angles (and thus distances), which is often the only information used by downstream tasks (e.g., nearest neighbor search) that rely on word embeddings. Smith et al. (2017) further show that orthogonality is required for self-consistency of linear transformations between vector spaces.
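The closed-form solution above amounts to a single SVD. The following NumPy sketch is our own illustration (not code from this paper) and assumes the columns of X and Y are already paired:

import numpy as np

def procrustes(X, Y):
    # Solve min_{P orthogonal} ||X - P Y||_F for paired d x n matrices X, Y (problem (2)).
    U, _, Vt = np.linalg.svd(X @ Y.T)   # SVD of the cross-covariance X Y^T
    return U @ Vt                       # optimal orthogonal map P* = U V^T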
Clearly, the Procrustes approach only solves the supervised version of the problem as it requires a known correspondence between the columns of X and Y. Steps beyond this constraint include using small amounts of parallel data (Zhang et al., 2016) or an unsupervised technique as the initial step to generate pseudo-parallel data (Conneau et al., 2018) before solving for P.
Unsupervised Maps: Optimal Transport
Optimal transport formalizes the problem of finding a minimum cost mapping between two point sets, viewed as discrete distributions. Specifically, we assume two empirical distributions over embeddings, e.g.,
$$\mu = \sum_{i=1}^{n} p_i\, \delta_{x^{(i)}}, \qquad \nu = \sum_{j=1}^{m} q_j\, \delta_{y^{(j)}} \qquad (3)$$
where p and q are vectors of probability weights associated with each point set. In our case, we usually consider uniform weights, e.g., p i = 1/n and q j = 1/m, although if additional information were provided (such as in the form of word frequencies), those could be naturally incorporated via p and q (see discussion at the end of Section 3). We find a transportation map T realizing
$$\inf_{T} \left\{ \int_{\mathcal{X}} c(x, T(x))\, d\mu(x) \;\Big|\; T_{\#}\mu = \nu \right\}, \qquad (4)$$
where the cost $c(x, T(x))$ is typically just $\|x - T(x)\|$ and $T_{\#}\mu = \nu$ implies that the source points must exactly map to the targets. However, such a map need not exist in general, and we instead follow the relaxed Kantorovich formulation. In this case, the set of transportation plans is a polytope:
$$\Pi(p, q) = \{\Gamma \in \mathbb{R}^{n \times m}_{+} \mid \Gamma \mathbf{1}_m = p,\; \Gamma^\top \mathbf{1}_n = q\}.$$
The cost function is given as a matrix $C \in \mathbb{R}^{n \times m}$, e.g., $C_{ij} = \|x^{(i)} - y^{(j)}\|$. The total cost incurred by $\Gamma$ is $\langle \Gamma, C \rangle := \sum_{ij} \Gamma_{ij} C_{ij}$. Thus, the discrete optimal transport (DOT) problem consists of finding a plan $\Gamma$ that solves
$$\min_{\Gamma \in \Pi(p,q)} \langle \Gamma, C \rangle. \qquad (5)$$
Problem (5) is a linear program, and thus can be solved exactly in $O(n^3 \log n)$ with interior point methods. However, regularizing the objective leads to more efficient optimization and often better empirical results. The most common such regularization, popularized by Cuturi (2013), involves adding an entropy penalization:
$$\min_{\Gamma \in \Pi(p,q)} \langle \Gamma, C \rangle - \lambda H(\Gamma). \qquad (6)$$
The solution of this strictly convex optimization problem has the form $\Gamma^* = \mathrm{diag}(a)\, \mathbf{K}\, \mathrm{diag}(b)$, with $\mathbf{K} = e^{-C/\lambda}$ (element-wise), and can be obtained efficiently via the Sinkhorn-Knopp algorithm, a matrix-scaling procedure which iteratively computes
$$a \leftarrow p \oslash \mathbf{K}b \quad \text{and} \quad b \leftarrow q \oslash \mathbf{K}^\top a, \qquad (7)$$
where $\oslash$ denotes entry-wise division. The derivation of these updates is immediate from the form of $\Gamma^*$ above, combined with the marginal constraints $\Gamma \mathbf{1}_m = p$ and $\Gamma^\top \mathbf{1}_n = q$ (Peyré and Cuturi, 2018).

Although simple, efficient, and theoretically motivated, a direct application of discrete OT to unsupervised word translation is not appropriate. One reason is that the mono-lingual embeddings are estimated in a relative manner, leaving, e.g., an overall rotation unspecified. Such degrees of freedom can dramatically change the entries of the cost matrix $C_{ij} = \|x^{(i)} - y^{(j)}\|$ and the resulting transport map. One possible solution is to simultaneously learn an optimal coupling and an orthogonal transformation (Zhang et al., 2017b). The transport problem is then solved iteratively, using
$$C_{ij} = \|x^{(i)} - \mathbf{P}y^{(j)}\|,$$
where P is in turn chosen to minimize the transport cost (via Procrustes). While promising, the resulting iterative approach is sensitive to initialization, perhaps explaining why Zhang et al. (2017b) used an adversarially learned mapping as the initial step. The computational cost can also be prohibitive (Artetxe et al., 2018) though could be remedied with additional development.
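For completeness, the entropy-regularized problem (6), which also serves as the inner subroutine of the relational approach introduced in the next section, can be solved in a few lines of code. The following NumPy sketch is our own illustration rather than code released with this work:

import numpy as np

def sinkhorn(C, p, q, lam, n_iters=200):
    # Entropy-regularized OT (Eq. 6) solved with Sinkhorn-Knopp updates (Eq. 7).
    K = np.exp(-C / lam)                  # element-wise kernel K = exp(-C / lambda)
    a, b = np.ones_like(p), np.ones_like(q)
    for _ in range(n_iters):
        a = p / (K @ b)                   # a <- p ./ (K b)
        b = q / (K.T @ a)                 # b <- q ./ (K^T a)
    return a[:, None] * K * b[None, :]    # optimal plan Gamma* = diag(a) K diag(b)

Very small values of lam may underflow in this naive form; a log-domain implementation is preferable in that regime.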
We adopt a theoretically well-founded generalization of optimal transport for pairs of points (their distances), thus in line with how the embeddings are estimated in the first place. We explain the approach in detail in the next Section.
Transporting across unaligned spaces
In this section we introduce the Gromov-Wasserstein distance, describe an optimization algorithm for it, and discuss how to extend the approach to out-of-sample vectors.
The Gromov-Wasserstein Distance
The classic optimal transport requires a distance between vectors across the two domains. Such a metric may not be available, for example, when the sample sets to be matched do not belong to the same metric space (e.g., different dimension). The Gromov-Wasserstein distance (Mémoli, 2011) generalizes optimal transport by comparing the metric spaces directly instead of samples across the spaces. In other words, this framework operates on distances between pairs of points calculated within each domain and measures how these distances compare to those in the other domain. Thus, it requires a weaker but easy to define notion of distance between distances, and operates on pairs of points, turning the problem from a linear to a quadratic one.
Formally, in its discrete version, this framework considers two measure spaces expressed in terms of within-domain similarity matrices (C, p) and (C , q) and a loss function defined between similarity pairs:
$L: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, where $L(C_{ik}, C'_{jl})$ measures the discrepancy between the distances $d(x^{(i)}, x^{(k)})$ and $d'(y^{(j)}, y^{(l)})$. Typical choices for $L$ are $L(a, b) = \frac{1}{2}(a - b)^2$ or $L(a, b) = \mathrm{KL}(a \,\|\, b)$. In this framework, $L(C_{ik}, C'_{jl})$ can also be understood as the cost of "matching" $i$ to $j$ and $k$ to $l$.
All the relevant values of $L(\cdot, \cdot)$ can be put in a fourth-order tensor $\mathbf{L} \in \mathbb{R}^{N_1 \times N_1 \times N_2 \times N_2}$, where $\mathbf{L}_{ijkl} = L(C_{ik}, C'_{jl})$. As before, we seek a coupling $\Gamma$ specifying how much mass to transfer between each pair of points from the two spaces. The Gromov-Wasserstein problem is then defined as solving
$$GW(C, C', p, q) = \min_{\Gamma \in \Pi(p,q)} \sum_{i,j,k,l} \mathbf{L}_{ijkl}\, \Gamma_{ij}\, \Gamma_{kl}. \qquad (8)$$
Compared to problem (5), this version is substantially harder since the objective is now not only non-linear, but non-convex too.¹ In addition, it requires operating on a fourth-order tensor, which would be prohibitive in most settings. Surprisingly, this problem can be optimized efficiently with first-order methods, whereby each iteration involves solving a traditional optimal transport problem (Peyré et al., 2016). Furthermore, for suitable choices of the loss function $L$, Peyré et al. (2016) show that instead of the $O(N_1^2 N_2^2)$ complexity implied by a naive fourth-order tensor product, this computation reduces to $O(N_1^2 N_2 + N_1 N_2^2)$ cost. Their approach consists of solving (8) by projected gradient descent, which yields iterations that involve projecting onto $\Pi(p, q)$ a pseudo-cost matrix of the form
$$\hat{C}_{\Gamma}(C, C', \Gamma) = C_{xy} - h_1(C)\, \Gamma\, h_2(C')^\top, \qquad (9)$$
where
$$C_{xy} = f_1(C)\, p\, \mathbf{1}_m^\top + \mathbf{1}_n\, q^\top f_2(C')^\top$$
and $f_1, f_2, h_1, h_2$ are functions that depend on the loss $L$. We provide an explicit algorithm for the case $L = L_2$ at the end of this section.

Figure 1: The Gromov-Wasserstein distance is well suited for the task of cross-lingual alignment because it relies on relational rather than positional similarities to infer correspondences across domains. Computing it requires two intra-domain similarity (or equivalently cost) matrices (left & center), and it produces an optimal coupling of source and target points with minimal discrepancy cost (right).
1 In fact, the discrete (Monge-type) formulation of the problem is essentially an instance of the well-known (and NP-hard) quadratic assignment problem (QAP).
Once we have solved (8), the optimal transport coupling $\Gamma^*$ provides an explicit (soft) matching between source and target samples, which for the problem of interest can be interpreted as a probabilistic translation: for every pair of words $(w^{(i)}_{\mathrm{src}}, w^{(j)}_{\mathrm{trg}})$, $\Gamma^*_{ij}$ provides a likelihood that these two words are translations of each other. This itself is enough to translate, and we show in the experiments section that $\Gamma^*$ by itself, without any further post-processing, provides high-quality translations. This stands in sharp contrast to mapping-based methods, which rely on nearest-neighbor computation to infer translations and thus become prone to hub-word effects, which have to be mitigated with heuristic post-processing techniques such as Inverted Softmax (Smith et al., 2017) and Cross-Domain Similarity Scaling (CSLS) (Conneau et al., 2018). The transportation coupling $\Gamma$, being normalized by construction, requires no such artifacts.
The Gromov-Wasserstein problem (8) possesses various desirable theoretical properties, including the fact that, for a suitable choice of the loss function, it is indeed a distance (Theorem 3.1). Solving problem (8) therefore yields a fascinating accompanying notion: the Gromov-Wasserstein distance between languages, a measure of semantic discrepancy purely based on the relational characterization of their word embeddings. Owing to Theorem 3.1, such values can be interpreted as distances, so that, e.g., the triangle inequality holds among them. In Section 4.4 we compare various languages in terms of their GW distance.
Finally, we note that whenever word frequency counts are available, they can be used for p and q. If they are not, but words are sorted according to occurrence (as they often are in popular off-the-shelf embedding formats), one can estimate rank-probabilities such as Zipf power laws, which are known to accurately model multiple languages (Piantadosi, 2014). In order to provide a fair comparison to previous work, throughout our experiments we use uniform distributions so as to avoid providing our method with additional information not available to others.
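If rank-based weights were desired instead, a Zipf-style estimate can be computed from vocabulary ranks alone. The snippet below is a small illustration of this (the exponent s is a free parameter, with s = 1 the classic Zipf choice):

import numpy as np

def zipf_weights(n, s=1.0):
    # Probability weights p_i proportional to 1 / rank^s for a rank-sorted vocabulary of size n.
    w = 1.0 / np.arange(1, n + 1) ** s
    return w / w.sum()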
Scaling Up
While the pure Gromov-Wasserstein approach leads to high quality solutions, it is best suited to small-to-moderate vocabulary sizes,² since its optimization becomes prohibitive for very large problems. For such settings, we propose a two-step approach in which we first match a subset of the vocabulary via the optimal coupling, after which we learn an orthogonal mapping through a modified Procrustes problem. Formally, suppose we solve problem (8) for reduced matrices $\mathbf{X}_{1:k}$ and $\mathbf{Y}_{1:k}$ consisting of the first $k$ columns of $\mathbf{X}$ and $\mathbf{Y}$, respectively, and let $\Gamma^*$ be the optimal coupling. We seek an orthogonal matrix that best recovers the barycentric mapping implied by $\Gamma^*$. Namely, we seek to find $\mathbf{P}$ which solves:
$$\min_{\mathbf{P} \in \mathcal{O}(n)} \|\mathbf{X}\Gamma^* - \mathbf{P}\mathbf{Y}\|_2^2 \qquad (10)$$
Just as for problem (2), it is easy to show that this Procrustes-type problem has a closed-form solution in terms of a singular value decomposition. Namely, the solution to (10) is $\mathbf{P}^* = \mathbf{U}\mathbf{V}^\top$, where $\mathbf{U}\Sigma\mathbf{V}^\top = \mathbf{X}_{1:k}\Gamma^*\mathbf{Y}_{1:k}^\top$. After obtaining this projection, we can immediately map the rest of the embeddings via $\hat{y}^{(j)} = \mathbf{P}^* y^{(j)}$.
We point out that this two-step procedure resembles that of Conneau et al. (2018). Both ultimately produce an orthogonal mapping obtained by solving a Procrustes problem, but they differ in the way they produce pseudo-matches to allow for such a second step: while their approach relies on an adversarially-learned transformation, we use an explicit optimization problem.

Algorithm 1 Gromov-Wasserstein Computation for Word Embedding Alignment
Input: Source and target embeddings X, Y. Regularization λ. Probability vectors p, q.
// Compute intra-language similarities
C_s ← cos(X, X), C_t ← cos(Y, Y)
C_st ← C_s² p 1_m^⊤ + 1_n q^⊤ (C_t²)^⊤
while not converged do
    // Compute pseudo-cost matrix (Eq. (9))
    Ĉ_Γ ← C_st − 2 C_s Γ C_t
    // Sinkhorn iterations (Eq. (7))
    a ← 1, K ← exp{−Ĉ_Γ / λ}
    while not converged do
        a ← p ⊘ Kb, b ← q ⊘ K^⊤a
    end while
    Γ ← diag(a) K diag(b)
end while
// Optional step: Learn explicit projection
U, Σ, V ← SVD(XΓY^⊤)
P ← UV^⊤
return Γ, P
We end this section by discussing parameter and configuration choices. To leverage the fast algorithm of Peyré et al. (2016), we always use the $L_2$ distance as the loss function $L$ between cost matrices. On the other hand, we observed throughout our experiments that the choice of cosine distance as the metric in both spaces consistently leads to better results, which agrees with common wisdom on computing distances between word embeddings. This leaves us with a single hyperparameter to control: the entropy regularization term $\lambda$. By applying any sensible normalization to the cost matrices (e.g., dividing by the mean or median value), we are able to almost entirely eliminate sensitivity to this parameter. In practice, we use a simple scheme in all experiments: we first try the same fixed value ($\lambda = 5 \times 10^{-5}$), and if the regularization proves too small (by leading to floating point errors), we instead use $\lambda = 1 \times 10^{-4}$. We never had to go beyond these two values in all our experiments. We emphasize that at no point do we use train (let alone test) supervision available with many datasets; model selection is done solely in terms of the unsupervised objective. Pseudocode for the full method (with $L = L_2$ and cosine similarity) is shown as Algorithm 1.
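For concreteness, a compact NumPy rendering of Algorithm 1 could look as follows. This is our own sketch: it reuses the sinkhorn routine shown earlier, initializes the coupling with the product distribution (a common choice, not specified by Algorithm 1), normalizes the pseudo-cost by its mean as discussed above, and omits convergence checks:

import numpy as np

def cosine_sim(X):
    # Pairwise cosine similarities between the rows of X.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def gromov_wasserstein_align(X, Y, p, q, lam=0.05, n_outer=30):
    # Rows of X (n x d) and Y (m x d) are word vectors; p, q are probability weights.
    Cs, Ct = cosine_sim(X), cosine_sim(Y)          # intra-language similarity matrices
    n, m = len(p), len(q)
    Cst = np.outer((Cs ** 2) @ p, np.ones(m)) + np.outer(np.ones(n), (Ct ** 2) @ q)
    G = np.outer(p, q)                             # initial coupling
    for _ in range(n_outer):
        C_hat = Cst - 2.0 * Cs @ G @ Ct            # pseudo-cost matrix, Eq. (9) with L = L2
        C_hat = C_hat / np.abs(C_hat).mean()       # normalize the cost scale
        G = sinkhorn(C_hat, p, q, lam)             # project onto Pi(p, q) via entropic OT
    U, _, Vt = np.linalg.svd(X.T @ G @ Y)          # optional explicit projection (Section 3.2)
    P = U @ Vt                                     # P maps a target vector y into the source space
    return G, P

The much smaller regularization values reported above (e.g., 5 x 10^-5) require a numerically stabilized, log-domain Sinkhorn, which this plain sketch does not include.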
Experiments
Through this experimental evaluation we seek to: (i) understand the optimization dynamics of the proposed approach (§4.2), (ii) evaluate its performance on benchmark cross-lingual word embedding tasks (§4.3), and (iii) qualitatively investigate the notion of distance-between-languages it computes (§4.4). Rather than focusing solely on prediction accuracy, we seek to demonstrate that the proposed approach offers a fast, principled, and robust alternative to state-of-the-art multi-step methods, delivering comparable performance.
Evaluation Tasks and Methods
Datasets We evaluate our method on two standard benchmark tasks for cross-lingual embeddings. First, we consider the dataset of Conneau et al. (2018), which consists of word embeddings trained with FASTTEXT (Bojanowski et al., 2017) on Wikipedia and parallel dictionaries for 110 language pairs. Here, we focus on the language pairs for which they report results: English (EN) from/to Spanish (ES), French (FR), German (DE), Russian (RU) and simplified Chinese (ZH). We do not report results on Esperanto (EO) as dictionaries for that language were not provided with the original dataset release. For our second set of experiments, we consider the substantially harder³ dataset of Dinu et al. (2014), which has been extensively compared against in previous work. It consists of embeddings and dictionaries for four pairs of languages: EN from/to ES, IT, DE, and FI (Finnish). ³ We discuss the difference in hardness of these two benchmark datasets in Section 4.3.
Methods To see how our fully-unsupervised method compares with methods that require (some) cross-lingual supervision, we follow Conneau et al. (2018) and consider a simple but strong baseline consisting of solving a Procrustes problem directly using the available cross-lingual embedding pairs. We refer to this method simply as PROCRUSTES. In addition, we compare against the fully-unsupervised methods of Zhang et al. (2017a), Artetxe et al. (2018) and Conneau et al. (2018).⁴ As proposed by the latter, we use CSLS whenever nearest neighbor search is required, which has been shown to improve upon naive nearest-neighbor retrieval in multiple works.
Training Dynamics of G-W
As previously mentioned, our approach involves only two optimization choices, one of which is required only for very large settings. When running Algorithm 1 for the full set of embeddings is infeasible (due to memory limitations), one must decide what fraction of the embeddings to use during optimization. In our experiments, we use the largest possible size allowed by memory constraints, which was found to be K = 20,000 for the personal computer we used.
The other, more interesting, optimization choice involves the entropy regularization parameter $\lambda$ used within the Sinkhorn iterations. Large regularization values lead to a denser optimal coupling $\Gamma^*$, while less regularization leads to sparser solutions,⁵ at the cost of a harder (more non-convex) optimization problem.

Table 1: Performance (P@1) of unsupervised and minimally-supervised methods on the dataset of Conneau et al. (2018). The time column shows the average runtime in minutes of an instance (i.e., one language pair) of the method in this task on the same quad-core CPU machine.
In Figure 2 we show the training dynamics of our method when learning correspondences between word embeddings from the dataset of Conneau et al. (2018). As expected, larger values of λ lead to smoother improvements with faster runtime-per-iteration, at a price of some drop in performance. In addition, we found that computing GW distances between closer languages (such as EN and FR) leads to faster convergence than for more distant ones (such as EN and RU, in Fig. 2c).
Worth emphasizing are three desirable optimization properties that set the Gromov-Wasserstein distance apart from other unsupervised alignment approaches, particularly adversarial-training ones: (i) the objective decreases monotonically; (ii) its value closely follows the true metric of interest (translation accuracy, which naturally is not available during training); and (iii) there is no risk of degradation due to overtraining, as is the case for adversarial-based methods trained with stochastic gradient descent (Conneau et al., 2018).
Benchmark Results
We report the results on the dataset of Conneau et al. (2018) in Table 1. The strikingly high performance of all methods on this task belies the hardness of the general problem of unsupervised cross-lingual alignment. Indeed, as pointed out by Artetxe et al. (2018), the FASTTEXT embeddings provided in this task are trained on very large and highly comparable (across languages) corpora (Wikipedia), and the task focuses on closely related pairs of languages. Nevertheless, we carry out experiments here to have a broad evaluation of our approach in both easier and harder settings.
Next, we present results on the more challenging dataset of Dinu et al. (2014) in Table 2. Here, we rely on the results reported by Artetxe et al. (2018), since at the time of writing the present work their implementation was not yet available. Part of what makes this dataset hard is the wide discrepancy between word distances across languages, which translates into uneven distance matrices (Figure 3) and in turn leads to poor results for G-W. To account for this, previous work has relied on an initial whitening step on the embeddings. In our case, it suffices to normalize the pairwise similarity matrices to the same range to obtain substantially better results. While we have observed that a careful choice of the regularization parameter $\lambda$ can obviate the need for this step, we opt for the normalization approach since it allows us to optimize without having to tune $\lambda$.
We compare our method (with and without normalization) against alternative approaches in Table 2. Note that we report the runtimes of Artetxe et al. (2018) as-is, which are obtained by running on a Titan XP GPU, while our runtimes are, as before, obtained purely by CPU computation.
Qualitative Results
As mentioned earlier, Theorem 3.1 implies that the optimal value of the Gromov-Wasserstein problem can be legitimately interpreted as a distance between languages, or more explicitly, between their word embedding spaces. This distributional notion of distance is completely determined by the pairwise geometric relations between these vectors. In Figure 4 we show the values $GW(C_s, C_t, p, q)$ computed on the FASTTEXT word embeddings of Conneau et al. (2018) corresponding to the most frequent 2000 words in each language. Overall, these distances conform to our intuitions: the cluster of romance languages exhibits some of the shortest distances, while Chinese (ZH) has the overall largest discrepancy with all other languages. But somewhat surprisingly, Russian is relatively close to the romance languages in this metric. We conjecture that this could be due to Russian's rich morphology (a trait shared by romance languages but not English). Furthermore, both Russian and Spanish are pro-drop languages (Haspelmath, 2001) and share syntactic phenomena, such as dative subjects (Moore and Perlmutter, 2000; Melis et al., 2013) and differential object marking (Bossong, 1991), which might explain why ES is closest to RU overall.
On the other hand, English appears remarkably isolated from all languages, equally distant from its germanic (DE) and romance (FR) cousins. Indeed, other aspects of the data (such as corpus size) might be underlying these observations.
Related Work
Study of the problem of bilingual lexical induction goes back to Rapp (1995) and Fung (1995). While the literature on this topic is extensive, we focus here on recent fully-unsupervised and minimally-supervised approaches, and refer the reader to one of the various existing surveys for a broader panorama (Upadhyay et al., 2016; Ruder et al., 2017).
Methods with coarse or limited parallel data. Most of these fall into one of two categories: methods that learn a mapping from one space to the other, e.g., via a least-squares objective (Mikolov et al., 2013) or via orthogonal transformations (Zhang et al., 2016; Smith et al., 2017; Artetxe et al., 2016), and methods that find a common space onto which to project both sets of embeddings (Faruqui and Dyer, 2014; Lu et al., 2015).
Fully unsupervised methods. Conneau et al. (2018) and Zhang et al. (2017a) rely on adversarial training to produce an initial alignment between the spaces. The former use pseudo-matches derived from this initial alignment to solve a Procrustes (2) alignment problem. Our Gromov-Wasserstein framework can be thought of as providing an alternative to these adversarial training steps, albeit with a concise optimization formulation and producing explicit matches (via the optimal coupling) instead of depending on nearest neighbor search, as the adversarially-learned mappings do. Zhang et al. (2017b) also leverage optimal transport distances for the cross-lingual embedding task. However, to address the issue of non-alignment of embedding spaces, their approach follows the joint optimization of the transportation and Procrustes problems as outlined in Section 2.2. This formulation makes an explicit modeling assumption (invariance to unitary transformations), and requires repeated solution of Procrustes problems during alternating minimization. Gromov-Wasserstein, on the other hand, is more flexible and makes no such assumption, since it directly deals with similarities rather than vectors. In the case where it is required, such an orthogonal mapping can be obtained by solving a single Procrustes problem, as discussed in Section 3.2.
Discussion and future work
In this work we provided a direct optimization approach to cross-lingual word alignment. The Gromov-Wasserstein distance is well-suited for this task as it performs a relational comparison of word vectors across languages rather than comparing word vectors directly. The resulting objective is concise and can be optimized efficiently. The experimental results show that the resulting alignment framework is fast, stable and robust, yielding near state-of-the-art performance at a computational cost orders of magnitude lower than that of alternative fully unsupervised methods.
While directly solving Gromov-Wasserstein problems of reasonable size is feasible, scaling up to large vocabularies made it necessary to learn an explicit mapping via Procrustes. GPU computations or stochastic optimization could help avoid this secondary step.
Theorem 3.1 (Mémoli, 2011). With the choice $L = L_2$, $GW^{1/2}$ is a distance on the space of metric measure spaces.
Figure 2: Training dynamics for the Gromov-Wasserstein alignment problem. The algorithm provably makes progress in each iteration, and the objective (red dashed line) closely follows the metric of interest (translation accuracy, not available during training). More related languages (e.g., EN→FR in 2a, 2b) lead to faster optimization, while more distant pairs yield slower learning curves (EN→RU, 2c).
Figure 3: Top: Word embeddings trained on non-comparable corpora can lead to uneven distributions of pairwise distances, as shown here for the EN-FI pair of Dinu et al. (2014). Bottom: Normalizing the cost matrices leads to better optimization and improved performance.
Figure 4: Pairwise language Gromov-Wasserstein distances, obtained as the minimal transportation cost (8) between word embedding similarity matrices. Values scaled by 10² for easy visualization.
Table 2: Results of unsupervised methods on the dataset of Dinu et al. (2014), with runtimes in minutes. Those marked with † are from Artetxe et al. (2018). Note that their runtimes correspond to GPU computation, while ours are CPU-minutes, so the numbers are not directly comparable.
² As shown in the experimental section, we are able to run problems of size on the order of $|V_s| \approx 10^5 \approx |V_t|$ on a single machine without relying on GPU computation.
⁴ Despite its relevance, we do not include the OT-based method of Zhang et al. (2017b) in the comparison because their implementation required use of proprietary software.
⁵ In the limit λ → 0, when n = m, the solution converges to a permutation matrix, which gives a hard-matching solution to the transportation problem (Peyré and Cuturi, 2018).
AcknowledgmentsThe authors would like to thank the anonymous reviewers for helpful feedback. The work was partially supported by MIT-IBM grant "Adversarial learning of multimodal and structured data", and Graduate Fellowships from Hewlett Packard and CONACYT.
Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. Mikel Artetxe, Gorka Labaka, Eneko Agirre, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingMikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294.
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. Mikel Artetxe, Gorka Labaka, Eneko Agirre, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics1Long Papers)Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsuper- vised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789--798. Association for Computational Linguistics.
Enriching Word Vectors with Subword Information. Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, Transactions of the Association for Computational Linguistics. 5Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.
Georg Bossong, Differential object marking in Romance and beyond. New analyses in Romance linguistics. Georg Bossong. 1991. Differential object marking in Romance and beyond. New analyses in Romance linguistics, pages 143-170.
Word Translation Without Parallel Data. Alexis Conneau, Guillaume Lample, Marc'aurelio Ranzato, Ludovic Denoyer, Hervé Jégou, International Conference on Learning Representations. Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word Translation Without Parallel Data. In Interna- tional Conference on Learning Representations.
Sinkhorn distances: Lightspeed computation of optimal transport. Marco Cuturi, Advances in Neural Information Processing Systems. Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292--2300.
Improving zero-shot learning by mitigating the hubness problem. Georgiana Dinu, Angeliki Lazaridou, Marco Baroni, arXiv:1412.6568arXiv preprintGeorgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568.
Improving vector space word representations using multilingual correlation. Manaal Faruqui, Chris Dyer, Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. the 14th Conference of the European Chapter of the Association for Computational LinguisticsManaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462-471.
Compiling bilingual lexicon entries from a non-parallel English-Chinese corpus. Pascale Fung, Third Workshop on Very Large Corpora. Pascale Fung. 1995. Compiling bilingual lexicon en- tries from a non-parallel English-Chinese corpus. In Third Workshop on Very Large Corpora.
Cross-lingual dependency parsing based on distributed representations. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1234-1244.
Word Embeddings as Metric Recovery in Semantic Spaces. B Tatsunori, David Hashimoto, Tommi S Alvarez-Melis, Jaakkola, Transactions of the Association for Computational Linguistics. 4Tatsunori B Hashimoto, David Alvarez-Melis, and Tommi S Jaakkola. 2016. Word Embeddings as Metric Recovery in Semantic Spaces. Transactions of the Association for Computational Linguistics, 4:273-286.
The European linguistic area: Standard Average European. Martin Haspelmath, Language typology and language universals: An international handbook. de Gruyter2Martin Haspelmath. 2001. The European linguistic area: Standard Average European. In Language ty- pology and language universals: An international handbook, volume 2, pages 1492-1510. de Gruyter.
Unsupervised Machine Translation Using Monolingual Corpora Only. Guillaume Lample, Ludovic Denoyer, Marc'aurelio Ranzato, International Conference on Learning Representations. Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised Machine Translation Using Monolingual Corpora Only. International Conference on Learning Representations.
Deep multilingual correlation for improved word embeddings. Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, Karen Livescu, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAng Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual cor- relation for improved word embeddings. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 250-256.
On the historical expansion of noncanonically marked 'subjects' in Spanish. The diachronic Typology of Non-Canonical Subjects. Chantal Melis, Marcela Flores, A Holvoet, Amsterdam/Philadelphia, Benjamins. Chantal Melis, Marcela Flores, and A Holvoet. 2013. On the historical expansion of non- canonically marked 'subjects' in Spanish. The di- achronic Typology of Non-Canonical Subjects, Am- sterdam/Philadelphia, Benjamins, pages 163-184.
Gromov-Wasserstein distances and the metric approach to object matching. Facundo Mémoli, Foundations of computational mathematics. 114Facundo Mémoli. 2011. Gromov-Wasserstein dis- tances and the metric approach to object match- ing. Foundations of computational mathematics, 11(4):417-487.
Exploiting Similarities among Languages for Machine Translation. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. arXiv preprint arXiv:1309.4168v1, pages 1-10.
John Moore, David M Perlmutter, What does it take to be a dative subject? Natural Language and Linguistic Theory. 18John Moore and David M. Perlmutter. 2000. What does it take to be a dative subject? Natural Lan- guage and Linguistic Theory, 18(2):373-416.
Computational Optimal Transport. Gabriel Peyré, Marco Cuturi, Technical reportGabriel Peyré and Marco Cuturi. 2018. Computational Optimal Transport. Technical report.
Gromov-Wasserstein averaging of kernel and distance matrices. Gabriel Peyré, Marco Cuturi, Justin Solomon, International Conference on Machine Learning. Gabriel Peyré, Marco Cuturi, and Justin Solomon. 2016. Gromov-Wasserstein averaging of kernel and distance matrices. In International Conference on Machine Learning, pages 2664-2672.
Zipf's word frequency law in natural language: A critical review and future directions. T Steven, Piantadosi, Psychonomic Bulletin & Review. 215Steven T Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and fu- ture directions. Psychonomic Bulletin & Review, 21(5):1112-1130.
The Softassign Procrustes Matching Algorithm. Anand Rangarajan, Haili Chui, Fred L Bookstein, Lecture Notes in Computer Science. 1230Anand Rangarajan, Haili Chui, and Fred L Book- stein. 1997. The Softassign Procrustes Matching Algorithm. Lecture Notes in Computer Science, 1230:29-42.
Identifying word translations in non-parallel texts. Reinhard Rapp, Proceedings of the 33rd annual meeting on Association for Computational Linguistics. the 33rd annual meeting on Association for Computational LinguisticsAssociation for Computational LinguisticsReinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd an- nual meeting on Association for Computational Lin- guistics, pages 320-322. Association for Computa- tional Linguistics.
Automatic identification of word translations from unrelated English and German corpora. Reinhard Rapp, Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics. the 37th annual meeting of the Association for Computational Linguistics on Computational LinguisticsAssociation for Computational LinguisticsReinhard Rapp. 1999. Automatic identification of word translations from unrelated English and Ger- man corpora. In Proceedings of the 37th annual meeting of the Association for Computational Lin- guistics on Computational Linguistics, pages 519- 526. Association for Computational Linguistics.
A survey of cross-lingual embedding models. Sebastian Ruder, Ivan Vulić, Anders Søgaard, arXiv:1706.04902arXiv preprintSebastian Ruder, Ivan Vulić, and Anders Søgaard. 2017. A survey of cross-lingual embedding models. arXiv preprint arXiv:1706.04902.
A generalized solution of the orthogonal procrustes problem. H Peter, Schönemann, Psychometrika. 311Peter H. Schönemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10.
Offline bilingual word vectors, orthogonal transformations and the inverted softmax. L Samuel, Smith, H P David, Steven Turban, Nils Y Hamblin, Hammerla, International Conference on Learning Representations. Samuel L Smith, David H P Turban, Steven Ham- blin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the in- verted softmax. International Conference on Learn- ing Representations.
Cross-lingual models of word embeddings: An empirical comparison. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, Dan Roth, arXiv:1604.00425arXiv preprintShyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. arXiv preprint arXiv:1604.00425.
Adversarial training for unsupervised bilingual lexicon induction. Meng Zhang, Yang Liu, Huanbo Luan, Maosong Sun, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational Linguistics1Long Papers)Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1959-1970.
Earth Mover's Distance Minimization for Unsupervised Bilingual Lexicon Induction. Meng Zhang, Yang Liu, Huanbo Luan, Maosong Sun, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingMeng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth Mover's Distance Minimiza- tion for Unsupervised Bilingual Lexicon Induction. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1934-1945.
Ten Pairs to Tag -Multilingual POS Tagging via Coarse Mapping between Embeddings. Yuan Zhang, David Gaddy, Regina Barzilay, Tommi Jaakkola, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsYuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten Pairs to Tag -Multilin- gual POS Tagging via Coarse Mapping between Em- beddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1307-1317, San Diego, California. Association for Computational Linguistics.
| [] |
[
"A Two-Stage Approach towards Generalization in Knowledge Base Question Answering",
"A Two-Stage Approach towards Generalization in Knowledge Base Question Answering"
] | [
"Srinivas Ravishankar \nIBM Research + UMass Amherst\n\n",
"June Thai \nIBM Research + UMass Amherst\n\n",
"Ibrahim Abdelaziz \nIBM Research + UMass Amherst\n\n",
"Nandana Mihidukulasooriya \nIBM Research + UMass Amherst\n\n",
"Tahira Naseem \nIBM Research + UMass Amherst\n\n",
"Pavan Kapanipathi \nIBM Research + UMass Amherst\n\n",
"Gaetano Rossiello \nIBM Research + UMass Amherst\n\n",
"Achille Fokoue \nIBM Research + UMass Amherst\n\n"
] | [
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n",
"IBM Research + UMass Amherst\n"
] | [] | Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this, we introduce a KBQA framework based on a 2-stage architecture that explicitly separates semantic parsing from the knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach achieves comparable or state-of-the-art performance for LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG). | null | [
"https://arxiv.org/pdf/2111.05825v2.pdf"
] | 243,938,317 | 2111.05825 | b2a7b03a3c403b7c7d01a5f52dfdbd845323ab66 |
A Two-Stage Approach towards Generalization in Knowledge Base Question Answering
Srinivas Ravishankar
IBM Research + UMass Amherst
June Thai
IBM Research + UMass Amherst
Ibrahim Abdelaziz
IBM Research + UMass Amherst
Nandana Mihidukulasooriya
IBM Research + UMass Amherst
Tahira Naseem
IBM Research + UMass Amherst
Pavan Kapanipathi
IBM Research + UMass Amherst
Gaetano Rossiello
IBM Research + UMass Amherst
Achille Fokoue
IBM Research + UMass Amherst
A Two-Stage Approach towards Generalization in Knowledge Base Question Answering
Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this, we introduce a KBQA framework based on a 2-stage architecture that explicitly separates semantic parsing from the knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach achieves comparable or state-of-the-art performance for LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG).
Introduction
Knowledge Base Question Answering (KBQA) has gained significant popularity in recent times due to its real-world applications, facilitating access to rich Knowledge Graphs (KGs) without the need for technical query syntax. Given a natural language question, a KBQA system is required to find an answer based on the facts available in the KG. For example, given the question "Who is the director of the film Titanic", a KBQA system should retrieve the entity corresponding to "James Cameron". This would be dbr:James_Cameron¹ in DBpedia (Auer et al. 2007), wd:Q42574² in Wikidata (Vrandečić and Krötzsch 2014), and fb:m.03_gd³ in Freebase (Bollacker et al. 2008).
KBQA has been evaluated on multiple different KGs such as Freebase (Bollacker et al. 2008), Wikidata (Vrandečić and Krötzsch 2014), DBpedia (Auer et al. 2007), and MetaQA (Zhang et al. 2018). Most existing heuristic-based KBQA approaches such as NSQA (Kapanipathi et al. 2020), gAnswer (Zou et al. 2014), and QAmp (Vakulenko et al. 2019) are typically tuned for a specific underlying knowledge base, making it non-trivial to generalize and adapt them to other knowledge graphs. On the other hand, WDAqua, a system with a focus on being generalizable, ignores question syntax, thereby showing reduced performance on datasets with complex multi-hop questions.
Recently, there has been a surge in end-to-end learning approaches that are not tied to specific KGs or heuristics, and hence can generalize to multiple KGs. GrailQA (Gu et al. 2021) in particular categorized different forms of generalization, such as novel relation compositionality and zero-shot generalization. They also demonstrated transfer across QA datasets, but within the same KG. On the other hand, Graft-Net (Sun et al. 2018) and EmbedKGQA (Saxena, Tripathi, and Talukdar 2020) demonstrated their ability to generalize over multiple KGs by demonstrating state-of-the-art performance on MetaQA (Wikimovies) as well as WebQSP (Freebase). The two techniques, however, are highly sensitive to the training data; failing to generalize in terms of relation compositionality within a KG. EmbedKGQA and GraftNet show significant drops (between 23-50%) in performance on relation compositions that are not seen during training. Furthermore, it is unclear how these systems transfer across KGs because of their tight-integration with KG-specific embeddings.
In this work, we present a novel generalizable KBQA approach STaG-QA (Semantic parsing for Transfer and Generalization) that works seamlessly with multiple KGs, and demonstrate transfer even across QA datasets with different underlying KGs. Our approach attempts to separate aspects of KBQA systems that are softly tied to the KG but generalizable, from the parts more strongly tied to a specific KG. Concretely, our approach has two stages: 1) The first stage is a generative model that predicts a query skeleton, which includes the query pattern, the different SPARQL operators in it, as well as partial relations based on label semantics that can be generic to most knowledge graphs. 2) The second stage converts the output of the first stage to a final query that includes entity and relations mapped to a specific KG to retrieve the final answer.
Our contributions are as follows:
• A simple SEQ2SEQ architecture for KBQA that separates aspects of the output that are generalizable across KGs, from those that are strongly tied to a specific KG.
• To the best of our knowledge, our approach is the first to evaluate on and achieve state-of-the-art or comparable performance on KBQA datasets corresponding to four different knowledge graphs, i.e., LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies).
• Our extensive experimental results show that the proposed architecture: (a) facilitates transfer with significant performance gains in low-resource settings; (b) generalizes significantly better (23-50%) to unseen relation combinations in comparison to state-of-the-art approaches.
Proposed Architecture
The KBQA task involves finding an answer for a natural language question from a given KG. Following semantic parsing techniques for KBQA (Chen et al. 2021; Kapanipathi et al. 2020; Yih et al. 2015), we attempt to solve this task by predicting the correct structured SPARQL query that can retrieve the required answer(s) from the KG, i.e., by estimating a probability distribution over possible SPARQL queries given the natural language question. In this work, we aim to design a model architecture that generalizes across different KGs such as DBpedia, Wikidata, and Freebase. To achieve this goal, we use a two-stage approach, shown in Figure 1, where we separate generic SPARQL query-sketch learning from KG-specific mapping of concepts. Specifically, the two stages are:
• Softly-tied query sketch: This is the first stage of our approach, where we intend to learn aspects of the SPARQL query generation that are generic to any knowledge graph. Specifically, we observe the following: (i) multi-hop patterns are mostly generic to question answering over KGs, and (ii) across many KGs, analogous relations have semantic or lexical overlap. Therefore, we focus on two sub-tasks in this stage: query skeleton generation and partial relation linking. We call the output of this stage a softly-tied semantic parse, because the exact output is partially dependent on the specific KG in use, but our choice of representations and architecture ensures that transfer across KGs is a natural consequence.
• KG alignment: This is the next step, where we introduce all vocabulary specific to the knowledge graph in order to generate an executable SPARQL query. To do so, we bind the softly-tied semantic parse strongly to the KG to find the answer by (i) resolving the textual relations to KG relations, (ii) introducing KG-specific entities into the SPARQL skeleton, and (iii) ranking the obtained SPARQL queries based on their groundings in the KG.
Softly-tied Query Sketch
As mentioned above, the goal is to create a representation and architecture that can generalize easily not only across examples within a dataset, but also across KGs. To accomplish this, we define 2 subtasks: (a) Skeleton Generation, and (b) Partial relation linking. As shown in Figure 1, the question is passed through a transformer-based SEQ2SEQ model which is trained to produce the SPARQL skeleton corresponding to the question text. The encoder of the SEQ2SEQ model is a bi-direction transformer, while the decoder is auto-regressive with a causal self-attention mask.
Given a question text, we tokenize it using the BERT tokenizer and add special [CLS] and [SEP] symbols at the beginning and end of the question, respectively. This tokenized input is passed through a transformer encoder, producing encoder hidden states for each token at each layer. The encoder is initialized with a pretrained BERT model (Devlin et al. 2018), which helps generalization with respect to different question syntax.
We then use a transformer decoder with a cross-attention mechanism. At each time step $i$, the decoder considers the encoder states via cross-attention and previous decoder states via self-attention. It produces a distribution over possible skeleton output tokens. The decoder output vocabulary $\mathcal{V}$ comprises entity placeholder tokens $\mathcal{V}_e$, relation placeholder tokens $\mathcal{V}_r$ and SPARQL operators $\mathcal{V}_o$; each of these is a small closed set of tokens. The output of each decoding step is a softmax over possible operators $s_i \in \mathcal{V}$. Unlike the encoder, no pre-trained model is used for the decoder, and its parameters are initialized randomly.
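A minimal PyTorch-style sketch of this encoder-decoder is shown below; it is our own illustration, with hyperparameters and the skeleton vocabulary chosen for exposition rather than taken from the paper:

import torch
import torch.nn as nn
from transformers import BertModel

SKELETON_VOCAB = ["[BOS]", "[EOS]", "SELECT", "ASK", "{", "}", ".",
                  ":ent0", ":ent1", ":prop0", ":prop1", "?x"]   # illustrative closed vocabulary

class SkeletonGenerator(nn.Module):
    def __init__(self, d_model=768, n_layers=4, n_heads=8):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")   # pretrained question encoder
        self.out_embed = nn.Embedding(len(SKELETON_VOCAB), d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)           # randomly initialized decoder
        self.proj = nn.Linear(d_model, len(SKELETON_VOCAB))

    def forward(self, input_ids, attention_mask, target_ids):
        memory = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        tgt = self.out_embed(target_ids)
        L = target_ids.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)  # causal self-attention mask
        hidden = self.decoder(tgt, memory, tgt_mask=causal)   # cross-attention to the encoded question
        return self.proj(hidden), hidden                      # token logits and decoder states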
Figure 1: Two-stage system architecture comprising: (a) on the left, softly-tied semantic parse generation, which takes an input question and returns a KG-agnostic parse; and (b) on the right, the knowledge graph integration process that eventually returns the SPARQL query.

Partial Relation Linking: For each relation placeholder in the SPARQL skeleton (:prop0, :prop1, etc.), we need to identify the appropriate relation that can replace the placeholder to produce the correct semantic representation of the query. We have noted previously that relations across KGs share lexical and semantic similarities. For example, in Table 1 the three KGs (DBpedia, Wikimovies, and Wikidata) represent the relationship "Directed by" with very similar lexical terms "Director" and "Directed by". We can thus leverage large pre-trained language models to allow generalization and transfer of such relations across KGs. In each KG, we first map the relations to their respective surface forms, using either relation labels from the KG or a semantically meaningful surface form extracted from the relation URI. These are the "textualized relations" shown in Figure 1. Table 2 shows more examples of relation labels for the three KGs. Note that this mapping can be many-to-one; for example, both dbo:language and dbp:language map to the same relation label "language". Our goal is to identify which relation surface form best matches each relation placeholder in the skeleton. We thus train the SEQ2SEQ decoder and relation encoder to project into the same space. Concretely, the decoder hidden state corresponding to each relation placeholder is optimized to be closest to the encoded representation of the correct relation, using a cross-entropy loss. For example, in Figure 1, the decoder state for :prop0 should have maximum inner product with the encoded representation of the relation surface form "Directed by", compared to the encoded representations of all other relations. Our relation encoder is a transformer model whose parameters are initialized with a pretrained BERT model. Given that BERT-based representations of lexically or semantically similar relations across KGs will be close, it is easy to see why transfer across KGs is possible. The final outcome of partial relation linking is a ranked list of relation surface forms for each placeholder in the skeleton.
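In code, this amounts to inner products between decoder states at relation-placeholder positions and BERT-encoded relation labels, trained with cross-entropy. The sketch below is our own illustration of that idea (names and shapes are assumptions, not the authors' implementation):

import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
relation_encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_relations(labels):
    # Encode textualized relations (e.g., "directed by", "language") with their [CLS] states.
    batch = tokenizer(labels, padding=True, return_tensors="pt")
    return relation_encoder(**batch).last_hidden_state[:, 0]          # (num_relations, d)

def relation_linking_loss(decoder_states, placeholder_positions, relation_embs, gold_ids):
    # decoder_states: (seq_len, d) states from the skeleton decoder for one question.
    states = decoder_states[placeholder_positions]                     # (num_placeholders, d)
    scores = states @ relation_embs.T                                  # inner-product similarities
    return F.cross_entropy(scores, gold_ids), scores.softmax(dim=-1)   # loss and ranked distribution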
The skeleton generation loss and the partial relation linking loss are optimized jointly. The SPARQL skeleton together with the partial relation linking produces a ranked list of softly-tied query sketches. In the case of multiple placeholders, the score of each pair of relation surface forms is the product of their individual scores. Sometimes this phase produces multiple semantic interpretations, either due to noisy surface forms (for instance, the DBpedia KG includes Wikipedia infobox keys "as is" when they cannot be mapped to ontology relations) or due to the presence of semantically identical or similar relations with distinct identifiers (e.g., dbo:language and dbp:language). For the example question "The films directed by John Krasinski are in which language?", this stage will produce the following sketches:
P=0
KG Interaction
In order to generate an executable SPARQL query, we need to introduce vocabulary specific to the KG. The KG interaction stage performs this task. Concretely, given a list of candidate query sketches, this stage performs the following steps to produce the final answer: 1) link the different entities to their corresponding placeholders in the skeleton, 2) disambiguate the relations' textual forms and link them to the specific KG relations, and 3) select the correct SPARQL based on the actual facts in the KG. In our approach, we leverage a pre-trained off-the-shelf entity linker, BLINK (Wu et al. 2020). BLINK provides tuples of (surface form, linked entity) pairs. The entity placeholder resolution step aligns the entities with the entity placeholders in the query sketch. In the example above, :ent0 will be linked to dbr:John_Krasinski in DBpedia, or wd:Q313039 in Wikidata. When multiple entities are present in the question, the position of the corresponding textual span defines the alignment to the entity placeholder variable. During training, the first entity in the question corresponds to :ent0, the second entity to :ent1, etc. This pattern is repeated by the system when decoding during inference, making entity placeholder resolution trivial.

The next step is to disambiguate the relations' textual forms and link them to the specific KG relations. Recall from Table 2 that each surface form in a query sketch can map to one or more KG relations. In our example using DBpedia as the KG, the surface form "director" could map to both [dbo:director, dbp:director] whereas "language" could map to both [dbo:language, dbp:language]. The semantic parsing stage cannot hope to distinguish between these, and thus we rely on the KG to determine the specific relation that should be chosen. Concretely, we replace every relation surface form with each of the possible KG relations it could map to. Thus, each softly-tied query sketch produces one or more fully executable SPARQLs. For example, the two softly-tied sketches from the previous stage in our example produce four possible SPARQLs (see Table 3). As the final step, we execute the candidate SPARQL queries against the KB and choose the highest-ranked SPARQL that produces an answer for SELECT queries. Since ASK queries do not necessarily have to be valid in the KG, we only consider the model score in such cases.
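The final grounding-and-execution step is mostly KG-specific plumbing. A simplified sketch using the SPARQLWrapper library is shown below; the endpoint, the expand helper that substitutes entities and candidate KG relations into a sketch, and the data structures are our own placeholders:

from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"          # placeholder endpoint

def ground_and_execute(sketches, surface_to_kg_relations, entity_links):
    # Expand ranked softly-tied sketches into executable SPARQLs and return the
    # highest-ranked SELECT query that yields a non-empty answer set.
    client = SPARQLWrapper(ENDPOINT)
    client.setReturnFormat(JSON)
    for sketch in sketches:                                                   # ranked by model score
        for query in expand(sketch, surface_to_kg_relations, entity_links):   # hypothetical helper
            client.setQuery(query)
            bindings = client.query().convert()["results"]["bindings"]
            if bindings:                                                      # first query with answers
                return query, bindings
    return None, []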
Experiments
In this section, we compare STaG-QA to other state-of-the-art approaches on datasets from multiple KGs. We validate two claims: (1) STaG-QA achieves state-of-the-art or comparable performance on a variety of datasets and KGs, and (2) STaG-QA generalizes across KGs, facilitating transfer.
The results show that pre-training our system improves performance, with larger gains in low-resource and unseen relation-combination settings.
Datasets
To evaluate the generality of our approach, we used datasets across a wide variety of KGs including Wikimovies-KG, Freebase (Bollacker et al. 2008), DBpedia (Auer et al. 2007), and Wikidata (Vrandečić and Krötzsch 2014). In particular, we used the following datasets (Table 4 shows detailed statistics for each dataset): (a) MetaQA (Wikimovies-KG) (Zhang et al. 2018) is a large-scale complex-query answering dataset on a KG with 135k triples, 43k entities, and nine relations. It contains more than 400K questions for both single and multi-hop reasoning. (b) WQSP-FB (Freebase) (Yih et al. 2016) provides a subset of WebQuestions with semantic parses, with 4,737 questions in total. (c) LC-QuAD 1.0 (DBpedia) (Trivedi et al. 2017) is a template-based dataset with 5,000 questions (4,000 train and 1,000 test). It includes simple, multi-hop, as well as aggregation-type questions. LC-QuAD 2.0 is another version of LC-QuAD based on Wikidata; it has 30K questions in total and is also template-based. Due to the larger underlying KB and the more extensive patterns covered, we used the LC-QuAD 2.0 dataset for pretraining and for showing our transfer results. (d) SimpleQuestions-Wiki (Wikidata) is a mapping of the popular Freebase SimpleQuestions dataset to the Wikidata KB with 21K answerable questions.
Baselines
In this work, we evaluate against 8 different KBQA systems, categorized into unsupervised approaches (Kapanipathi et al. 2020; Sakor et al. 2020; Vakulenko et al. 2019; Diefenbach, Singh, and Maret 2017) and supervised approaches (Sun et al. 2020; Maheshwari et al. 2019; Saxena, Tripathi, and Talukdar 2020; Sun, Bedrax-Weiss, and Cohen 2019; Sun et al. 2018). (1) NSQA (Kapanipathi et al. 2020; Abdelaziz et al. 2021) is the state-of-the-art system for KBQA on DBpedia datasets. (2) QAMP (Vakulenko et al. 2019) is an unsupervised message passing approach that provides competitive performance on the LC-QuAD 1.0 dataset. (3) WDAqua is another system that generalises well across a variety of knowledge graphs. (4) Falcon 2.0 (Sakor et al. 2020) is a heuristics-based approach for joint detection of entities and relations in Wikidata; since this approach does not predict the query structure, we tested it on the SimpleQuestions dataset only. (5) EmbedKGQA (Saxena, Tripathi, and Talukdar 2020) is the state-of-the-art KBQA system on the MetaQA and WebQSP datasets. (6) PullNet (Sun, Bedrax-Weiss, and Cohen 2019) is a recent approach evaluated on the MetaQA and WebQSP datasets. (7) GraftNet (Sun et al. 2018) infuses both text and KG into a heterogeneous graph and uses a GCN for question answering. (8) EmQL (Sun et al. 2020) is a query embedding approach that was successfully integrated into a KBQA system and evaluated on the WebQSP and MetaQA datasets.
KBQA Evaluation
Table 5 shows our system results on all four datasets in comparison to existing approaches. We show two versions of our system, one pre-trained with the LC-QuAD 2.0 dataset (Dubey et al. 2019) (STaG-QA pre) and another trained from scratch on the target dataset only (STaG-QA). As noted earlier, to the best of our knowledge, we are the first to show generality across knowledge graphs by evaluating on datasets from DBpedia, Wikidata, Freebase, and Wikimovies-KG. Our approach achieves significantly better performance compared to Falcon 2.0 on the SimpleQuestions-Wiki dataset, with a 24% better F1 score. While Falcon 2.0 is not a KBQA system itself, it jointly predicts entities and relations given a question. Since SimpleQuestions-Wiki requires only a single entity and a single relation, we used Falcon 2.0 output to generate the corresponding SPARQL query required for KBQA evaluation. On the MetaQA dataset, our system as well as the baselines achieve near-perfect scores, indicating the simplicity of this dataset. On LC-QuAD 1.0, our approach significantly outperforms existing DBpedia-based approaches. When pretrained on LC-QuAD 2.0, the performance is 9% better in F1 compared to NSQA, the state-of-the-art system on DBpedia. The large improvement indicates that STaG-QA was able to generalize and learn similar patterns between LC-QuAD 1.0 and LC-QuAD 2.0. As for WebQSP, both versions of our approach are inferior compared to EmQL. However, it is also worth noting that EmQL also leverages KBC embeddings, which are not currently utilized by STaG-QA. Overall, the results show that STaG-QA achieves better or competitive performance on three out of four datasets, and when pretrained on another dataset, the performance improves across all datasets. In the next section, we analyze different datasets in terms of the degree of challenge they pose for KBQA systems. We propose evaluation splits that will allow us to better discriminate different systems in terms of their performance on these datasets.
Effect of Pretraining
Table 5: Performance against previous state-of-the-art approaches. Following these techniques, we report precision, recall and F1 scores on SimpleQuestions and LC-QuAD 1.0, and Hits@1 performance on the WebQSP and MetaQA datasets. The subscript pre indicates the "pre-trained" version of our system using the LC-QuAD 2.0 dataset.
Our architecture is designed to allow transfer learning between entirely different QA dataset/KG pairs. As it is harder
to show improvements with pre-training on larger datasets, we consider low-resource settings to demonstrate the benefit of transfer, even across KGs. This is useful when training data is scarce for a new target KG. We investigate the benefit of pretraining the semantic parsing stage using LC-QuAD 2.0 (Wikidata KG), before training on the 2-hop dataset in MetaQA (MetaQA-KG) and the LC-QuAD 1.0 dataset (DBpedia). Figures 2 and 3 show the performance of STaG-QA on each dataset with and without pre-training. We make note of the following observations. First, without any fine-tuning on either dataset, the pre-trained version STaG-QA pre was able to achieve 18% Hits@1 on MetaQA and 8% F1 on LC-QuAD 1.0, indicating the model's ability to do zero-shot transfer across knowledge graphs. Second, the pre-trained version provides better performance and converges much faster. For example, on MetaQA (Figure 2), STaG-QA pre was able to reach almost 100% Hits@1 with only 100 training examples. To reach the same 100% Hits@1, STaG-QA without pretraining required 1,000 examples, an order of magnitude more training data. The same behaviour can be observed on LC-QuAD 1.0, where STaG-QA pre is better than STaG-QA, with both versions continuing to improve as more training data becomes available.
Generalization to novel relation composition
Common KBs have a large number of relations. For example, DBpedia (v2016-10) has around 60K relations, Wikidata (as of March 2020) has around 8K relations, whereas Freebase contains around 25K relations. In multi-hop queries, these relations can be arranged as paths (e.g., director → language), whose possible combinations grow combinatorially. With learning-based approaches, seeing all or most possible relation combinations at training time would indeed improve performance at test time. However, this is impractical and hard to enforce in realistic scenarios with most KBs, as it would require significantly larger training data to cover all combinations. Instead, an effective KBQA system should be able to generalise to unseen relation paths. In this section, we first analyse existing KBQA datasets to see to what extent this ability is currently being tested. We then create a development set specifically for testing the ability of KBQA systems to generalise to unseen multi-hop relation paths. We show in Table 6 the number of test questions in the LC-QuAD 1.0, MetaQA and WebQSP datasets that contain relation combinations never seen at training time. For instance, MetaQA does not test for any unseen relation paths (0%), whereas WebQSP contains only 2.06% of such questions. In contrast, in LC-QuAD 1.0 roughly half of the test questions contain novel relation compositions.
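The analysis behind Table 6 can be sketched as a simple set comparison; the assumption that each example carries a hashable "path" field (its ordered relation combination) is ours.

def unseen_path_ratio(train_examples, test_examples):
    # fraction of test questions whose relation path never appears in training
    train_paths = {ex["path"] for ex in train_examples}
    unseen = sum(1 for ex in test_examples if ex["path"] not in train_paths)
    return unseen / len(test_examples)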
MetaQA Unseen Challenge Set: In order to further investigate how this issue affects current KBQA systems, we created a subset from MetaQA, the largest dataset in Table 6, which nevertheless has no unseen relation combinations at test time. We modified the train and dev sets of MetaQA as follows: from the 2-hop training set, we removed training examples containing two randomly chosen relation paths (actor_to_movie_to_director and director_to_movie_to_actor) and split the dev set into two, one containing 13,510 questions whose relation paths are all seen in training and another containing 1,361 questions whose relation paths are all unseen. It is important to note that for each of the unseen relation combinations, the individual relations are present in the training set, i.e., this experiment is designed to test compositionality rather than zero-shot relation linking ability.
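A hedged sketch of how such a challenge split can be constructed is shown below; the example fields and the string encoding of relation paths are assumptions.

def split_by_relation_path(train_examples, dev_examples, held_out_paths):
    # remove held-out relation paths from training, and split dev into
    # seen-path and unseen-path subsets
    held_out = set(held_out_paths)
    reduced_train = [ex for ex in train_examples if ex["path"] not in held_out]
    dev_seen = [ex for ex in dev_examples if ex["path"] not in held_out]
    dev_unseen = [ex for ex in dev_examples if ex["path"] in held_out]
    return reduced_train, dev_seen, dev_unseen

# usage, mirroring the paths named above
# reduced_train, dev_seen, dev_unseen = split_by_relation_path(
#     train, dev, ["actor_to_movie_to_director", "director_to_movie_to_actor"])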
We then trained STaG-QA, EmbedKGQA and GraftNet on the new reduced training set and tested their performance on our new development sets (seen and unseen). Table 7 shows the results for each system on 2-hop questions with seen relation paths vs. unseen ones. The results clearly demonstrate a significant drop in performance for methods that rank directly across entities in the KG to predict answers. This is most clearly observed in EmbedKGQA, as well as GraftNet-KB, though the use of text (GraftNet-Text and GraftNet-Both) does alleviate this issue. In contrast, our approach is able to maintain exactly the same level of performance for novel relation compositions using KB information alone.
Related Work
There have been a wide variety of Knowledge Base Question Answering (KBQA) systems (Chen et al. 2020;Saxena, Tripathi, and Talukdar 2020;Maheshwari et al. 2019;Zou et al. 2014;Diefenbach, Singh, and Maret 2017;Vakulenko et al. 2019;Kapanipathi et al. 2020), trained on datasets that are either question-SPARQL pairs (strong supervision) or question-answer pairs (weak supervision). More generally, the former can use any logical form that expresses the question as an RDF-query, which is then run on the KG to retrieve the answer. We describe some KBQA systems in this section. As mentioned above, the first category of KBQA approaches focus on translating the natural language questions into an intermediate logical form to retrieve results from the knowledge base. Generating this kind of semantic parse of the question has shown improved performance compared to weak-supervision based approaches (Yih et al. 2016). Furthermore, the intermediate structured representation of the question provides a level of interpretability and explanation that is absent in systems that directly rank over entities in the KG to produce answers. This category can further be classified into (a) rule-based approaches such as gAnswer (Hu et al. 2017), NSQA (Kapanipathi et al. 2020), and (b) learning-based approaches such as MaSP (Shen et al. 2019) and GrailQA (Gu et al. 2021).
The rule-based approaches primarily depend on generic language-based syntactic (Zou et al. 2014) or semantic parses (Abdelaziz et al. 2021; Kapanipathi et al. 2020) of the question and build rules on top of them to obtain a query graph that represents the SPARQL query. NSQA, the state-of-the-art approach for DBpedia-based datasets such as LC-QuAD 1.0 (Trivedi et al. 2017) and QALD-9 (Usbeck et al. 2017), falls into this category. The system uses Abstract Meaning Representation (AMR) parses of the question and a heuristic-based, graph-driven methodology to transform the AMR graph into a query graph that represents the SPARQL query. Many of these systems have components or aspects that are specific to the KG they evaluate on, and do not trivially generalize to other KGs. In particular, gAnswer, NSQA, and QAMP are specific to DBpedia and do not evaluate their approaches on any other KGs.
On the other hand, MaSP is a multi-task end-to-end learning approach that focuses on the dialog-based KGQA setup. MaSP uses a predicate classifier, which makes transfer across KGs non-trivial. We adapt the architecture to make it generalizable across KGs by replacing the relation classifier with a BERT-based ranker that leverages similarities in label semantics between KGs. A prominent work, Krantikari (Maheshwari et al. 2019), addresses regular KBQA with a ranking-based approach that is heavily dependent on the knowledge graph. The approach ranks all candidate graph patterns retrieved from the knowledge graph based on the grounded entity. In multi-hop settings, as in MetaQA with 3-hop questions, retrieving all possible candidates up to n hops (for an arbitrary choice of n) and then ranking across all of them is expensive. In contrast, our work focuses on a generative approach to modeling query graph patterns. GrailQA is an end-to-end learning approach that generates logical forms (S-expressions). It characterizes a few forms of generalization such as compositionality and zero-shot generalization. It also introduces a method that transfers across QA datasets, but within the same KG. In contrast, we demonstrate transfer across KGs.
The final category of KBQA approaches are trained solely with question-answer pairs, ignoring the intermediate logical representation of the question. EmbedKGQA (Saxena, Tripathi, and Talukdar 2020) and GraftNet are two such approaches that directly rank across entities in the knowledge base to predict an answer, by either leveraging KG embeddings from Knowledge Base Completion (KBC) or creating a unified graph from the KB and text. However, these approaches do not generalize well to novel relation compositions not seen during training. Finally, it is unclear how to transfer KBC embedding-based approaches such as EmbedKGQA across KGs, since the learnt KG embeddings are tightly coupled with the specific KG in question.
Conclusion
In this work, we show that a simple 2-stage architecture which explicitly separates the KG-agnostic semantic parsing stage from the KG-specific interaction can generalize across a range of datasets and KGs. We evaluated our approach on four different KG/QA pairs, obtaining state-of-the-art performance on MetaQA, LC-QuAD 1.0, and SimpleQuestions-Wiki, as well as competitive performance on WebQSP. Furthermore, we successfully demonstrate transfer learning across KGs by showing that pretraining the semantic parsing stage on an existing KG/QA-dataset pair can help improve performance in low-resource settings for a new target KG, as well as greatly reduce the number of examples required to achieve state-of-the-art performance. Finally, we show that some popular benchmark datasets do not evaluate generalization to unseen combinations of seen relations (compositionality), an important requirement for a question answering system.
Figure 2: System performance on MetaQA 2-hop questions using different numbers of training examples, with and without pretraining on LC-QuAD 2.0.
Figure 3: System performance on LC-QuAD 1.0 using different numbers of training examples, with and without pretraining on LC-QuAD 2.0.
Table 1: Query sketch for the question "The films directed by John Krasinski are in which language?"
COUNT or FILTER, as well as the query graph structure, with placeholder nodes for entities (e.g., :ent0), relations (e.g., :prop0) and variables (e.g., ?var0). For many questions, the generated SPARQL skeletons across different KGs are similar, if not identical. The skeleton structures unique to a KG, e.g., reification (present in Wikidata but not DBpedia), can be learnt when fine-tuning on a dataset with that underlying KG. An example of a SPARQL skeleton for our running example in Figure 1, "The films directed by John Krasinski are in which language?", is:
SELECT ?var0 WHERE {
?var1 :prop0 :ent0 .
?var1 :prop1 ?var0 .
}
Table 2: Examples of textualized relations for different KGs, obtained either using the relation label from the KG (DBpedia, Wikidata) or by extracting a part of the relation URI (Freebase).
Table 3: Top predicted SPARQL queries for the question "The films directed by John Krasinski are in which language?"
Table 4: Dataset statistics.
Dataset            KG          Train     Valid    Test
LC-QuAD 1.0        DBpedia     3,650     200      1,000
SimpleQuestions    Wikidata    15,000    2,000    2,280
WQSP-FB            Freebase    2,898     200      1,596
MetaQA 1-hop       Wikimovies  86,470    9,992    9,947
MetaQA 2-hop       Wikimovies  118,980   14,872   14,872
MetaQA 3-hop       Wikimovies  114,196   14,274   14,274
Table 6: Unseen path combinations of seen relations.
Table 7: MetaQA Unseen Challenge Set.
Abdelaziz, I.; Ravishankar, S.; Kapanipathi, P.; Roukos, S.; and Gray, A. 2021. A semantic parsing and reasoning-based approach to knowledge base question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 15985-15987.
Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; and Ives, Z. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web, 722-735. Springer.
Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, 1247-1250.
Chen, S.; Liu, Q.; Yu, Z.; Lin, C.-Y.; Lou, J.-G.; and Jiang, F. 2021. ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, 325-336. Online: Association for Computational Linguistics.
Chen, Y.; Li, H.; Hua, Y.; and Qi, G. 2020. Formal Query Building with Query Structure Prediction for Complex Question Answering over Knowledge Base. In International Joint Conference on Artificial Intelligence (IJCAI).
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Diefenbach, D.; Singh, K.; and Maret, P. 2017. WDAqua-core0: A question answering component for the research community. In Semantic Web Evaluation Challenge, 84-89. Springer.
Diefenbach, D.; Tanon, T. P.; Singh, K. D.; and Maret, P. 2017. Question Answering Benchmarks for Wikidata. In Proceedings of the ISWC 2017 Posters & Demonstrations and Industry Tracks co-located with the 16th International Semantic Web Conference (ISWC 2017), Vienna, Austria, October 23rd-25th, 2017.
Dubey, M.; Banerjee, D.; Abdelkawi, A.; and Lehmann, J. 2019. LC-QuAD 2.0: A large dataset for complex question answering over Wikidata and DBpedia. In International Semantic Web Conference, 69-78. Springer.
Gu, Y.; Kase, S.; Vanni, M.; Sadler, B.; Liang, P.; Yan, X.; and Su, Y. 2021. Beyond IID: three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021, 3477-3488.
Hu, S.; Zou, L.; Yu, J. X.; Wang, H.; and Zhao, D. 2017. Answering natural language questions by subgraph matching over knowledge graphs. IEEE Transactions on Knowledge and Data Engineering, 30(5): 824-837.
Kapanipathi, P.; Abdelaziz, I.; Ravishankar, S.; Roukos, S.; Gray, A.; Astudillo, R.; Chang, M.; Cornelio, C.; Dana, S.; Fokoue, A.; et al. 2020. Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning. arXiv preprint arXiv:2012.01707.
Maheshwari, G.; Trivedi, P.; Lukovnikov, D.; Chakraborty, N.; Fischer, A.; and Lehmann, J. 2019. Learning to rank query graphs for complex question answering over knowledge graphs. In International Semantic Web Conference, 487-504. Springer.
Sakor, A.; Singh, K.; Patel, A.; and Vidal, M.-E. 2020. Falcon 2.0: An entity and relation linking tool over Wikidata. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 3141-3148.
Saxena, A.; Tripathi, A.; and Talukdar, P. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4498-4507.
Shen, T.; Geng, X.; Qin, T.; Guo, D.; Tang, D.; Duan, N.; Long, G.; and Jiang, D. 2019. Multi-task learning for conversational question answering over a large-scale knowledge base. arXiv preprint arXiv:1910.05069.
Sun, H.; Arnold, A. O.; Bedrax-Weiss, T.; Pereira, F.; and Cohen, W. W. 2020. Faithful Embeddings for Knowledge Base Queries. arXiv:2004.03658.
Sun, H.; Bedrax-Weiss, T.; and Cohen, W. W. 2019. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. arXiv preprint arXiv:1904.09537.
Sun, H.; Dhingra, B.; Zaheer, M.; Mazaitis, K.; Salakhutdinov, R.; and Cohen, W. W. 2018. Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text. In EMNLP.
Trivedi, P.; Maheshwari, G.; Dubey, M.; and Lehmann, J. 2017. LC-QuAD: A corpus for complex question answering over knowledge graphs. In ISWC 2017, 210-218.
Usbeck, R.; Ngomo, A.-C. N.; Haarmann, B.; Krithara, A.; Röder, M.; and Napolitano, G. 2017. 7th open challenge on question answering over linked data (QALD-7). In Semantic Web Evaluation Challenge, 59-69. Springer.
Vakulenko, S.; Fernandez Garcia, J. D.; Polleres, A.; de Rijke, M.; and Cochez, M. 2019. Message passing for complex question answering over knowledge graphs. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1431-1440.
Vrandečić, D.; and Krötzsch, M. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10): 78-85.
Wu, L.; Petroni, F.; Josifoski, M.; Riedel, S.; and Zettlemoyer, L. 2020. Zero-shot Entity Linking with Dense Entity Retrieval. In EMNLP.
Yih, S. W.-t.; Chang, M.-W.; He, X.; and Gao, J. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base.
Yih, W.-t.; Richardson, M.; Meek, C.; Chang, M.-W.; and Suh, J. 2016. The value of semantic parse labeling for knowledge base question answering. In ACL, 201-206.
Zhang, Y.; Dai, H.; Kozareva, Z.; Smola, A.; and Song, L. 2018. Variational reasoning for question answering with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Zou, L.; Huang, R.; Wang, H.; Yu, J. X.; He, W.; and Zhao, D. 2014. Natural language question answering over RDF: a graph data driven approach. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, 313-324.
| [] |
[
"Generating Synthetic Speech from SpokenVocab for Speech Translation",
"Generating Synthetic Speech from SpokenVocab for Speech Translation"
] | [
"Jinming Zhao \nDepartment of Data Science & AI\nMonash University\n\n",
"Gholamreza Haffari \nDepartment of Data Science & AI\nMonash University\n\n",
"Ehsan Shareghi \nDepartment of Data Science & AI\nMonash University\n\n"
] | [
"Department of Data Science & AI\nMonash University\n",
"Department of Data Science & AI\nMonash University\n",
"Department of Data Science & AI\nMonash University\n"
] | [
"Association for Computational Linguistics: EACL 2023"
Training end-to-end speech translation (ST) systems requires sufficiently large-scale data, which is unavailable for most language pairs and domains. One practical solution to the data scarcity issue is to convert text-based machine translation (MT) data to ST data via text-to-speech (TTS) systems. Yet, using TTS systems can be tedious and slow. In this work, we propose SpokenVocab, a simple, scalable and effective data augmentation technique to convert MT data to ST data on-the-fly. The idea is to retrieve and stitch audio snippets, corresponding to words in an MT sentence, from a spoken vocabulary bank. Our experiments on multiple language pairs show that stitched speech helps to improve translation quality by an average of 1.83 BLEU, while performing as well as TTS-generated speech in improving translation quality. We also showcase how SpokenVocab can be applied in code-switching ST, for which often no TTS systems exist.
"https://www.aclanthology.org/2023.findings-eacl.147.pdf"
] | 252,918,531 | 2210.08174 | 343bdc78f8efb0eab1e4669b58df423531d201e7 |
Generating Synthetic Speech from SpokenVocab for Speech Translation
Jinming Zhao
Department of Data Science & AI
Monash University
Gholamreza Haffari
Department of Data Science & AI
Monash University
Ehsan Shareghi
Department of Data Science & AI
Monash University
Generating Synthetic Speech from SpokenVocab for Speech Translation
Association for Computational Linguistics: EACL 2023
Pages 1930-1936, May 2-6, 2023
Training end-to-end speech translation (ST) systems requires sufficiently large-scale data, which is unavailable for most language pairs and domains. One practical solution to the data scarcity issue is to convert text-based machine translation (MT) data to ST data via text-to-speech (TTS) systems. Yet, using TTS systems can be tedious and slow. In this work, we propose SpokenVocab, a simple, scalable and effective data augmentation technique to convert MT data to ST data on-the-fly. The idea is to retrieve and stitch audio snippets, corresponding to words in an MT sentence, from a spoken vocabulary bank. Our experiments on multiple language pairs show that stitched speech helps to improve translation quality by an average of 1.83 BLEU, while performing as well as TTS-generated speech in improving translation quality. We also showcase how SpokenVocab can be applied in code-switching ST, for which often no TTS systems exist.
Introduction
End-to-end (E2E) speech-to-text translation (ST) models require large amounts of data to train (Sperber and Paulik, 2020). Despite the emerging ST datasets (Cattoni et al., 2021), their size is considerably smaller compared to text-based machine translation (MT) data. A common remedy to the data scarcity issue is to leverage text-based MT data in training ST systems. Common approaches include multi-task learning (Anastasopoulos and Chiang, 2018; Ye et al., 2021), transfer learning & pretraining (Bansal et al., 2019; Wang et al., 2020) and knowledge distillation (Inaguma et al., 2021).
A more straightforward alternative is to convert text-based MT data to ST data via text-to-speech (TTS) synthesis engines (Pino et al., 2019; Jia et al., 2019). This method is less commonly used despite its simplicity and effectiveness, mainly for practical reasons: (i) TTS models have slow inference time and may incur monetary costs; (ii) the conversion is required for each MT dataset. Recently, Lam et al. (2022) proposed to generate synthetic speech without using TTS models. However, their approach is based on real ST data, and thus cannot be extended to MT data.
Figure 1: Overview of generating synthetic speech from SpokenVocab on-the-fly. The first step is to prepare the SpokenVocab bank offline and the second step is to retrieve and stitch audio snippets from the bank by words in a sentence.
Our code is available at https://github.com/mingzi151/SpokenVocab
In this work, we propose a simple, effective and efficient data augmentation approach to convert MT data to ST data on-the-fly. The idea is to prepare a set of spoken words, forming a spoken vocabulary (SpokenVocab) bank, and then generate synthetic speech by retrieving and stitching spoken words based on a text sequence, as shown in Figure 1. Our experiments show that this method is as effective as TTS-generated speech, at a much lower computational and financial cost. For instance, augmenting ST data on-the-fly with 100k stitch-converted MT instances boosts translation quality by an average of 1.83 BLEU over 3 language pairs from Must-C (Cattoni et al., 2021), with no additional cost, memory, or speed footprints. Comparing real ST data with our converted version generated from the same transcripts revealed, to our positive surprise, that our synthetic data outperforms its real counterpart by 0.41 BLEU. We conduct thorough experiments to examine SpokenVocab in boosting translation and further showcase its use and benefit in the context of code-switching (CS) ST.
We hope this simple technique eases the use of MT data for ST in practice, as well as in other tasks where synthetic speech is useful.
SpokenVocab
In this section we describe our methodology for creating effective synthetic ST data from MT data. The core steps are the offline preparation of a SpokenVocab bank and the on-the-fly stitching of sounds.
Concretely, we first use a TTS engine to convert items in a word vocabulary to speech, obtaining a set of SpokenVocab offline. Next, we can configure the TTS engine to generate different speaker voices and thus curate a SpokenVocab bank in which each set corresponds to a "speaker". The purpose is to simulate, to the greatest extent possible, a realistic speech dataset consisting of various speakers. At training time, assume we have access to an MT dataset with each pair denoted as <s, t>, where s and t are the source and target sentences, respectively. Given such a pair, we choose one voice from the bank and produce synthetic speech by fetching the corresponding audio snippets for the words in s from the bank and stitching them together. During stitching, we deploy cross-fade, a well-known technique to smooth the transition between two independent audio clips, implemented with pydub (https://github.com/jiaaro/pydub). Pairing the stitched speech with t yields a synthetic ST instance.
During the writing of this manuscript we found out that Voder, the first electronic speech synthesiser developed by Bell Labs in 1939, synthesized human speech by decomposing it into its acoustic components and combining them using human operators in real time.
SpokenVocab could also be based on n-grams in a dataset.
One could also generate utterances by mixing speakers at the token level, at no additional cost with our technique. We leave further investigation of this to future work as it requires a test condition (i.e., various speaker voices per utterance) which is not available to the best of our knowledge.
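Below is a minimal sketch of the retrieve-and-stitch step, assuming a directory of pre-generated word-level WAV files as the SpokenVocab bank and using pydub's cross-fade; the file layout, the fuzzy-matching helper, and the default filler word follow the description above but are illustrative rather than the exact released implementation.

import os
import difflib
from pydub import AudioSegment

def load_spoken_vocab(bank_dir):
    # map each word to its pre-synthesised audio snippet (offline step)
    vocab = {}
    for fname in os.listdir(bank_dir):
        if fname.endswith(".wav"):
            word = os.path.splitext(fname)[0].lower()
            vocab[word] = AudioSegment.from_wav(os.path.join(bank_dir, fname))
    return vocab

def stitch_sentence(sentence, vocab, crossfade_ms=20, default_word="a"):
    # stitch word snippets into one utterance, cross-fading at the joins
    utterance = None
    for w in sentence.lower().split():
        if w not in vocab:
            # fuzzy fallback for out-of-vocabulary words
            match = difflib.get_close_matches(w, vocab.keys(), n=1)
            w = match[0] if match else default_word
        snippet = vocab[w]
        utterance = snippet if utterance is None else utterance.append(snippet, crossfade=crossfade_ms)
    return utterance

# usage
# vocab = load_spoken_vocab("spoken_vocab/en_speaker0")
# audio = stitch_sentence("the films directed by john krasinski", vocab)
# audio.export("stitched.wav", format="wav")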
Experiments
We first present the ST system ( §3.1) and TTS systems ( §3.1.2) used in this study. We then describe the ST and MT datasets ( §3.1.3), followed by providing implementation details ( §3.1.4). Next we explain how SpokenVocab is designed ( §3.2) and report translation results ( §3.3). Lastly, we illustrate how our method can be applied to CS ST ( §3.5).
Experimental Setup
Speech Translation System
Pre-trained speech encoders and text decoders have shown strong performance on ST (Zhao et al., 2022) compared to models trained from scratch. For this reason, we follow the architecture in Gállego et al. (2021), which uses Wav2vec 2.0 (W2V2) (Baevski et al., 2020) as the speech encoder and the mBart decoder as the text decoder, joined by a lightweight linear adapter and a CNN-based length adapter.
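A rough, assumption-laden sketch of this encoder-adapter-decoder wiring is given below; it uses Hugging Face checkpoints and a single strided convolution as the length adapter, which simplifies Gállego et al. (2021) rather than reproducing their code, and the checkpoint names are illustrative.

import torch.nn as nn
from transformers import Wav2Vec2Model, MBartForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

class W2V2MBartST(nn.Module):
    def __init__(self, w2v2_name="facebook/wav2vec2-large-960h",
                 mbart_name="facebook/mbart-large-50"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(w2v2_name)
        self.mbart = MBartForConditionalGeneration.from_pretrained(mbart_name)
        enc_dim = self.encoder.config.hidden_size
        dec_dim = self.mbart.config.d_model
        self.linear_adapter = nn.Linear(enc_dim, dec_dim)
        # strided 1-D convolution to shrink the speech sequence length
        self.length_adapter = nn.Conv1d(dec_dim, dec_dim, kernel_size=3, stride=2, padding=1)

    def forward(self, input_values, labels):
        speech = self.encoder(input_values).last_hidden_state      # (B, T, enc_dim)
        speech = self.linear_adapter(speech)                        # (B, T, dec_dim)
        speech = self.length_adapter(speech.transpose(1, 2)).transpose(1, 2)
        # the mBart decoder cross-attends to the adapted speech representations
        return self.mbart(encoder_outputs=BaseModelOutput(last_hidden_state=speech),
                          labels=labels)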
TTS Systems
To prepare SpokenVocab, we use the Google TTS service, 8 which supports a wide range of voice configurations; this allows simulating different speakers with various accents, genders and geographical backgrounds. We also use an off-the-shelf TTS toolkit, i.e., Tacotron2-DCA + Multiband-MelGAN (short for T2+Mel). 9 We use Google TTS to generate synthetic speech in raw waveforms.
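As an illustration of the offline bank preparation, the following sketch synthesizes one WAV file per word with the Google Cloud TTS client; the client library usage, voice name and sample rate are assumptions for illustration, not the exact configuration used in this work.

import os
from google.cloud import texttospeech

def build_spoken_vocab(words, out_dir, voice_name="en-US-Wavenet-D", sample_rate=24000):
    # one synthesis call per vocabulary item; LINEAR16 responses carry a WAV header
    client = texttospeech.TextToSpeechClient()
    voice = texttospeech.VoiceSelectionParams(language_code="en-US", name=voice_name)
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate)
    os.makedirs(out_dir, exist_ok=True)
    for w in words:
        response = client.synthesize_speech(
            input=texttospeech.SynthesisInput(text=w),
            voice=voice, audio_config=audio_config)
        with open(os.path.join(out_dir, f"{w}.wav"), "wb") as f:
            f.write(response.audio_content)

Varying voice_name across runs is one way to curate the multi-speaker bank described above.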
Dataset
We conduct our major experiments on Must-C, a multilingual ST dataset curated from TED talks. We focus on English (En)→German (De), Romanian (Ro) and Italian (It). For MT data, we use subsets of WMT14, WMT16 and OPUS100 10 for De, Ro and It, with 100k, 100k and 24k instances, respectively. For the code-switching (CS) setting, we use Prabhupadavani (Sandhan et al., 2022), a multilingual CS ST dataset, and we focus on En→De, It. Its source utterances are code-mixed with English (the major language), Bengali and Sanskrit; each utterance is translated manually into 25 languages.
Implementation Details
Similar to and Gállego et al. (2021), training different components of W2V2 and mBart decoder yields divergent results. In our initial experiments, we note that fine-tuning the entire W2V2 except for its feature extractor and freezing mBart lead to decent translation results, and thus we use this configuration for all our experiments. To ensure Must-C to be dominant, we make the ratio of Must-C and MT data to be approximately 8:1, unless mentioned otherwise. We use sacreBLEU (Post, 2018) to evaluate translation. Please refer to Appendix A.1 for full training details, hyper-parameters and hardware.
SpokenVocab Preparation and Variations
Constructing the SpokenVocab bank is crucial, as synthetic speech produced in this manner has a direct impact on translation quality. In this section we examine SpokenVocab along various dimensions.
TTS Conversion. The first questions to ask are which TTS system should be used to convert a word to its spoken form and what sampling rate (SR, i.e., the number of samples taken from a continuous signal per second) is appropriate. To answer these questions, we conduct an intrinsic evaluation of stitched speech by varying TTS engines and SR. Furthermore, as it is common to diversify raw waveforms with audio effects (Potapczyk et al., 2019), we apply the same technique to distort our stitched speech. Results in Table 1 show that using Google TTS and setting the SR to 24k are the better choices, while distortion (i.e., adding tempo, speed and echo effects) may or may not be helpful. Contrary to the common practice of using an SR of 16k (Baevski et al., 2020), applying 16k to SpokenVocab alters the sound significantly, as shown in the demo in §2, and this has negative impacts on the system. Overall, we use the setting in italics for the rest of our experiments.
Word Vocabulary. We compile a word vocabulary consisting of 1) a common subset of words, drawn from the Official Scrabble Players Dictionary and Wiktionary's word frequency lists, and 2) unique words with a frequency higher than 99 from the En→X WMT subset. The purpose is to construct an approximated version of SpokenVocab that is ready to convert any sentence to synthetic speech. For words that are not covered by the list, we employ a fuzzy matching mechanism where the most similar word at the surface level is returned.
For instance, an out-of-vocabulary (OOV) word "apples" is replaced by its closest match in the vocabulary, "apple", and the speech snippet for "apple" is retrieved. When no match is found, a default filler word, "a", is returned. To investigate the effect of this approximation, which would inevitably lead to mispronounced words, we prepare another set of SpokenVocab containing the full set of spoken words in the WMT data (eliminating the need for fuzzy matching). In controlled experiments on En→De, the BLEU scores with the approximated and full SpokenVocabs, with sizes of 35k and 460k respectively, are 28.02 and 27.91. The negligible difference indicates the effectiveness of using an approximated SpokenVocab. Additional ablation studies using 50% and 10% of the full vocabulary yield scores of 27.79 and 27.94, further validating the insensitivity of W2V2 to nuanced mispronunciation, perhaps due to the presence of a powerful pre-trained auto-regressive decoder.
Number of Speakers. Despite the artificial nature of the stitched speech, one can still tell the speaker's characteristics (e.g., gender, accent). To examine whether diverse voices would be helpful for translation, we set the number of speakers n to 1, 5 and 10 and train models with the same amount of data.
tems display similar translation performance with 28.02, 27.73 and 27.80 BLEU scores respectively, suggesting that having a single speaker is sufficient. Our conjecture to this phenomenon is that speech representations produced by W2V2 have removed speaker information, as demonstrated in Nguyen et al. (2020) where analysis was conducted on wav2vec (Schneider et al., 2019), the predecessor to W2V2. This could be further examined with using dialect-or pronunciation-focused translation settings, which we leave to future work.
Translation Performance on Must-C
Producing synthetic speech from SpokenVocab onthe-fly makes the conversion from text to speech highly scalable in terms of time and monetary costs, and it also avoids the need of storing speech. Table 2 reports the time, dollar value and space required to produce every 100k speech with Google TTS, while these numbers are negligible for Spo-kenVocab due to its re-usability. 14 Apart from scalability, it is more important to see the translation performance difference between unnatural speech produced by SpokenVocab and fluent speech generated by state-of-the-art TTS systems.
Stitched Speech vs. Real Speech
An alternative approach to augmentation is to leverage real ST data from any other existing domains.
To assess whether our approach as another augmentation technique is still competitive, we conduct an experiment on En→De by augmenting Must-C with 35k training instances from the Europarl-ST (Iranzo-Sánchez et al., 2020). Table 3 reports the results. To our positive surprise, our stitched speech (generated from the transcripts of eurorparl-ST counterpart) works even better than the real Europarl-ST speech.
Code-switching Speech Translation
Development in CS ST is constrained by the availability of relevant datasets (Sandhan et al., 2022) and using TTS systems to augment data is practically difficult. To this end, our method provides a high degree of flexibility in that it can stitch audio clips of different languages freely. To produce a code-switched utterance, we further prepare Spo-kenVocab for Bengali (Google TTS does not support Sanskrit) based on an English-Bengali dictionary. 15 We maintained the ratio of code-switching in the real data (i.e., 0.35 probability of CS occurring, and 2 as the average number of code-switched words in a sentence). Please see Algorithm 1 in Appendix A.2 for the detailed utterance generation process. Results in Table 4 suggest that the models trained with additional 100k and 24k instances (for De and It respectively.) from SpokenVocab outperform those only trained with the original data.
Conclusion
In this work, we proposed a simple, fast and effective data augmentation technique, SpokenVocab for ST. This provides an alternative for converting MT data to ST data with TTS systems which comes with monetary and computation costs in practice.
Our approach generates synthetic speech on-thefly during training, with no cost or footprint. We have shown that speech stitched from SpokenVocab works as effective as TTS-generated speech, and unlike TTS system, it could directly be applied as a data augmentation tool in code-switching ST.
Our approach can be used in other content-driven speech processing tasks as an uncompromising and easy-to-use augmentation technique.
Limitations
CS ST exhibits difficulties (Huber et al., 2022; Weller et al., 2022), exposing several limitations of this study: 1) Bengali and Sanskrit (another minority language) are treated without distinction, as they originate from the same script and Sanskrit is not supported by the Google TTS service. 2) We use an open-source language detection tool to calculate the oracle hyper-parameters on the dev set; yet, the imperfection of the detector on token-level prediction and the fact that source sentences are written in Latin script regardless of the language deviate the scores from their true values.
Table 3: BLEU scores under different augmentations.
Table 4: Translation quality for the CS ST dataset (BLEU).
            STCS     STCS+MTCS-stitched
En-Be→De    26.11    28.09
En-Be→It    26.41    26.90
Only one work out of 8 uses TTS to augment data in the IWSLT2022 offline speech translation track.
We provide a demo for stitched speeches. 8 https://cloud.google.com/text-to-speech 9 https://github.com/mozilla/TTS 10 http://opus.nlpl.eu/opus-100.php
For fair comparison with TTS, which operates on the full vocabulary, we report the cost under the full-vocabulary version of our method.
https://github.com/MinhasKamal/BengaliDictionary
https://github.com/facebookresearch/fairseq 17 https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_960h_pl.pt 18 https://dl.fbaipublicfiles.com/fairseq/models/mbart50/mbart50.ft.1n.tar.gz
A Appendix
A.1 Implementation Details
We implement and train all models with fairseq on 4 A40 GPUs, using 16-bit floating point precision, for 25k updates. WAV2VEC 2 and the mBart50 decoder are used. We employ an Adam optimizer with β1 = 0.99, β2 = 0.98, while setting the dropout to 0.1, the clip norm to 20 and label smoothing to 0.2. For the baseline models, we use a learning rate of 5e-04 and reduce it on plateau. For models trained with additional data, we use the same learning rate scheduler with a learning rate of 3e-04.
Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 82-91.
Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. arXiv preprint arXiv:2006.11477.
Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 58-68.
Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. MuST-C: A multilingual corpus for end-to-end speech translation. Computer Speech & Language, 66:101155.
Gerard I. Gállego, Ioannis Tsiamas, Carlos Escolano, José A. R. Fonollosa, and Marta R. Costa-jussà. 2021. End-to-end speech translation with pre-trained models and adapters: UPC at IWSLT 2021. In Proceedings of the 18th International Conference on Spoken Language Translation (IWSLT 2021), pages 110-119.
Christian Huber, Enes Yavuz Ugan, and Alexander Waibel. 2022. Code-switching without switching: Language agnostic end-to-end speech translation. arXiv preprint arXiv:2210.01512.
Hirofumi Inaguma, Tatsuya Kawahara, and Shinji Watanabe. 2021. Source and target bidirectional knowledge distillation for end-to-end speech translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1872-1881.
Javier Iranzo-Sánchez, Joan Albert Silvestre-Cerda, Javier Jorge, Nahuel Roselló, Adria Giménez, Albert Sanchis, Jorge Civera, and Alfons Juan. 2020. Europarl-ST: A multilingual corpus for speech translation of parliamentary debates. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8229-8233. IEEE.
Ye Jia, Melvin Johnson, Wolfgang Macherey, Ron J. Weiss, Yuan Cao, Chung-Cheng Chiu, Naveen Ari, Stella Laurenzo, and Yonghui Wu. 2019. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7180-7184. IEEE.
Tsz Kin Lam, Shigehiko Schamoni, and Stefan Riezler. 2022. Sample, translate, recombine: Leveraging audio alignments for data augmentation in end-to-end speech translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 245-254.
Xian Li, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. Multilingual speech translation from efficient finetuning of pretrained models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 827-838.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
Ha Nguyen, Fethi Bougares, Natalia Tomashenko, Yannick Estève, and Laurent Besacier. 2020. Investigating self-supervised pre-training for end-to-end speech translation. In Interspeech 2020.
Juan Pino, Liezl Puzon, Jiatao Gu, Xutai Ma, Arya D. McCarthy, and Deepak Gopinath. 2019. Harnessing indirect training data for end-to-end automatic speech translation: Tricks of the trade. In Proceedings of the 16th International Conference on Spoken Language Translation.
Matt Post. 2018. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771.
Tomasz Potapczyk, Paweł Przybysz, Marcin Chochowski, and Artur Szumaczuk. 2019. Samsung's system for the IWSLT 2019 end-to-end speech translation task. In Proceedings of the 16th International Conference on Spoken Language Translation.
Jivnesh Sandhan, Ayush Daksh, Om Adideva Paranjay, Laxmidhar Behera, and Pawan Goyal. 2022. Prabhupadavani: A code-mixed speech translation data for 25 languages. arXiv preprint arXiv:2201.11391.
Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. In INTERSPEECH.
Matthias Sperber and Matthias Paulik. 2020. Speech translation and the end-to-end promise: Taking stock of where we are. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7409-7421.
Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021. Improving speech translation by understanding and learning from the auxiliary text translation task. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4252-4261.
Changhan Wang, Anne Wu, Jiatao Gu, and Juan Pino. 2021. CoVoST 2 and massively multilingual speech translation. In Interspeech, pages 2247-2251.
Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou, and Zhenglu Yang. 2020. Curriculum pre-training for end-to-end speech translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3728-3738.
Orion Weller, Matthias Sperber, Telmo Pires, Hendra Setiawan, Christian Gollan, Dominic Telaar, and Matthias Paulik. 2022. End-to-end speech translation for code switched speech. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1435-1448.
Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-to-end speech translation via cross-modal progressive training. Proc. Interspeech 2021, pages 2021-1065.
Jinming Zhao, Hao Yang, Ehsan Shareghi, and Gholamreza Haffari. 2022. M-adapter: Modality adaptation for end-to-end speech-to-text translation. arXiv preprint arXiv:2207.00952.
| [
"https://github.com/jiaaro/pydub",
"https://github.com/mozilla/TTS",
"https://github.com/MinhasKamal/",
"https://github.com/facebookresearch/"
] |
[
"STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation",
"STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation"
] | [
"Nader Akoury \nUniversity of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n\n",
"Shufan Wang shufanwang@cs.umass.edu \nUniversity of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n\n",
"Josh Whiting \nUniversity of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n\n",
"Stephen Hood stephen@storium.com \nUniversity of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n\n",
"Nanyun Peng \nUniversity of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n\n",
"Mohit Iyyer miyyer@cs.umass.edu \nUniversity of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n\n"
] | [
"University of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n",
"University of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n",
"University of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n",
"University of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n",
"University of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n",
"University of Massachusetts Amherst ‡ Storium\nUniversity of California Los Angeles\n"
] | [] | Systems for story generation are asked to produce plausible and enjoyable stories given an input context. This task is underspecified, as a vast number of diverse stories can originate from a single input. The large output space makes it difficult to build and evaluate story generation models, as (1) existing datasets lack rich enough contexts to meaningfully guide models, and (2) existing evaluations (both crowdsourced and automatic) are unreliable for assessing long-form creative text. To address these issues, we introduce a dataset and evaluation platform built from STORIUM, an online collaborative storytelling community. Our author-generated dataset contains 6K lengthy stories (125M tokens) with fine-grained natural language annotations (e.g., character goals and attributes) interspersed throughout each narrative, forming a robust source for guiding models. We evaluate language models fine-tuned on our dataset by integrating them onto STORIUM, where real authors can query a model for suggested story continuations and then edit them. Automatic metrics computed over these edits correlate well with both user ratings of generated stories and qualitative feedback from semistructured user interviews. We release both the STORIUM dataset and evaluation platform to spur more principled research into story generation. | 10.18653/v1/2020.emnlp-main.525 | [
"https://arxiv.org/pdf/2010.01717v1.pdf"
] | 236,941,431 | 2010.01717 | a0035379f93e0e95bdadd77a1d8eb27ba89dcf60 |
STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation
4 Oct 2020
Nader Akoury
University of Massachusetts Amherst ‡ Storium
University of California Los Angeles
Shufan Wang shufanwang@cs.umass.edu
University of Massachusetts Amherst ‡ Storium
University of California Los Angeles
Josh Whiting
University of Massachusetts Amherst ‡ Storium
University of California Los Angeles
Stephen Hood stephen@storium.com
University of Massachusetts Amherst ‡ Storium
University of California Los Angeles
Nanyun Peng
University of Massachusetts Amherst ‡ Storium
University of California Los Angeles
Mohit Iyyer miyyer@cs.umass.edu
University of Massachusetts Amherst ‡ Storium
University of California Los Angeles
STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation
4 Oct 2020
Systems for story generation are asked to produce plausible and enjoyable stories given an input context. This task is underspecified, as a vast number of diverse stories can originate from a single input. The large output space makes it difficult to build and evaluate story generation models, as (1) existing datasets lack rich enough contexts to meaningfully guide models, and (2) existing evaluations (both crowdsourced and automatic) are unreliable for assessing long-form creative text. To address these issues, we introduce a dataset and evaluation platform built from STORIUM, an online collaborative storytelling community. Our author-generated dataset contains 6K lengthy stories (125M tokens) with fine-grained natural language annotations (e.g., character goals and attributes) interspersed throughout each narrative, forming a robust source for guiding models. We evaluate language models fine-tuned on our dataset by integrating them onto STORIUM, where real authors can query a model for suggested story continuations and then edit them. Automatic metrics computed over these edits correlate well with both user ratings of generated stories and qualitative feedback from semistructured user interviews. We release both the STORIUM dataset and evaluation platform to spur more principled research into story generation.
Introduction
Fiction writers express their creativity through both low-level linguistic choices and discourse-level sequencing of narrative elements (e.g., plot events and character development). Unlike more constrained text generation tasks, such as translation or summarization, fiction writing allows for almost infinite creative freedom, which budding authors often find cognitively overwhelming (Rose, 1980). Machine-in-the-loop storytelling (Clark et al., 2018), in which an author obtains automatically generated sentences or paragraphs when stuck with writer's block, lowers the barrier to entry for creative writing (Roemmele and Gordon, 2015). To spur research in this area, we partner with STORIUM, 1 an online collaborative storytelling platform, to introduce a new dataset and evaluation methodology for story generation.
The open-endedness of story writing does not just pose a barrier to humans; it also presents a challenge for building and evaluating computational models. Prior work relies on datasets that are either too artificial to generalize to longform stories, such as the crowdsourced ROCStories (Mostafazadeh et al., 2016) corpus, or too unconstrained, as in the r/writingprompts dataset (Fan et al., 2018), which pairs medium-length stories with short prompts. Furthermore, lack of standardized evaluation makes measuring progress difficult: most prior work evaluates outputs using a combination of simple automatic metrics not designed for long-form creative text generation (e.g., BLEU and ROUGE against a single reference) and crowdsourced ratings (McIntyre and Lapata, 2009; Yao et al., 2019; Fan et al., 2019) that preclude evaluating long-form narratives.
We address these limitations by (1) collecting a dataset of stories (Section 2) containing fine-grained structural annotations written in natural language, and (2) providing a platform for evaluating models in a machine-in-the-loop setting by allowing real STORIUM authors to interact with the generated stories (Section 4). Our dataset contains nearly 6K longform stories (125M tokens) written by STORIUM authors, each of which is broken into discourse-level scene entries annotated with narrative elements, such as character goals or abilities. Conditioning story generation models on this information thus imposes loose constraints on what the model should produce, compared to unstructured datasets such as r/writingprompts, and also enables modeling of narrative planning processes.

1 https://storium.com

Figure 1: A high-level outline of our dataset and platform. In this example from a real STORIUM game, the character ADIRA MAKAROVA uses the strength card DEADLY AIM to DISRUPT THE GERMANS, a challenge card. Our model conditions on the natural language annotations in the scene intro, challenge card, strength card, and character, along with the text of the previous scene entry (not shown) to generate a suggested story continuation. Players may then edit the model output, by adding or deleting text, before publishing the entry. We collect these edits, using the matched text as the basis of our USER metric. New models can be added to the platform by simply implementing four methods: startup, shutdown, preprocess, and generate.
We fine-tune large-scale pretrained language models on our dataset (Section 3) and integrate them with the STORIUM platform, where authors can query a model for the next few sentences in their story and then edit the resulting text to their liking. We devise a metric (inspired by ROUGE) on top of these edits that measures how much of the generated text is preserved in the post-edited version, and discover that this metric correlates with Likert judgments of linguistic properties such as relevance and coherence. Detailed analyses of the edits (Section 5), including semi-structured interviews with STORIUM users, suggest that generating text relevant to the current story context is the most important open problem in this area. We publicly release both the STORIUM dataset and user-facing evaluation platform to facilitate future research on story generation. 2
STORIUM Dataset
Our STORIUM dataset derives from an online collaborative storytelling community that provides rich metadata useful for guiding computational storytelling systems. In this section, we describe how the structural elements of STORIUM stories fit together, and verify via an annotation task that this metadata indeed influences the text of the stories. Finally, we use neural topic models to highlight the thematic content and narrative sequencing of STORIUM.

2 https://storium.cs.umass.edu
STORIUM: Gamified Storytelling
The STORIUM platform enables a small group of users to collaboratively write a single story by transforming the writing process into a turn-based game. In each game, one player acts as the narrator, while other players take on the role of individual characters within the story (e.g., ADIRA MAKAROVA in Figure 1). Stories unfold through a series of high-level scenes that consist of multiple short entries, each of which is written from the perspective of a character (or the narrator). Scenes commonly revolve around challenges (e.g., DISRUPT THE GERMANS) that the characters tackle within the text of their entries; to help address these challenges, each character has access to a set of cards (e.g., DEADLY AIM, a strength card) that define various properties such as strengths, weaknesses, items, and goals. The narrator moves the story forward by introducing new challenges, locations, and characters, in the form of cards. These are either created from scratch by the narrator or selected from a predefined world that contains a common set of story elements. Collectively, the cards played form a set of structural natural language annotations that guide the story being written.

Table 1: While STORIUM has fewer stories than other popular story datasets, each story is considerably longer and contains natural language annotations to guide story generation. * We combine character and action sets to determine average story length. † We count narrator actions introducing challenges and locations as prompts.

Table 2: An overview of our dataset, which contains long stories, broken down into scene entries, with structural annotations in the form of cards played to guide the narrative. * We count tokens as contiguous spans of either alphanumeric or non-alphanumeric symbols.
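To make the game structure described above concrete, one possible in-code representation of the nested STORIUM objects is sketched below; the class and field names are illustrative only and do not reflect the schema of the released JSON files.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Card:
        kind: str           # e.g., "challenge", "strength", "weakness", "goal"
        title: str          # e.g., "DEADLY AIM"
        description: str

    @dataclass
    class Entry:
        character: Optional[str]                      # None if written by the narrator
        text: str
        cards_played: List[Card] = field(default_factory=list)

    @dataclass
    class Scene:
        intro: str
        entries: List[Entry] = field(default_factory=list)

    @dataclass
    class Story:
        world: str
        characters: List[str]
        scenes: List[Scene] = field(default_factory=list)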
Cards influence entry text: STORIUM does not force players to relate their written entries to selected cards or challenges, instead relying on game conventions to guide user behavior. To validate whether the structural metadata influences story text, we conduct a small-scale annotation of 235 scene entries, where we ask annotators 3 to provide binary judgments for (1) whether the card played influences the scene entry, and (2) if the scene entry addresses the current challenge. We find that 77% of scene entries reference the played cards, and 80% address the current challenge (Table A1).

3 The annotators were NLP graduate students.

Related datasets: Prior story generation papers have frequently focused on the ROCStories (Mostafazadeh et al., 2016) and r/writingprompts (Fan et al., 2018) datasets. While STORIUM has comparatively fewer stories than these datasets, our stories are over an order of magnitude longer (Table 1). Rather than containing a single short prompt to start the story, our stories on average contain 14 narrator prompts per story, with 41 natural language annotations which describe character goals, attributes, and key items useful for conditioning story generation models. 4 Like STORIUM, the stories in roleplayerguild (Louis and Sutton, 2018) are also formed from collaborative storytelling turns via a role-playing game, though this dataset lacks any prompts or annotations. Finally, datasets consisting of novels and other fiction, like PG-19 (Rae et al., 2020), provide long-form narratives without explicit structure to constrain generation.
Common Themes and Story Arcs
To provide insight into common narrative themes and substructures within our dataset, we train a neural topic model on text from entries and challenges and analyze the resulting topics and their transitions.
Topic model specification
Our topic model is a simplified version of the relationship modeling network (RMN) proposed by Iyyer et al. (2016). 5 As in the RMN, our model relies on dictionary learning to compute topics; however, it models each entry and challenge independently, instead of considering the temporal order of scenes through recurrence. We ignore the temporal component because STORIUM contexts do not neatly fit into a chronologically-ordered timeline (e.g., entries within a single scene may not depend on each other). Building a specialized topic model for this data is beyond the scope of this work.
Concretely, given an input text (either an entry or a challenge), we first encode it by computing an average of pretrained GloVe 6 embeddings x. Next, we compute the dot product between x and each row of a global dictionary matrix R. Intuitively, each row of R is a vector representation of an individual topic. These row-wise dot products are converted to a probability distribution via a softmax function and then used to compute a weighted average r of the dictionary rows, which is then trained through a contrastive max-margin loss to reconstruct the input vector. At test time, the dictionary rows are interpreted by their nearest neighbors (using cosine distance) in the GloVe word embedding space. 7
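A minimal PyTorch sketch of this dictionary-learning step is shown below, assuming the GloVe averages are precomputed; the negative-sampling scheme and margin value are illustrative rather than the exact training configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DictionaryTopicModel(nn.Module):
        """Simplified RMN-style topic layer: entries/challenges are modeled independently."""
        def __init__(self, num_topics: int, embed_dim: int):
            super().__init__()
            # Each row of R is a topic vector living in GloVe space.
            self.R = nn.Parameter(torch.randn(num_topics, embed_dim) * 0.01)

        def forward(self, x: torch.Tensor):
            # x: (batch, embed_dim), the average of GloVe vectors for an entry or challenge.
            weights = F.softmax(x @ self.R.t(), dim=-1)   # (batch, num_topics)
            r = weights @ self.R                          # weighted average of topic rows
            return r, weights

    def max_margin_loss(r, x, negatives, margin=1.0):
        """Contrastive hinge loss: r should reconstruct x better than random negatives."""
        r = F.normalize(r, dim=-1)
        x = F.normalize(x, dim=-1)
        pos = (r * x).sum(-1, keepdim=True)               # (batch, 1)
        neg = r @ F.normalize(negatives, dim=-1).t()      # (batch, num_negatives)
        return torch.clamp(margin - pos + neg, min=0).mean()

At test time, topics can be labeled by looking up the nearest GloVe neighbors of each row of R, as described above.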
7 We encourage interested readers to see Iyyer et al. (2016) for more details. The only difference between our setup and theirs is that we directly use x to compute the row weights without any feed-forward or recurrent layers in between.

Figure 2: Example story arcs derived from the adjacency matrix of topic transitions over the text of entries (e.g., in FANTASY CLASSIC stories, the weapon, combat, melee topic is often followed by a transition, as denoted by weapon, to the fealty, valor, sword topic).
Examining topics and their transitions
To explore the content of the STORIUM dataset, we train our model with 50 topics (i.e., R has 50 rows) on the union of entry and challenge text. Table 3 shows the most distinguishing topic (ranked by relative importance) for a sample of different STORIUM worlds. These topics illustrate the diversity of our dataset: topics range from science fiction (Cyberpunk, Steampunk) to detective fiction (Urban Fantasy) and stories set in hospitals (Medical Drama) and schools (The University).
Following the methodology of Antoniak et al. (2019), we also examine common local topic transitions between entries written by the same character across different scenes in a story. We compute the transition probability from topic A to topic B by counting how many times A and B are the most probable topics for two consecutive entries, respectively, and normalizing by the total number of occurrences of topic A. Figure 2 shows a topic transition diagram originating from a weapons-related topic. In the Space Adventure world, stories progress into vehicle and technologyrelated topics, while in Fantasy Classic, they tend to transition to topics about valor instead. That said, both of these worlds are not completely different, as they share a transition topic associated with physical action.
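These transition probabilities reduce to simple counting over consecutive entries; a small sketch is below, where entry_topics is assumed to hold, for each (story, character) pair, the chronological list of most-probable topic ids.

    from collections import Counter, defaultdict

    def transition_probabilities(entry_topics):
        """entry_topics: iterable of lists of topic ids, one list per (story, character),
        ordered by scene. Returns P(topic B | topic A) over consecutive entries."""
        pair_counts = Counter()
        source_counts = Counter()
        for topics in entry_topics:
            for a, b in zip(topics, topics[1:]):
                pair_counts[(a, b)] += 1
                source_counts[a] += 1
        probs = defaultdict(dict)
        for (a, b), count in pair_counts.items():
            probs[a][b] = count / source_counts[a]
        return probs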
Generating Scene Entries
We focus our modeling efforts on generating scene entries, which are the smallest units of each story, because we want to evaluate the generated text on the STORIUM platform within a machine-in-the-loop framework. 8 Our method relies on fine-tuning a pretrained language model (GPT-2) on the STORIUM dataset using segment embeddings to differentiate each type of context. While GPT-2 has successfully been used as a state-of-the-art model for story generation (Mao et al., 2019; Guan et al., 2020), one crucial challenge is the length of the contexts: each entry in a story can condition on any narrative element that comes before it (e.g., previous entries, scenes, challenges). Thus, the number of context tokens quickly grows larger than what is feasible to fit in GPU memory. Another challenge lies in how to properly tune hyperparameters in a machine-in-the-loop setting, as it is infeasible to obtain human judgments for a huge number of configurations. The rest of this section fully specifies our model, a token-packing strategy to optimize use of the input context, and preliminary user-facing experiments that helped us decide on our final model hyperparameters.

Figure 3: An illustration of our segment embeddings and packing strategy. In addition to token and position embeddings, common to all Transformer models, we employ compositional segment embeddings for conditioning on story metadata (e.g., DEADLY AIM is the title of a strength card). Each metadata segment has linear constraints with associated priorities (e.g., Len >= 30 | Pri = 3) for optimally packing tokens within the available space.
Model Specification
We fine-tune the GPT-2 medium-sized (355M parameters) language model (Radford et al., 2019) for story generation, as it has been shown to generate coherent long-form prose. Before fine-tuning, we need to account for the complexity of STORIUM contexts: each scene consists of multiple entries, each of which may reference a different number of semi-structured cards (e.g., both the DEADLY AIM strength card and the ADIRA MAKAROVA character in Figure 1 contain a title and description). To handle the compositional and semi-structured nature of the scenes and cards, we allow each input token to condition on an arbitrary number of segment embeddings (Wolf et al., 2019) (Figure 3). Concretely, we augment the token vocabulary V of GPT-2 with a segment vocabulary S for delineating each segment. The final embedding vector e_i at position i is computed by summing the token embedding v_i with the positional embedding p_i and the corresponding set of n segment embeddings {s_{i_1}, ..., s_{i_n}}:

e_i = p_i + v_i + \sum_{m=1}^{n} s_{i_m}    (1)
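In code, Equation 1 amounts to summing a variable number of segment embeddings at each position; a minimal PyTorch sketch follows, with an assumed padding index for positions that carry fewer than the maximum number of segment labels.

    import torch
    import torch.nn as nn

    class ComposedEmbedding(nn.Module):
        def __init__(self, vocab_size, segment_vocab_size, max_len, dim):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, dim)
            self.pos = nn.Embedding(max_len, dim)
            # Index 0 is reserved as a no-op padding segment (zero vector).
            self.seg = nn.Embedding(segment_vocab_size, dim, padding_idx=0)

        def forward(self, token_ids, segment_ids):
            # token_ids: (batch, seq_len); segment_ids: (batch, seq_len, max_segments)
            positions = torch.arange(token_ids.size(1), device=token_ids.device)
            e = self.tok(token_ids) + self.pos(positions)
            # Sum over however many segment labels apply at each position (Eq. 1).
            e = e + self.seg(segment_ids).sum(dim=2)
            return e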
During training, a single input instance to our models contains the text of the current entry, its associated challenge, card metadata, as well as the current character's biography and the scene's introductory text (Figure 1). Our final model also includes the text of the immediately preceding story entry, 9 which improves human and automatic evaluation scores (Table 4). At test time, we provide only the story context and autoregressively sample a scene entry.
Context packing
The average story in our dataset has over 19K tokens broken up into 78 scene entries, which is much longer than GPT-2's maximum sequence length of 1024 tokens. We thus face the challenge of how best to optimize our usage of the limited input space, which is made more difficult by the many different types of input context (e.g., entries, characters, challenges) within STORIUM. Naïvely reserving a fixed number of tokens per context type wastes significant space, as the number and length of metadata instances varies considerably per entry. For example, some scene entries do not make use of cards (Table 2), while others reference multiple cards.
Our solution applies the Cassowary algorithm (Badros et al., 2001), well-known for arranging UI elements in Apple's iOS, to pack the input tokens more efficiently. Cassowary allows for efficiently solving linear equality and inequality constraints incrementally, using a dual simplex based method. We define a set of linear constraints on the size of each metadata segment (e.g., include at least 250 tokens from an entry when possible), and Cassowary's solver produces an optimal arrangement of context tokens with respect to these constraints ( Figure 3). Compared to naïvely packing tokens into fixed length segments, Cassowary allows us to vary the minimum and maximum bounds on segments, as well as collapse missing segments. This flexibility results in increased human and automatic evaluation scores (Table 4).
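To illustrate the flavor of this constraint setup, the sketch below uses the kiwisolver package, an off-the-shelf implementation of the Cassowary algorithm; the segment names, token budgets, and constraint strengths are invented for the example and may differ from the configuration actually used.

    from kiwisolver import Solver, Variable

    BUDGET = 1024  # GPT-2 maximum sequence length

    # (segment name, tokens available, soft minimum to keep, priority as solver strength)
    segments = [
        ("prev_entry", 812, 250, "strong"),
        ("character",  300, 100, "medium"),
        ("challenge",  120,  50, "medium"),
        ("card",        90,  30, "weak"),
    ]

    solver = Solver()
    lengths = {}
    total = None
    for name, available, minimum, strength in segments:
        v = Variable(name)
        lengths[name] = v
        solver.addConstraint(v >= 0)                        # required
        solver.addConstraint(v <= available)                # cannot exceed what exists
        solver.addConstraint((v >= minimum) | strength)     # soft minimum, by priority
        solver.addConstraint((v == available) | "weak")     # prefer keeping everything
        total = v if total is None else total + v
    solver.addConstraint(total <= BUDGET)                   # hard overall budget

    solver.updateVariables()
    allocation = {name: int(v.value()) for name, v in lengths.items()}
    print(allocation)

Because the non-required constraints can be violated in priority order, segments gracefully shrink (or collapse when missing) whenever the budget is tight, which is the behavior described above.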
Hyperparameter Selection
Before launching our full machine-in-the-loop evaluation, we conduct preliminary experiments on the STORIUM platform to validate our design choices. Since we want real users on STORIUM to enjoy interacting with the generated text, we want to avoid alienating them with poorly performing models. We measure the impact of (1) including history information from the immediately preceding entry in the story, and (2) using Cassowary to densely pack the context. In total, we fine-tune four models on the Cartesian product of these complementary modeling ideas, keeping all other hyperparameters constant, and deploy these models to STORIUM.
The results (Table 4) highlight the importance of both modeling choices: after including more story history and applying the Cassowary solver, validation perplexity decreases while STORIUM user ratings of fluency, coherence, relevance, and likability all increase. This motivates us to use only the best-performing model for the full-scale evaluation. Additionally, user feedback from these experiments suggested that we generate shorter entries, as longer ones frequently devolved into unrelated and incoherent sentences. Thus, for our final experiments detailed in the next section, we also truncate model outputs to a maximum of four sentences.
A Machine-in-the-Loop Evaluation Platform
The inadequacies of existing human and automatic evaluation methods are a major roadblock for story generation research. Automatic evaluations correlate weakly with human judgments (Sagarkar et al., 2018), and these judgments are obtained from crowd workers who are not invested in the narratives they are assessing. These concerns are magnified with STORIUM, as the story contexts are far too long for crowd workers to reliably evaluate (Section 5). In this section, we propose an improved evaluation methodology by directly integrating our models onto the STORIUM platform. This allows story authors to query a machine (Clark et al., 2018) for suggestions during the process of writing their own stories. We develop a new evaluation metric, User Story Edit Ratings (USER), computed on top of the edits that STORIUM users make to generated entries. Finally, we provide experimental results that compare two configurations of our best model from Section 3.2.
Evaluation Lifecycle
To evaluate generated stories, we develop a dedicated web service for serving model outputs to the STORIUM platform. STORIUM users simply press a button on the user interface to obtain a generated scene entry conditioned on the story context. Users can then add new text while deleting any of the generated text that they wish (Figure 1). When users publish their edited entry, they are also asked to evaluate the generated text on a 5-point Likert scale 10 with respect to relevance (fit with the current story), fluency (judgment of grammaticality), coherence (logical ordering of sentences), and likability (subjective assessment of enjoyability). This process allows experts (STORIUM authors) to evaluate generated stories, which is a substantial improvement over prior evaluation efforts. We make our evaluation platform publicly accessible for researchers to develop and integrate their own models. Our framework makes adding a new model using any Python-based deep learning framework very easy, requiring implementation of only four methods: startup, shutdown, preprocess, and generate.
A Metric Over User Edits
Intuitively, the amount of generated text that a user preserves in their final published entry clearly indicates the usefulness of the generated text. We quantify this by developing User Story Edit Ratings (USER), inspired by the longest common subsequence (LCS) variant of ROUGE (Lin, 2004), applied to user edits. Given a generated entry X and the final published entry Y , we compute
USER(X, Y) = |MATCH(X, Y)| / |X|, where MATCH(X, Y) considers contiguous substrings with at least one non-stopword as matches (see Figure 1 for an example and Appendix C for a more thorough treatment). We do not use ROUGE-L because vanilla LCS typically favors subsequences of unigram matches (often stopwords) over longer contiguous n-gram matches. In our STORIUM setting, users preserving n-grams or full sentences is a clear indication that the generated text was useful.
Analysis
Compared to existing work on story generation, the main novelty of our STORIUM evaluation platform is that it enables authors to interact directly with model-generated text through their edits. In this section, we conduct experiments on our platform and analyze the edits by examining the correlation of USER to Likert scores. We explore linguistic properties of text that users preserve and also conduct a crowdsourced evaluation on Amazon Mechanical Turk that demonstrates its unsuitability for this task. Finally, we qualitatively describe feedback obtained from interviews with ten STORIUM users who engaged with our models, which provides a roadmap for future work.
Top-k vs. nucleus sampling: Using our platform (Section 4), we evaluate our best model (Table 4) with two different decoding strategies: (1) top-k sampling (Fan et al., 2018) with k = 40, and (2) nucleus sampling (Holtzman et al., 2020) with p = 0.9. 11 The sampling parameters, such as the k in top-k sampling, can significantly affect output quality of story generation models (See et al., 2019), so we choose values that worked well in prior work (Qin et al., 2019). 12 Interestingly, while Holtzman et al. (2020) show that nucleus sampling improves over top-k sampling on measures like repetition, STORIUM users clearly prefer the top-k variant across all categories (last column of Table 5). We collect roughly 200 feedback ratings and 175 edits for each model over a span of three months beginning in late February 2020. We discover that both configurations score best on fluency and worst on relevance. This is unsurprising as (1) GPT-2 is known to produce fluent text and (2) the complex and lengthy STORIUM data is a challenge for limited-context models. Finally, USER scores are generally low (15.6 for top-k vs. 9.9 for nucleus sampling), indicating that users delete most of the current model's generated text. This result demonstrates that story generation models still have a long way to go. 13

USER correlates with human judgments: A natural question is whether our USER metric correlates with judgments of fluency, coherence, relevance, and likability. Table 5 shows that for the top-k configuration, relevance has a significantly higher correlation (Pearson's r) with USER than the other properties. In other words, users are most likely to preserve generated text when it is relevant to the overall story. Fluency correlates only weakly with USER, which makes sense as most generated entries are fluent due to GPT-2's pretraining. Finally, nucleus sampling exhibits lower correlation for relevance, but higher correlation for the other three properties, possibly due to its lower average scores for these properties (see Appendix C for a comparison of USER to ROUGE-based metrics).
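For reference, the two decoding strategies compared above differ only in how the next-token distribution is truncated before sampling; a standalone sketch, independent of any particular model, is shown below.

    import torch
    import torch.nn.functional as F

    def sample_next_token(logits, top_k=0, top_p=1.0, temperature=0.9):
        """Sample one token id from `logits` (shape: vocab_size) with top-k or nucleus truncation."""
        logits = logits / temperature
        if top_k > 0:
            kth = torch.topk(logits, top_k).values[-1]
            logits[logits < kth] = float("-inf")
        if top_p < 1.0:
            sorted_logits, sorted_idx = torch.sort(logits, descending=True)
            cumprobs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
            # Drop tokens outside the nucleus, always keeping the most probable token.
            remove = cumprobs > top_p
            remove[1:] = remove[:-1].clone()
            remove[0] = False
            logits[sorted_idx[remove]] = float("-inf")
        probs = F.softmax(logits, dim=-1)
        return torch.multinomial(probs, num_samples=1).item()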
Linguistic properties of preserved text: Knowing that users delete most of the generated text, we instead explore the linguistic commonalities of the preserved text. We run spaCy part-of-speech tagging and named entity recognition (Honnibal and Montani, 2017) over the edited entries. Strikingly, 29.5% of generated proper nouns are preserved in the edited text, compared to only 13.5% for all other POS tags. A major confound is that our model could unfairly receive credit for simply copying character names from the input context, as users are likely to write about these characters anyway.
To measure the extent of this effect, we match all generated named entities that users preserve to predefined character lists from each story, and discover that 63% of generated entities already exist within the story context. The remaining 37% of entities are often completely new character names. User interviews also suggest that this ability to generate new names is a useful feature.
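This kind of analysis can be approximated with a short spaCy script; the sketch below assumes each record pairs the generated text with the set of generated token strings the author kept, which simplifies the span-level edit alignment used in practice.

    import spacy

    # Requires the small English pipeline: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def preserved_rate_by_pos(records, pos="PROPN"):
        """records: iterable of (generated_text, kept_tokens) pairs, where kept_tokens
        is the set of generated token strings the author preserved (simplified)."""
        kept, total = 0, 0
        for generated_text, kept_tokens in records:
            for tok in nlp(generated_text):
                if tok.pos_ != pos:
                    continue
                total += 1
                kept += tok.text in kept_tokens
        return kept / total if total else 0.0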
Crowdsourced evaluation is unreliable: Thus far, we have argued for our evaluation platform by claiming that crowdsourced methods are unsuitable for evaluating stories with complex and lengthy contexts. Here, we measure fluency, coherence, relevance, and likability of our generated entries with a crowdsourced Amazon Mechanical Turk task, to see if the results correspond to STORIUM user ratings. Designing this crowdsourced task is difficult, as we cannot show crowd workers the entire story context due to its length; we thus decide to show the same inputs that the model receives (Section 3). We collect ratings of 100 examples per model, with three judgments per example. 14 Table 6 (top) shows that workers have very low agreement (Fleiss' κ) for all properties, including even fluency. An analysis of the median task completion time 15 reveals most workers did not actually read the context. We run a second experiment, showing only the generated text (no context), and remove the relevance rating. Table 6 (bottom) shows this improves agreement, and that the average fluency scores align closely with those from STORIUM users. Overall, our struggle to obtain quality judgments from Mechanical Turk further validates our platform: STORIUM provides free expert judgments from people invested in storytelling.
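The agreement numbers referenced here are Fleiss' kappa over the three judgments per example; a sketch of that computation with statsmodels is below, using placeholder ratings.

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # ratings: one row per example, one column per rater, values are Likert scores 1-5.
    ratings = np.array([
        [4, 5, 4],
        [2, 3, 2],
        [5, 5, 4],
    ])

    # Convert to an examples x categories count table, then compute Fleiss' kappa.
    table, _ = aggregate_raters(ratings)
    print(fleiss_kappa(table))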
Feedback from user interviews: To better understand the strengths and weaknesses of our current model, we conduct semi-structured interviews with ten STORIUM users. Most were surprised with the overall fluency of our models, which partly explains the low correlation of fluency with USER. Relevance was mentioned by 9 out of 10 users as the number one area of improvement for our model, confirming our experimental results (Table 5). Four users called out the model's tendency to fabricate facts and introduce new characters. Despite these concerns, three users explicitly stated the model inspired them to write or found portions of the generated text useful, though mostly as a source for character and place names (supporting the linguistic analysis in Section 5). Finally, some users considered the system a curiosity and decided to write stories using only generated text (without edits). 16

14 We limit annotations to crowd workers living in the US and the UK, with over 1000 completed annotations and a 99% approval rate. We pay $0.50 per annotation, assuming 2 minutes per annotation, for an effective hourly rate of $15.
15 Mechanical Turk automatically reports a WorkTimeInSeconds field for each annotation, which is ten minutes on average for our task, more than enough time to read and assess the generated entry and associated context. Sadly, this interval is misleading. Analyzing the median time between submits, we see workers accept multiple concurrent tasks, wait a few minutes, then submit each annotation in quick succession, thus inflating the WorkTimeInSeconds interval.
16 These AI-guided narratives are prevalent enough that we manually exclude these games from our experiments as they artificially increase the automatic metrics.

Related Work

Our work builds on prior research in computational modeling for story generation. Early narrative prose generation systems (Meehan, 1977; Callaway and Lester, 2001; Riedl and Young, 2004) relied on graph-based planning formalisms and custom rules to structure their narratives, while story graphs have been used for interactive storytelling (Riedl and Bulitko, 2013). More recent work uses deep learning to generate stories by training neural models with limited context (Peng et al., 2018; Fan et al., 2018; Goldfarb-Tarrant et al., 2019) and structured knowledge, either external (Mao et al., 2019; Guan et al., 2020; Goldfarb-Tarrant et al., 2020) or derived (Yao et al., 2019; Fan et al., 2019). Compared to the datasets studied in those works, our STORIUM dataset contains much longer stories with built-in structural annotations written in natural language in the form of cards (Table 2).
Our work connects more closely to existing machine-in-the-loop storytelling work (Roemmele and Gordon, 2015; Samuel et al., 2016; Clark et al., 2018), in which systems work in concert with users to collaboratively author a narrative. Much like the Creative Help platform of Roemmele and Gordon (2015), we provide writing assistance by interactively generating continuations of STORIUM stories. We improve over Roemmele and Gordon (2015) by evaluating a trained model (instead of a retrieval-based approach) with a large user population.
Finally, our STORIUM evaluation takes a different approach to prior research that measures the quality of generated stories. Sagarkar et al. (2018) train an automatic scorer on human annotations of overall story quality, relevance, and interestingness based on evaluation criteria from (McIntyre and Lapata, 2009). See et al. (2019) consider a number of diversity related measures for automated evaluation of story generation systems by focusing on the GPT-2 small model, noting that quality assessments are still best measured through human evaluation.
Limitations
Evaluating on the STORIUM platform enables researchers to receive high-quality judgements on the outputs of their story generation models. These judgements are made possible by the significant time and effort spent by real authors on crafting their narratives, as their incentives are substantially different from those of crowdsourced workers.
The amount of author effort involved in evaluation, when combined with the relatively small size of the STORIUM community, can cause evaluation to take a considerable amount of time (i.e., to collect hundreds of judgements) as evidenced in our analysis (Section 5). Thus, our platform is not currently suitable for "instant" evaluation of generated stories. Furthermore, as the evaluation platform is specifically deployed on STORIUM, it cannot be trivially used to evaluate models trained on other story generation datasets, as users of the website are mainly invested in writing narratives that follow the STORIUM format.
Conclusion
We introduce the STORIUM dataset and evaluation platform for machine-in-the-loop story generation, built from an online collaborative storytelling community. STORIUM contains 6K long stories annotated with structural metadata useful for conditioning language models. Importantly, real STORIUM authors evaluate model outputs by adding and removing text to create their own stories. We devise a metric on top of their edits that correlates strongly with judgments of the relevance of the generated text, which user interviews suggest is the most important area for improvement moving forward. Our dataset and evaluation platform will be made publicly available to spur progress into story generation.
Author Contributions
Dataset Analysis: Akoury, Wang
Generation Model: Akoury, Wang
Evaluation Platform: Akoury, Whiting, Hood
Research Guidance: Iyyer, Peng

Table A1: We ask annotators to determine how frequently cards influence an entry, and if the entry addresses the challenge. † Annotators were asked to flag stories not written in English or otherwise could not be understood.
Additionally, there are many small details which are important distinctions in the game, but may not require separate modeling for generating a scene entry. For example, there is a distinction between regular cards, which have a fixed title and description provided by the narrator, and wild cards, which allow individual characters to write their own title and description. For the sake of completeness, we provide Table A2 to help further explore the depths of this unique dataset. The following histograms 1 further break down the data in Table A2, clearly demonstrating the long tail distributions indicative of user-generated stories:
B Web Service
Our web service is modular and allows easily adding new models. It consists of a frontend service, which acts as a mediator between STORIUM and each backend service responsible for serving model outputs. The frontend stores data in a PostgreSQL database and provides a dashboard for viewing realtime ratings and evaluation metrics. It also displays user comments, scene entry diffs based on user edits, and Pearson's r correlations among metrics and user ratings, all sortable per model. A new model can be served by simply implementing four methods (startup, shutdown, preprocess, and generate). The backend automatically installs all Python requirements for serving a model and is agnostic to the underlying tensor library used. Additionally, we follow the latest best practices, including the use of Docker containers and the Asynchronous Server Gateway Interface (ASGI), 2 the latest Python web standard, which allows for asynchronous programming using asyncio. 3 We host the web service using an on-premise server with four 2080Ti GPUs.
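The general shape of such a backend is sketched below; the method signatures are illustrative only and should not be read as the platform's exact API.

    class StoryGenerationBackend:
        """Illustrative shape of a model backend; the real interface may differ."""

        async def startup(self):
            # Load model weights and tokenizer, move the model to a GPU, etc.
            ...

        async def shutdown(self):
            # Release GPU memory and any other open resources.
            ...

        async def preprocess(self, story_context: dict) -> dict:
            # Pack the STORIUM context (scene intro, cards, previous entry, ...)
            # into whatever inputs `generate` expects.
            ...

        async def generate(self, model_inputs: dict) -> str:
            # Return the suggested scene entry as plain text.
            ...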
C User Story Edit Ratings
Recently, the discriminative power of BLEU has been called into question when evaluating state-of-the-art machine translation systems, leading researchers to investigate alternative evaluation metrics (Freitag et al., 2020; Sellam et al., 2020). Similarly, we question the use of ROUGE metrics for automatic evaluation of open-ended story generation. Using our evaluation platform, we show that USER improves upon ROUGE in the story generation domain. When evaluating story continuations, we cannot compare against an a priori gold standard. Rather, we consider the final published story a user generates to be the gold standard, and thus evaluate models by how much text the user retains. Using ROUGE-L precision, which simply computes the ratio of the longest common subsequence (LCS) with the number of tokens in the generated text, we can measure this quantity.
As highlighted by Lin (2004), ROUGE-L contains a subtle mismatch with expectations, as the LCS does not consider locality of matches, assigning equal weight to subsequences of the same length even when the distance between matched words differs. Given a reference sequence X, the following two candidate sequences Y1 and Y2 produce the same ROUGE-L score (an underscore indicates a subsequence match): ROUGE-W tries to address this shortcoming by introducing a weighting which favors subsequences with less separation. Sadly, for long texts, both ROUGE-L and ROUGE-W often favor long subsequences of stopwords over contiguous substrings, the latter being a clear sign that a user used part of the output unchanged. While acceptable for short summaries, this is much less appropriate for long-form open-ended text generation. Removing stopwords helps alleviate the mismatch, so we do so in our comparison to ROUGE (Table A4), though the fundamental issue still remains. This mismatch calls into question the ability of ROUGE-L and ROUGE-W to distinguish among models with strong story generation capability.
Our new metric, User Story Edit Ratings (USER), is based on a diff-like approach. We begin by applying the same text preprocessing as ROUGE, after which we find the longest contiguous substring, use it as a pivot to divide the remaining string into two halves (excluding the pivot), and recursively repeat the process in each half. 4 We then only consider substrings with at least one non-stopword as matches (careful scrutiny of Figure 1 reveals an unmatched stopword it). Subsequently, we compute precision, recall, and F1 identically to ROUGE. Table A3 shows USER correlates with user judgments approximately as well as the ROUGE metrics, while also correlating strongly with both of them. Additionally, USER produces lower scores on average compared to ROUGE (Table A4). Taken in combination, these insights indicate USER is better capable of discerning differences among the strong story generation models of the future, as it provides more stark evaluations while still correlating well with human judgments.
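Since footnote 4 notes that the matching step relies on Python's difflib, a compact sketch of the whole metric along those lines is given below; the tokenization and stopword list are placeholders, and the recursion mirrors the description above rather than the released implementation.

    from difflib import SequenceMatcher

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "it", "is", "was"}  # placeholder list

    def matched_spans(x_tokens, y_tokens):
        """Recursively find longest contiguous matches, pivoting around each match."""
        match = SequenceMatcher(None, x_tokens, y_tokens, autojunk=False).find_longest_match(
            0, len(x_tokens), 0, len(y_tokens))
        if match.size == 0:
            return []
        span = x_tokens[match.a:match.a + match.size]
        left = matched_spans(x_tokens[:match.a], y_tokens[:match.b])
        right = matched_spans(x_tokens[match.a + match.size:], y_tokens[match.b + match.size:])
        return left + [span] + right

    def user_precision(generated, published):
        """USER precision: fraction of generated tokens covered by kept, non-stopword spans."""
        x, y = generated.lower().split(), published.lower().split()
        spans = [s for s in matched_spans(x, y) if any(t not in STOPWORDS for t in s)]
        matched = sum(len(s) for s in spans)
        return matched / len(x) if x else 0.0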
1 These histograms provide context for the meaning of Mean and Std Dev for Table A2.
Table 1:
Dataset             # Stories   # Tokens per Story   Prompts   Turns   Annotations
roleplayerguild     1,439       3,079 *              ✗         ✓       ✗
PG-19               28,752      68,973               ✗         ✗       ✗
ROCStories          98,156      88                   ✓         ✗       ✗
r/writingprompts    303,358     735                  ✓         ✗       ✗
STORIUM             5,743       19,278               ✓ †       ✓       ✓
Table 3: Topics with the highest relative importance for a sample of STORIUM worlds, which illustrate the diversity of the dataset.

6 glove.840B.300d
Table 4: Exploratory experiments indicate optimally packing tokens using Cassowary (Cas), and including more history (His) is key to achieving low perplexity (Ppl), along with high fluency (F), coherence (C), likability (L), and relevance (R) based on a number of user judgments (Jdg).
                   Lik     Flu     Coh     USER     Rating
Rel    top-k       0.51    0.28    0.55    0.51     2.55
       nucleus     0.53    0.40    0.57    0.39     2.47
Lik    top-k       -       0.28    0.35    0.34     3.32
       nucleus     -       0.38    0.55    0.35     3.21
Flu    top-k       -       -       0.54    0.13 †   3.96
       nucleus     -       -       0.61    0.23     3.76
Coh    top-k       -       -       -       0.25     3.41
       nucleus     -       -       -       0.36     2.96
USER   top-k       -       -       -       -        15.63
       nucleus     -       -       -       -        9.86

Table 5: Despite its low rating, relevance is clearly important as indicated by the moderately strong Pearson's r correlations (first four columns) with USER and the remaining human judgments. All correlations are significant (p < 0.01), except those indicated by † (p > 0.05).
Table 6: Despite our best efforts, our first crowdsourced judgments show low agreement (κ) on open-ended story generation. Our second run, which removes context, thus excluding relevance judgments, greatly increases agreement for fluency and coherence.

Table A2: A small look at the highly compositional nature of our dataset.
              Likability    Fluency       Coherence     ROUGE-L       ROUGE-W       USER
              top-k  nuc    top-k  nuc    top-k  nuc    top-k  nuc    top-k  nuc    top-k  nuc
Relevance     0.51   0.53   0.28   0.40   0.55   0.57   0.52   0.38   0.50   0.36   0.51   0.39
Likability    -      -      0.28   0.38   0.35   0.55   0.29   0.34   0.28   0.31   0.34   0.35
Fluency       -      -      -      -      0.54   0.61   0.11 † 0.23   0.10 † 0.22   0.13 † 0.23
Coherence     -      -      -      -      -      -      0.27   0.38   0.24   0.34   0.25   0.36
ROUGE-L       -      -      -      -      -      -      -      -      0.98   0.98   0.95   0.93
ROUGE-W       -      -      -      -      -      -      -      -      -      -      0.97   0.94

Table A3: USER correlates well with both ROUGE-L and ROUGE-W when removing stopwords.
           Top-k             Nucleus
           Score    Count    Score    Count
ROUGE-L    28.61    174      20.66    178
ROUGE-W    20.73    174      13.80    178
USER       15.63    174      9.86     178

Table A4: USER produces lower scores on average than ROUGE-L or ROUGE-W.
4 While Fan et al. (2019) extract internal structure via SRL, this is not inherent to the dataset, and can be applied to other datasets, including our own.
5 Preliminary experiments with LDA (Blei et al., 2003) yielded less coherent topics, which is consistent with evaluations in Iyyer et al. (2016).
8 Our dataset also enables modeling high-level decisions made by the narrator, such as challenge sequencing; we leave this for future work.
9 If the preceding entry is not written by the current character, we also include the current character's last entry.
10 They also provide optional freeform comments on generated text; we leave analysis of the comments to future work.
11 We use a temperature of 0.9, a repetition penalty (Keskar et al., 2019) of 1.2, and an analogous length penalty that dynamically penalizes producing the end of sequence token inversely proportionally to a desired length l_d.
12 It is possible that a better set of sampling hyperparameters exists, which we leave to future work.
13 See the supplementary HTML for an export of all results (including generated text and edits) used for this paper.
2 FastAPI (https://fastapi.tiangolo.com)
3 https://docs.python.org/3/library/asyncio.html
4 We use SequenceMatcher from Python's difflib: https://docs.python.org/3/library/difflib.html
Acknowledgements

We thank the wonderful STORIUM users for actively using our story generation models and generously providing their time to be interviewed. We also thank the amazing UMass NLP community for thoughtful insights on our paper and helping to validate whether structural metadata influences story text on STORIUM. Akoury and Iyyer were supported during this project by a research gift from Genpact. Peng was supported in part by the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).

Appendix

A Additional Dataset Statistics

As our dataset derives from a collaborative storytelling game that is highly compositional by nature, it is difficult to concisely capture the full scope of the data within the main body. Here we highlight the full results of our small scale annotation that indicates cards influence the scene entry text.
Maria Antoniak, David Mimno, and Karen Levy. 2019. Narrative paths and negotiation of power in birth stories. In Proceedings of the ACM Conference on Computer-Supported Cooperative Work and Social Computing.
Greg J. Badros, Alan Borning, and Peter J. Stuckey. 2001. The Cassowary linear arithmetic constraint solving algorithm. ACM Trans. Comput. Hum. Interact., 8:267-306.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research.
Charles B. Callaway and James C. Lester. 2001. Narrative prose generation. Artif. Intell., 139:213-252.
Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. 2018. Creative writing with a machine in the loop: Case studies on slogans and stories. In 23rd International Conference on Intelligent User Interfaces.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the Association for Computational Linguistics.
Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In Proceedings of the Association for Computational Linguistics.
Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be guilty but references are not innocent. ArXiv, abs/2004.06063.
Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, and Nanyun Peng. 2020. Content planning for neural story generation with aristotelian rescoring. In the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Seraphina Goldfarb-Tarrant, Haining Feng, and Nanyun Peng. 2019. Plan, write, and revise: An interactive system for open-domain story generation. In NAACL-HLT, system demonstration.
Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93-108.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In ICLR.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan L. Boyd-Graber, and Hal Daume III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In NAACL-HLT.
Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. ArXiv, abs/1909.05858.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In ACL 2004.
Annie Louis and Charles Sutton. 2018. Deep Dungeons and Dragons: Learning character-action interactions from role-playing game transcripts. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 708-713, New Orleans, Louisiana. Association for Computational Linguistics.
Huanru Henry Mao, Bodhisattwa Prasad Majumder, Julian McAuley, and Garrison Cottrell. 2019. Improving neural story generation by targeted common sense grounding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5988-5993, Hong Kong, China. Association for Computational Linguistics.
Neil Duncan McIntyre and Mirella Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In ACL/IJCNLP.
James R. Meehan. 1977. Tale-spin, an interactive program that writes stories. In IJCAI.
Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguistics.
Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pages 43-49, New Orleans, Louisiana. Association for Computational Linguistics.
Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5043-5053, Hong Kong, China. Association for Computational Linguistics.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Compressive transformers for long-range sequence modelling. Jack W Rae, Anna Potapenko, M Siddhant, Chloe Jayakumar, Timothy P Hillier, Lillicrap, International Conference on Learning Representations. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Com- pressive transformers for long-range sequence mod- elling. In International Conference on Learning Rep- resentations.
Interactive narrative: An intelligent systems approach. O Mark, Vadim Riedl, Bulitko, AI Magazine34Mark O. Riedl and Vadim Bulitko. 2013. Interactive nar- rative: An intelligent systems approach. AI Magazine, 34:67-77.
An intent-driven planner for multi-agent story generation. O Mark, Robert Michael Riedl, Young, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems. the Third International Joint Conference on Autonomous Agents and Multiagent SystemsMark O. Riedl and Robert Michael Young. 2004. An intent-driven planner for multi-agent story generation. Proceedings of the Third International Joint Confer- ence on Autonomous Agents and Multiagent Systems, 2004. AAMAS 2004., pages 186-193.
Creative help: a story writing assistant. Melissa Roemmele, Andrew S Gordon, International Conference on Interactive Digital Storytelling. Melissa Roemmele and Andrew S Gordon. 2015. Cre- ative help: a story writing assistant. In International Conference on Interactive Digital Storytelling.
Rigid rules, inflexible plans, and the stifling of language: A cognitivist analysis of writer's block. College Composition and Communication. Mike Rose, 31Mike Rose. 1980. Rigid rules, inflexible plans, and the stifling of language: A cognitivist analysis of writer's block. College Composition and Communi- cation, 31(4).
Quality signals in generated stories. Manasvi Sagarkar, John Wieting, Lifu Tu, Kevin Gimpel, 10.18653/v1/S18-2024Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. the Seventh Joint Conference on Lexical and Computational SemanticsNew Orleans, LouisianaAssociation for Computational LinguisticsManasvi Sagarkar, John Wieting, Lifu Tu, and Kevin Gimpel. 2018. Quality signals in generated stories. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 192- 202, New Orleans, Louisiana. Association for Com- putational Linguistics.
The design of writing buddy: A mixedinitiative approach towards computational story collaboration. Ben Samuel, Michael Mateas, Noah Wardrip-Fruin, ICIDS. Ben Samuel, Michael Mateas, and Noah Wardrip-Fruin. 2016. The design of writing buddy: A mixed- initiative approach towards computational story col- laboration. In ICIDS.
Do massively pretrained language models make better storytellers?. Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D Manning, Conference on Computational Natural Language Learning. Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. Do massively pretrained language models make better storytellers? In Conference on Computational Nat- ural Language Learning.
Bleurt: Learning robust metrics for text generation. Thibault Sellam, Dipanjan Das, Ankur P Parikh, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsLong Papers)Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. Bleurt: Learning robust metrics for text gen- eration. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Computational Linguistics.
Transfertransfo: A transfer learning approach for neural network based conversational agents. Thomas Wolf, Victor Sanh, Julien Chaumond, Clement Delangue, abs/1901.08149ArXiv. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversa- tional agents. ArXiv, abs/1901.08149.
Plan-andwrite: Towards better automatic storytelling. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, Rui Yan, Association for the Advancement of Artificial Intelligence. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan-and- write: Towards better automatic storytelling. In Asso- ciation for the Advancement of Artificial Intelligence.
| [] |
[
"FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference",
"FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference"
] | [
"Michiel De Jong \n‡ Google Research\nUniversity of Southern California\n\n",
"Yury Zemlyanskiy \n‡ Google Research\nUniversity of Southern California\n\n",
"Joshua Ainslie \n‡ Google Research\nUniversity of Southern California\n\n",
"Nicholas Fitzgerald \n‡ Google Research\nUniversity of Southern California\n\n",
"Sumit Sanghai \n‡ Google Research\nUniversity of Southern California\n\n",
"Sha ‡ Fei \n‡ Google Research\nUniversity of Southern California\n\n",
"William W Cohen \n‡ Google Research\nUniversity of Southern California\n\n"
] | [
"‡ Google Research\nUniversity of Southern California\n",
"‡ Google Research\nUniversity of Southern California\n",
"‡ Google Research\nUniversity of Southern California\n",
"‡ Google Research\nUniversity of Southern California\n",
"‡ Google Research\nUniversity of Southern California\n",
"‡ Google Research\nUniversity of Southern California\n",
"‡ Google Research\nUniversity of Southern California\n"
] | [] | Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledgeintensive NLP tasks. However, the architecture used for FiD was chosen by making minimal modifications to a standard T5 model, which our analysis shows to be highly suboptimal for a retrieval-augmented model. In particular, FiD allocates the bulk of FLOPs to the encoder, while the majority of inference time results from memory bandwidth constraints in the decoder. We propose two simple changes to the FiD architecture to alleviate memory bandwidth constraints, and speed up inference by 7x. This allows us to use a much larger decoder at modest cost. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large. | 10.48550/arxiv.2212.08153 | [
"https://export.arxiv.org/pdf/2212.08153v2.pdf"
] | 254,823,295 | 2212.08153 | a7ca1bce0af7fe4703f5c3296db2dcc8dc112f20 |
FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference
Michiel De Jong
‡ Google Research
University of Southern California
Yury Zemlyanskiy
‡ Google Research
University of Southern California
Joshua Ainslie
‡ Google Research
University of Southern California
Nicholas Fitzgerald
‡ Google Research
University of Southern California
Sumit Sanghai
‡ Google Research
University of Southern California
Sha ‡ Fei
‡ Google Research
University of Southern California
William W Cohen
‡ Google Research
University of Southern California
FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledgeintensive NLP tasks. However, the architecture used for FiD was chosen by making minimal modifications to a standard T5 model, which our analysis shows to be highly suboptimal for a retrieval-augmented model. In particular, FiD allocates the bulk of FLOPs to the encoder, while the majority of inference time results from memory bandwidth constraints in the decoder. We propose two simple changes to the FiD architecture to alleviate memory bandwidth constraints, and speed up inference by 7x. This allows us to use a much larger decoder at modest cost. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
Introduction
A large body of work has demonstrated that language model performance on downstream tasks can be improved by augmenting the model with relevant retrieved text (Guu et al., 2020;Izacard and Grave, 2021;Izacard et al., 2022). In particular, the Fusion-in-Decoder (FiD) architecture (Izacard and Grave, 2021) stands out for strong performance, even outperforming much larger models on many knowledge-intensive tasks (Izacard et al., 2022). However, FiD uses a standard T5 encoder-decoder architecture which was not designed for use as a retrieval-augmented model. In this work we propose FiDO, a modified FiD architecture optimized for the retrieval-augmented setting.
The FiD decoder is responsible for a difficult task, assimilating information from many passages and reasoning over the information to generate an output. However, because the encoder and decoder are similar size and the encoder is applied to a large number of retrieved passages, FiD devotes an order of magnitude more Floating Point Operations (FLOPs) to the encoder than the decoder. In spite of this, the majority of inference time is actually spent in the decoder, as has been observed in prior work (Hofstätter et al., 2022). This surprising result is shown in Figure 1. Our analysis finds that for typical inference settings the FiD decoder is memory-bandwidth bound (Williams et al., 2009) due to using multi-head cross-attention (Vaswani et al., 2017) over a large input sequence.
Based on this analysis, we propose two sets of architectural changes. We first propose to reduce the cost of cross-attention over retrieved passages by removing most cross-attention layers from the decoder. This reduces cost and yields much smaller losses in performance than FiD-Light (Hofstätter et al., 2022), the best previously-proposed approach for optimizing FiD. We also replace multi-head attention with multi-query attention (Shazeer, 2019). With these modifications the memory-bandwidth bottleneck is eliminated: decoder inference is now orders of magnitude faster and most inference time is spent in the encoder, consistent with the balance of FLOPs between components. Finally, we propose to partially rebalance compute towards the decoder by massively scaling decoder size, using a smaller encoder to extract information from retrieved passages and a larger decoder to assimilate the information and reason about the desired output. We refer to the resulting series of models as FiDO (Fusion in Decoder Optimized) and show that FiDO strongly outperforms standard FiD models on the question-answering datasets Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017) and WebQuestions (Berant et al., 2013) for a wide range of inference budgets and settings. Figure 2 summarizes some of these results.
Analysis
Retrieval-augmented models generally read many context tokens relative to the number of question or answer tokens, such that processing retrieved text consumes the bulk of FLOPs. However, past work has shown that most inference time for Fusion-in-Decoder (FiD) is spent in the decoder (Hofstätter et al., 2022). Our own experiments support this ( Figure 1). This section investigates FiD's computational structure and decoder inference speed, and finds the slower decoder speed to be the result of memory bandwidth constraints, exacerbated by attention over retrieved documents.
Fusion-in-Decoder
The backbone of the Fusion-in-Decoder model (Izacard and Grave, 2021) is a T5 encoder-decoder architecture. The model is provided a question or other input, as well as a number of relevant retrieved text passages. The question is prepended to each retrieved passage, and then the encoder is applied to each passage separately. The resulting representations are concatenated. Finally, the decoder cross-attends to the large number of concatenated representations and assimilates the information from the different passages to generate an answer, hence Fusion-in-Decoder.
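To make this computational structure concrete, the following is a minimal sketch in Python; the encoder and decoder callables and their shapes are illustrative assumptions rather than the actual T5X implementation.

import numpy as np

def fid_forward(encoder, decoder, question_ids, passage_ids_list):
    """Illustrative Fusion-in-Decoder forward pass.

    encoder(ids) -> [passage_len, d] token representations (assumed interface)
    decoder(encoder_states) -> generated answer ids (assumed interface)
    """
    # Encode each (question + passage) pair independently.
    per_passage_states = []
    for passage_ids in passage_ids_list:
        inputs = np.concatenate([question_ids, passage_ids])   # prepend question
        per_passage_states.append(encoder(inputs))             # [n_p, d]

    # Concatenate all passage representations into one long sequence:
    # the decoder cross-attends over n_s = num_passages * n_p tokens.
    encoder_states = np.concatenate(per_passage_states, axis=0)  # [n_s, d]

    # The decoder fuses information across passages while generating the answer.
    return decoder(encoder_states)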
FLOPs of FiD model
Model speed is determined by the number of FLOPs and the speed at which computations are performed, typically measured in floating point operations per second (FLOP/s). Operations in a Transformer can be roughly divided into MLP layers, attention projection layers, and attention operations. For simplicity, we count only multiplication operations.
Let d be the dimension of the model, n_s the total number of tokens across all passages, n_p the number of tokens in a single retrieved passage, n_t the number of tokens in the target, L the number of layers, and assume the MLP dimension is 4d. Per layer, the MLP consumes 8 n_s d^2 FLOPs and the attention projections 4 n_s d^2 FLOPs; the attention computation itself is of order n_s n_p d per layer and we ignore it here since n_p is small relative to d. The number of FLOPs used in the encoder is therefore approximately

$\mathrm{FLOPs}_{\mathrm{enc}} \approx 12\, n_s d^2 \cdot L$   (1)

The output length n_t ≪ n_s, d, so the only non-negligible term for decoder FLOPs originates from the cross-attention key and value projections, which cost the same FLOPs as encoder key and value projections:

$\mathrm{FLOPs}_{\mathrm{dec}} \approx 2\, n_s d^2 \cdot L$   (2)

We see that the decoder consumes roughly 1/6 the FLOPs of the encoder.

Figure 1 shows that actual measured training time closely mirrors this FLOPs approximation. However, the decoder is much more expensive for inference. We argue below that this is because the decoder is memory-bandwidth constrained during inference, specifically in the cross-attention layers.
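As a quick sanity check on these estimates, the back-of-the-envelope calculation below (plain Python, with illustrative values for a Base-sized FiD model reading 40 passages of 256 tokens) reproduces the roughly 6:1 encoder-to-decoder FLOPs ratio.

# Illustrative FLOPs estimate following Equations (1) and (2).
d = 768            # model dimension (T5-Base, illustrative)
L = 12             # number of layers
n_p = 256          # tokens per retrieved passage
num_passages = 40
n_s = num_passages * n_p        # total source tokens across passages

flops_enc = 12 * n_s * d**2 * L   # MLP + attention projections, Eq. (1)
flops_dec = 2 * n_s * d**2 * L    # cross-attention K/V projections, Eq. (2)

print(f"encoder GFLOPs ~ {flops_enc / 1e9:.1f}")
print(f"decoder GFLOPs ~ {flops_dec / 1e9:.1f}")
print(f"encoder/decoder ratio ~ {flops_enc / flops_dec:.1f}")  # = 6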
Effective computational throughput
In order to perform computations, accelerators must transmit data between global memory and registers, which can be a limiting factor. The actual FLOP/s achieved can be usefully modeled with the roofline model (Williams et al., 2009;Ofenbeck et al., 2014;Mohan, 2018) as the lesser of peak FLOP/s the device is capable of and how fast required data can be transferred. The data constraint is given by the product of device memory bandwidth -how fast data can be transferred -and operational intensity -how many operations are performed per unit of data. The latter is determined by an algorithm's degree of data reuse, the number of operations that can be performed before new data needs to be fetched.
High operational intensity is necessary for good performance on modern GPU/TPU hardware, for which peak FLOP/s are usually two orders of magnitude larger than memory bandwidth (Google, 2022; NVIDIA, 2022). If operational intensity is too low, the accelerator will spend the majority of its time waiting for data to be transferred to registers. Usually, that happens when the model performs minor computations with large tensors repeatedly, for example in normalization layers or during incremental decoding.
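The roofline bound can be written down directly. The sketch below (plain Python, with hypothetical accelerator numbers chosen only for illustration) computes attainable throughput as the minimum of peak compute and memory bandwidth times operational intensity.

def attainable_flops(peak_flops, mem_bandwidth, operational_intensity):
    """Roofline model: achievable FLOP/s is limited either by peak compute
    or by how fast operands can be streamed from memory."""
    return min(peak_flops, mem_bandwidth * operational_intensity)

# Hypothetical accelerator: 275 TFLOP/s peak, 1.2 TB/s memory bandwidth.
peak, bw = 275e12, 1.2e12
for intensity in [1, 10, 100, 1000]:    # FLOPs performed per byte moved
    tflops = attainable_flops(peak, bw, intensity) / 1e12
    print(f"intensity {intensity}: {tflops:.1f} TFLOP/s")
# Only at high operational intensity does the device reach its compute roof;
# at low intensity throughput is proportional to memory bandwidth.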
Operational intensity of FiD inference
Shazeer (2019) shows that the speed of incremental Transformer decoding is memory-bandwidth bound due to low operational intensity. Here we follow their analysis and derive the asymptotic inverse of operational intensity, the ratio of memory operations to the compute performed during each incremental decoding step, for FiD. Let b be the batch size, h the number of attention heads, and assume that attention heads have dimension d_h.

Operational intensity of MLP layer. For each token the linear projections perform O(bd^2) operations and load O(bd + d^2) memory, where bd corresponds to activations and d^2 to the weight matrices. During training, sequence length effectively multiplies batch size as weights need to be loaded only once for the entire sequence, but for inference each token is processed incrementally. The inverse operational intensity is then

$R_{\mathrm{MLP}} = \frac{1}{b} + \frac{1}{d}$   (3)

Therefore, obtaining high operational intensity of the MLP layer ($R_{\mathrm{MLP}} \ll 1$) during inference requires a large batch size.
Operational intensity of attention layers. Memory bandwidth is a more severe bottleneck for attention inference, particularly cross-attention. At each decoding step the model applies projections for a single token, and has to load all cached key and value projections from encoder tokens and prior decoder tokens into memory. This leads to very low operational intensity. Specifically, query/key/value/output projections for a single position take O(bd^2) operations. As discussed earlier, we can ignore the attention computation itself. The model needs to load projection matrices (O(d^2) memory) and past keys and values (O(bnd) memory). Therefore, the inverse operational intensities for self-attention layers, R_S-MHA, and cross-attention layers, R_C-MHA, are

$R_{\mathrm{S\text{-}MHA}} = \frac{1}{b} + \frac{n_t}{d}, \qquad R_{\mathrm{C\text{-}MHA}} = \frac{1}{b} + \frac{n_s}{d}$   (4)

Because the source input length n_s is extremely long for FiD, the cross-attention operational intensity is very low, which bottlenecks inference.
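To see why cross-attention is the bottleneck, a small numerical sketch (plain Python, with illustrative Base-sized values) evaluates the inverse operational intensities of Equations (3) and (4); the larger the value, the more memory-bound the layer.

# Inverse operational intensity: memory traffic per FLOP (higher = more memory-bound).
b, d = 64, 768                 # batch size, model dimension (illustrative)
n_t, n_s = 32, 40 * 256        # target length, total source tokens

r_mlp   = 1 / b + 1 / d        # Eq. (3)
r_self  = 1 / b + n_t / d      # Eq. (4), self-attention
r_cross = 1 / b + n_s / d      # Eq. (4), cross-attention

print(f"MLP: {r_mlp:.3f}  self-attn: {r_self:.3f}  cross-attn: {r_cross:.2f}")
# The n_s/d term makes cross-attention over 40 retrieved passages almost three
# orders of magnitude more memory-bound than the MLP layers at this batch size.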
Method
We have shown that the encoder accounts for the bulk of FiD FLOPs and training cost, while FiD spends the majority of inference time in the decoder due to the low operational intensity of cross-attention layers. Next we propose several ways to alleviate the decoder bottleneck. This allows us to efficiently allocate more compute to the decoder by scaling decoder size without significantly increasing inference time. We denote Fusion-in-Decoder with the proposed optimizations as FiDO (Fusion-in-Decoder Optimized).
Model          Pre-training   Fine-tuning
Vanilla FiD    219.9          9.7
+ LSA          247.0          11.8
+ MQ           248.0          11.8
+ XL Decoder   81.9           6.9
Layer-sparse cross-attention
The decoder cross-attention layer is the primary bottleneck for inference due to its low operational intensity. FiD-Light (Hofstätter et al., 2022) improves the operational intensity by reducing the effective input length by a factor of K. We instead propose to remove cross-attention from some decoder layers entirely, keeping cross-attention only in one out of every K decoder layers. We call this layer-sparse cross-attention (LSA). Section 5 provides evidence that LSA achieves similar speedups without FiD-Light's drop in quality. For FiDO we use LSA with sparsity K = 6, which means that a Large decoder has cross-attention only at layers 6, 12, 18 and 24. In principle LSA and FiD-Light can be combined, but we find that after applying LSA and multi-query attention the remaining cross-attention makes up a small proportion of decoder inference cost and further speedups from reducing cross-attention are modest (Figure 4). Removing cross-attention layers also reduces FiD's FLOPs and memory usage. Cross-attention layers make up approximately 1/7 of total FiD FLOPs (see Eqn 2) and applying LSA-6 leads to a 12% reduction in FLOPs. Table 2 shows the reduction in FLOPs is reflected by an increase in training speed. Moreover, cross-attention keys and values make up a substantial proportion of memory usage during inference, and LSA-6 enables a much larger batch size (Table 1).
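A minimal sketch of the layer layout follows (Python; the layer description is a simplified placeholder, not our actual model code): only every K-th decoder layer keeps a cross-attention block, while the rest contain only self-attention and MLP sublayers.

def build_decoder_layers(num_layers, cross_attention_period):
    """Keep cross-attention only in one out of every `cross_attention_period`
    decoder layers (LSA-K). Layer specs here are illustrative placeholders."""
    layers = []
    for i in range(1, num_layers + 1):
        layers.append({
            "self_attention": True,
            "cross_attention": (i % cross_attention_period == 0),
            "mlp": True,
        })
    return layers

# LSA-6 on a 24-layer (Large) decoder: cross-attention at layers 6, 12, 18, 24.
layers = build_decoder_layers(num_layers=24, cross_attention_period=6)
print([i + 1 for i, layer in enumerate(layers) if layer["cross_attention"]])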
Multi-query attention
Shazeer (2019) proposes to increase the operational intensity of decoder attention layers by applying multi-query attention, in which keys and values share a single head each and only queries have multiple heads. With a single head, keys and values use a factor h less memory and are much faster to load. With multi-query attention, keys and values occupy O(bnd/h) memory, so that the inverse operational intensity of cross-attention becomes
$R_{\mathrm{C\text{-}MQA}} = \frac{1}{b} + \frac{1}{d} + \frac{n_s}{dh}$   (5)

which has the problematic term $\frac{n_s}{d}$ reduced by a factor of h. Multi-query attention further reduces inference cost (Figure 2) and memory (Table 1) on top of layer-sparse cross-attention, though not training speed (Table 2).
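The sketch below illustrates multi-query attention with jax.numpy (shapes and projection handling are simplified assumptions): queries keep h heads, while a single shared key and value head is cached, shrinking the cross-attention cache, and hence the memory traffic per decoding step, by a factor of h.

import jax
import jax.numpy as jnp

def multi_query_cross_attention(q, k, v):
    """Multi-query attention for one incremental decoding step.

    q: [h, d_h]      per-head queries for the current target token
    k, v: [n_s, d_h] a single shared key/value head over all source tokens
                     (the cache is h times smaller than with multi-head attention)
    """
    scores = jnp.einsum("hd,nd->hn", q, k) / jnp.sqrt(q.shape[-1])
    weights = jax.nn.softmax(scores, axis=-1)     # [h, n_s]
    return jnp.einsum("hn,nd->hd", weights, v)    # [h, d_h]

# Example with illustrative sizes: 12 query heads, 64-dim heads, 10240 source tokens.
q = jnp.ones((12, 64)); k = jnp.ones((10240, 64)); v = jnp.ones((10240, 64))
out = multi_query_cross_attention(q, k, v)        # shape (12, 64)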
Asymmetric Decoder
Section 2.2 showed that the FiD encoder consumes an order of magnitude more FLOPs than the decoder because the encoder and decoder are the same size but the encoder is applied to many more tokens. After applying layer-sparse cross-attention and multi-query attention, the decoder also takes up much less time for inference. Such an allocation may not be optimal, as the FiD decoder is responsible for a more challenging task than the standard T5 encoder: it has to assimilate and reason over information from many passages.
We propose to partially redress this imbalance through massively scaling the decoder up, by as much as 15x. Because the decoder is applied to fewer tokens, and because increased decoder dimension improves operational efficiency, such scaling only modestly increases inference cost. For example, Figure 2 shows that replacing the Base-sized decoder with an XL-sized decoder increases the total inference time per sample by only 21%. Fine-tuning costs also increase only modestly (Table 2). However, pre-training costs increase more (though still much less than the scaling factor of the decoder), as T5 pre-training uses a much smaller ratio of input length to output length. After reducing the decoder cross-attention memory costs, scaling the decoder only mildly increases activation memory, so that FiDO can still fit much larger batch sizes than vanilla FiD (Table 1). For the FiDO method we use decoders that are typically two T5 sizes larger than the encoder: Small-Large, Base-XL, Large-XXL and XL-XXL (as XXL is the largest T5 model).
Related Work
Retrieval-augmented models There exists a large body of retrieval-augmented approaches. Some particularly well-known models are REALM (Guu et al., 2020), RAG (Lewis et al., 2020), RETRO (Borgeaud et al., 2022) and Fusion-in-Decoder (Izacard and Grave, 2021). FiD in particular has achieved state-of-the-art performance on a wide variety of tasks (Izacard and Grave, 2021; Izacard et al., 2022; Yu et al., 2022b) and in this work we focus on improving the performance-efficiency trade-offs for FiD. RETRO is another closely related retrieval-augmented model, as it uses a small encoder for retrieved context and a larger primary decoder like FiDO does. Unlike RETRO, FiDO's efficiency improvements allow it to tractably attend to many retrieved passages with a much larger decoder.
Efficient Transformers
Our work builds heavily on existing insights into neural network and particularly Transformer speed. Previous work has found that data movement is often a constraining factor for computations on modern devices (Williams et al., 2009; Dao et al., 2022; Shazeer, 2019). Shazeer (2019) shows that autoregressive Transformers are particularly bandwidth bound during inference, and proposes multi-query attention as a partial solution. We find that this is exacerbated by the FiD setting, and adopt multi-query attention for FiDO to ameliorate the problem. Pope et al. (2022) also investigates multi-query attention, primarily in the context of efficient inference and parallelization for very large language models, whereas we focus on performance/cost trade-offs for the retrieval-augmented setting. Another way to alleviate memory bandwidth constraints is to quantize model parameters and possibly activations (Dettmers et al., 2022; Zeng et al., 2022). Quantizing models reduces data that needs to be sent to device registers, and also reduces overall memory usage which allows for larger, more efficient batch sizes. Finally, it is possible to distill (Hinton et al., 2015; Gou et al., 2021) models into a smaller student model, which is cheaper for inference. However, knowledge distillation requires labeling a very large number of samples with the larger model, so reducing the inference costs of larger models is highly valuable.
Efficient retrieval-augmented models FiDO lies in a body of work that attempts to improve the efficiency of retrieval-augmented or long-input models. One direction focuses on reducing the cost of the attention mechanism. LongT5 (Guo et al., 2022) routes long-range attention through a small number of global tokens. FiD-Light (Hofstätter et al., 2022), the most closely related work to FiDO, employs a similar mechanism for FiD, as the decoder attends to only the first 1/K proportion of representations of each retrieved passage. We opt to introduce sparsity in attention layers as in ReadTwice (Zemlyanskiy et al., 2021) instead of attention patterns. FiDO applies cross-attention from the decoder to the encoder in one out of every K layers, which achieves a similar speedup to FiD-Light but with only a minor performance penalty. FiDO also incorporates multi-query attention leading to a further order of magnitude reduction in decoder inference cost, and takes advantage of this to massively scale the decoder.
A different and complementary direction is to reduce the cost of reading retrieved passages. KG-FiD (Yu et al., 2022a) reranks retrieved passages and reads only the top passages, while Varshney et al. (2022) reads more retrieved passages only if it is not confident in its answer. Another approach is to pre-compute and store encoder representations in a memory and directly retrieve representations from memory, rather than re-encoding retrieved text (de Jong et al., 2022;Wu et al., 2022;Li et al., 2022). For standard FiD, the decoder actually makes up the bulk of the inference cost. FiDO reduces the cost of the decoder such that encoding retrieved passages becomes the bottleneck, increasing the benefit of the above approaches.
Experiments
Experiment Setup
Pre-training All models are based on the T5.1.1 architecture (Raffel et al., 2020), pre-trained from scratch on C4 (Dodge et al., 2021) using JAX (Bradbury et al., 2018), FLAX (Heek et al., 2020), and T5X (Roberts et al., 2022). We employ the standard T5 training recipe except for a modified Adafactor (Shazeer and Stern, 2018) optimizer. Appendix A describes training in greater detail.
Downstream evaluation We evaluate FiDO on open-domain question-answering datasets Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017) and WebQuestions (Berant et al., 2013). We report results on the open-domain QA splits from Lee et al. (2019). For all datasets, each sample is paired with a set of 100-word Wikipedia passages ranked by DPR score. The question is prepended to each retrieved passage, and then truncated to 256 tokens. The experiments in the paper use 40 retrieved passages to balance performance and speed, but our results hold across a wide range of retrieved passages.
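A minimal preprocessing sketch for this setup follows (Python); the tokenizer interface and the exact prompt format are illustrative assumptions rather than our actual pipeline.

def build_fid_inputs(question, passages, tokenizer, num_passages=40, max_len=256):
    """Pair the question with each retrieved passage and truncate to max_len tokens.
    `tokenizer` is assumed to map a string to a list of token ids."""
    inputs = []
    for passage in passages[:num_passages]:
        text = f"question: {question} context: {passage}"   # assumed format
        inputs.append(tokenizer(text)[:max_len])
    return inputs   # one token-id list per retrieved passage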
Inference setup For our main results we choose a setting that we believe is most representative for common use of retrieval-augmented models. We perform inference on a single TPUv4 and report inference time per sample (TPS) as measured by xprof (Google, 2020). We use a batch size of 64 (or the largest batch size that fits, if smaller) for the main experiments. Figures 1 and 2 use batch size 24 to ensure a like-for-like comparison, as it is the largest batch size that fits for vanilla FiD. All experiments use 40 passages of 256 tokens and an output size of 32 tokens. Predictions are generated with greedy decoding as we found beam search did not meaningfully improve performance for the considered tasks. Analysis in Section 5.4 investigates how trade-offs change with input and output length, low batch size and different sampling methods.

Main results

Figure 3 shows performance as a function of inference time for FiD and FiDO. FiDO strongly outperforms FiD at any inference budget and achieves the same performance with an order of magnitude faster speed. The following section investigates how each component of FiDO contributes to its performance.

Components

Layer-sparse cross-attention First, Table 3 shows that layer-sparse cross-attention significantly reduces inference cost with modest performance degradation. Separately, Table 4 compares the inference speed and performance impact of layer-sparse cross-attention with the token-sparse cross-attention from FiD-Light. Reducing cross-attention layers and inducing encoder output sparsity by the same factor lead to similar speedups, but layer-sparse cross-attention achieves the inference speedup with a much lower performance penalty. Note that we find a much larger performance degradation from compressing the encoder output in our setting compared to the experiments in Hofstätter et al. (2022). Some exploratory experiments suggest that multi-task training on large amounts of data as done in FiD-Light may ameliorate the performance penalty from compressing encoder output; however, even with such training Hofstätter et al. (2022) still report significant performance degradation, in contrast to LSA.
Layer-sparsity over a factor of 6 incurs greater performance penalties. However, as shown in Figure 4, with LSA-6 cross-attention already makes up a small proportion of total decoder inference cost.

Multi-query attention Table 3 shows that multi-query attention achieves a large cost reduction on top of layer-sparse cross-attention with minimal performance degradation, consistent with our analysis and findings from Shazeer (2019).
Decoder scale We can see in Table 3 that increasing the size of the decoder leads to a significant improvement in performance at the cost of a modest increase in inference time. Figure 5 provides a visual comparison of the performance-inference profile for FiDO with and without asymmetric decoders and shows that asymmetric large decoders achieve a better trade-off.
Other analysis
Model                           NQ     TQA    WQ
REALM (Guu et al., 2020)        40.4   -      40.7
RAG (Lewis et al., 2020)        44.5   56.8   45.2
RETRO (Borgeaud et al., 2022)   45.5   -      -
T5-XXL                          35.2   51.9   42.8
ATLAS (Izacard et al., 2022)    60

Varying input and target length Our main results use a middle-of-the-road setting for FiD applications with a medium number of retrievals and a relatively short output, reflecting common knowledge-intensive tasks. However, it is interesting to ask how FiDO components affect speed for other settings. Figure 6 shows time per sample as a function of retrieved passages and length of the target output for each step from FiD to FiDO.
We first note that layer-sparse cross-attention and multi-query attention are critical across all settings. For standard output length, the asymmetric decoder is cheap for any reasonable number of retrieved passages, becoming negligible as a fraction of total inference time as the number of retrievals increases. As output length increases, the cost of the disproportionately large decoder rises, although it only becomes a substantial proportion of inference time for output length of 256-512 and above. For tasks with long outputs, such as summarization, one may want to reduce the level of decoder asymmetry (e.g. Base-Large rather than Base-XL).
Low batch size setting For our primary investigation we focus on medium batch sizes (24+). There are two reasons one might care about smaller batch sizes: either because larger batches do not fit in memory or because they lead to excessive latency. The first constraint is not binding for FiDO: due to FiDO's memory efficiency we are able to fit larger batches even for the XL-XXL model, and if necessary model size can be further extended with quantization (Zeng et al., 2022) and parallelism (Pope et al., 2022).
For real-time serving latency can be a constraint, but in those settings it is common practice to use much smaller models which are distilled from larger teacher models (Gou et al., 2021). The student models can utilize a higher batch size, while the teacher models do not have latency constraints, so FiDO also applies to this use case.
For rare cases where a lower batch size is required, layer-sparse and multi-query attention are still important, but cannot fully eliminate the decoder as a bottleneck for inference (Table 6). The 1/b term in Equation 5 dominates, reflecting the fact that the model has to repeatedly load model parameters without spreading the cost over many samples.
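A quick calculation (plain Python, with the same illustrative dimensions as before) shows how the 1/b term takes over at batch size 1.

d, h = 768, 12
n_s = 40 * 256
for b in [1, 24, 64]:
    r_mlp = 1 / b + 1 / d                         # Eq. (3)
    r_cross_mq = 1 / b + 1 / d + n_s / (d * h)    # Eq. (5)
    print(f"batch {b}: MLP {r_mlp:.2f}, cross-attn (MQ) {r_cross_mq:.2f}")
# At batch size 1 even the MLP layers have inverse intensity near 1: every weight
# is loaded from memory for a single token of compute, so the whole decoder is
# memory-bound regardless of how cross-attention is structured.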
Instead of scaling the decoder, it would be more cost-effective to apply more expensive sampling methods, because sampling methods increase the effective batch size. For example, beam search with large beams is nearly free at lower batch sizes.
Sampling We do not apply beam search for our main experiments as decoder inference time is proportional to beam width for medium batch sizes and beam search does not improve performance on the considered set of tasks. Instead, we find that scaling decoder size provides a more cost-efficient way to add decoder capacity.
Conclusion
We perform an analysis of the performance-inference speed trade-off for FiD, showing that the encoder uses more FLOPs but most time is spent in the decoder due to memory bandwidth constraints. We propose FiDO, an extension of FiD which removes most cross-attention layers and employs multi-query attention to vastly reduce the cost of the decoder. The resulting model spends most time in the encoder, consistent with compute analysis, which FiDO takes advantage of by strongly increasing the size of the decoder. We show that FiDO achieves much stronger performance for the same inference budget relative to existing FiD models.
Acknowledgements
We thank Livio Baldini Soares, Kenton Lee, Pat Verga, Iftekhar Naim and others at Google Research for insightful advice and discussion. Michiel de Jong is partially supported by NSF Awards IIS-1513966/ 1632803/1833137, CCF-1139148, DARPA Awards#: FA8750-18-2-0117, FA8750-19-1-0504, DARPA-D3M -Award UCB-00009528, Google Research Awards, gifts from Facebook and Netflix, and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.
Limitations
One of the advantages of the Fusion-in-Decoder approach is that it uses the off-the-shelf T5 architecture with publicly available checkpoints. The proposed FiDO modifications strongly improve performance and inference speed for retrieval-augmented question-answering, but require pre-training from scratch. It is in general preferable to have a small number of checkpoints that can be fine-tuned for any application. For example, it may not be feasible to train different giant language models for use in the retrieval-augmented setting. Instead, the architectures for such large models may need to be a compromise for different use cases.
Ethics
In general the ethics concerns for this paper are similar to those for the large body of work studying retrieval-augmented language models. One distinction worth pointing out is that this work proposes a model with faster inference, which makes retrieval-augmented models more feasible to apply in practical settings and serve to users, and therefore inherently carries higher risk.
Figure 1: Shows the percentage of FLOPs in forward pass, training time and inference time for the encoder and decoder for a Fusion-in-Decoder model with 40 retrieved passages and batch size 24. The vast majority of FLOPs and training time originate from the encoder, but the decoder is much more expensive for inference.

Figure 3: MAIN RESULT. FiDO achieves much higher performance for any given inference budget. Exact match on Natural Questions (NaturalQ), TriviaQA and WebQuestions (WebQ) test sets as a function of inference budget (log scale). Compares FiD Small, Base and Large models with FiDO Small-Large, Base-XL, Large-XXL and XL-XXL models.

Figure 4: Cross-attention and total decoder inference time for FiDO Base-XL with varying factors of layer-sparse cross-attention. The main FiDO configuration uses LSA-6 which has cross-attention every 6 layers.

Figure 5: Performance on Natural Questions dev set as a function of inference time for FiDO Small, Base and Large models with and without asymmetric decoder.

Figure 6: Time per sample (TPS) as a function of retrieved passages (left) or the number of generated tokens (right) for Base FiD variants and FiDO-Base-XL.
Figure 2: MAIN RESULT. Layer-sparse cross-attention (LSA) and multi-query (MQ) attention eliminate the bulk of decoder inference cost with minor performance penalty, and the decoder can then be massively scaled up (Dec XL) with only a modest increase in inference time. To the left, encoder and decoder inference time per sample on a single TPUv4 with batch size 24 and 40 retrieved passages for variants of base-sized FiD model. To the right, corresponding exact match performance on Natural Questions (NQ), TriviaQA (TQA) and WebQuestions (WQ) dev sets.
Table 1: Maximum batch size for QA inference with 40 retrieved passages on a single TPUv4 for FiD Base models with different FiDO components.

Model          Max Batch Size
Vanilla FiD    24
+ LSA          128
+ MQ           256
+ XL Decoder   128

Table 2: Pre-training and fine-tuning samples per second per chip for FiD Base model with varying FiDO components. We use 64 TPUv4 chips and batch size 2048 for pre-training and 32 chips and batch size 64 for fine-tuning. See Section 5.1 for training information.
Model                 Total TPS   Decoder TPS   NaturalQ   TriviaQA   WebQ
FiDO (base-XL)        15.8        2.0           48.2       67.3       46.8
no LSA                19.2        5.4           47.9       67.4       46.3
no MQ                 60.8        47.0          48.2       67.5       45.4
no Asym (base-base)   14.4        0.6           46.3       64.9       41.0

Table 3: Inference time per sample, decoder time per sample (ms) and downstream QA exact match for FiDO base-XL with different components ablated separately. FiDO is evaluated on dev sets for ablation results.
Table 5 compares FiDO to published results.
Model       TPS     NQ     TQA    WebQ
FiD         101.8   46.5   65.8   41.8
FiD-Light   28.3    36.3   54.5   30.8
FiD-LSA     29.5    45.8   65.3   41.0

Table 4: Time per sample (ms) and QA exact match for FiD, FiD-Light, and FiD Base-sized models with layer-sparse cross-attention.

Table 5: Comparison of FiDO with published results on Natural Questions, TriviaQA and WebQuestions test sets. We focus on comparing with FiD as other works enhance performance with improved retrieval (such as ATLAS), which is orthogonal to our contributions.
Table 6: Inference time per sample (ms) with batch size 1 for Base FiD with varying FiDO components.
Table 7 compares the performance vs time trade-offs from beam search and scaling the decoder for Natural Questions, and shows that scaling the decoder is significantly more effective. Beam search may be more important for other tasks, such as tasks with longer outputs.

Model              Decoder TPS   NaturalQ
FiD with LSA, MQ   0.6           46.3
+ Beam 4           2.4           46.2
FiDO               2.0           48.2

Table 7: Decoder inference time (ms) and QA exact match for FiD Base models, comparing the trade-offs of beam search versus scaling decoder size.
A Training

All experiments are built on the T5.1.1 architecture with the training recipe from T5 (Raffel et al., 2020). The first exception is the optimizer; we find that the second moment factoring and mixing schedule from Adafactor (Shazeer and Stern, 2018) can lead to instability, especially with unbalanced encoder and decoder sizes. Instead, we disable factoring and second moment mixing, leading to an optimizer that is a hybrid between Adafactor and Adam (Kingma and Ba, 2015). The second difference to the training recipe arises from the observation that FiDO XL-XXL is unstable for the standard training regimen. We solve the instability by restarting from a recent healthy checkpoint with a 10x decreased learning rate, which happened once. During fine-tuning, we load not only model weights but also second moment estimates, which we find leads to better fine-tuning in general and particularly for asymmetric models. We fine-tune with learning rate 0.001 and batch size 64 for all datasets. For evaluation on test sets we select the checkpoint with the best validation performance.
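A rough sketch in the spirit of this optimizer, written with optax, is shown below; it is an assumption-laden illustration rather than the actual T5X configuration, and in particular the change to the second-moment mixing schedule is only approximated by the settings shown.

import optax

# Hedged sketch: Adafactor with second-moment factoring disabled, which keeps
# full Adam-style second-moment statistics; adding momentum moves it further
# toward Adam. The fine-tuning learning rate of 0.001 follows the text.
optimizer = optax.adafactor(
    learning_rate=1e-3,
    factored=False,                    # disable second-moment factoring
    multiply_by_parameter_scale=False,
    momentum=0.9,
)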
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, pages 1533-1544. ACL.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, ICML 2022, volume 162 of Proceedings of Machine Learning Research, pages 2206-2240. PMLR.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. 2018. JAX: composable transformations of Python+NumPy programs.

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. CoRR, abs/2205.14135.

Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, and William W. Cohen. 2022. Mention memory: incorporating textual knowledge into transformers through entity mention attention. In The Tenth International Conference on Learning Representations, ICLR 2022. OpenReview.net.

Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. LLM.int8(): 8-bit matrix multiplication for transformers at scale. CoRR, abs/2208.07339.

Jesse Dodge, Maarten Sap, Ana Marasovic, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, pages 1286-1305. Association for Computational Linguistics.

Google. 2020. Profile your model with cloud TPU tools. https://cloud.google.com/tpu/docs/cloud-tpu-tools. Accessed: 2022-11-11.

Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. 2021. Knowledge distillation: A survey. Int. J. Comput. Vis., 129(6):1789-1819.

Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Ontañón, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2022. LongT5: Efficient text-to-text transformer for long sequences. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 724-736. Association for Computational Linguistics.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929-3938. PMLR.

Jonathan Heek, Anselm Levskaya, Avital Oliver, Marvin Ritter, Bertrand Rondepierre, Andreas Steiner, and Marc van Zee. 2020. Flax: A neural network library and ecosystem for JAX.

Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531.

Sebastian Hofstätter, Jiecao Chen, Karthik Raman, and Hamed Zamani. 2022. FiD-Light: Efficient and effective retrieval-augmented text generation. CoRR, abs/2209.14290.

Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, pages 874-880. Association for Computational Linguistics.

Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. CoRR, abs/2208.03299.

Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 1601-1611. Association for Computational Linguistics.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 6769-6781. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, Conference Track Proceedings.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452-466.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Volume 1: Long Papers, pages 6086-6096. Association for Computational Linguistics.

Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020.

Zonglin Li, Ruiqi Guo, and Sanjiv Kumar. 2022. Decoupled context processing for context augmented language modeling. CoRR, abs/2210.05758.

Ankur Mohan. 2018. Understanding roofline charts.

NVIDIA. 2022. NVIDIA A100 tensor core GPU. https://www.nvidia.com/en-us/data-center/a100/. Accessed: 2022-12-.

Georg Ofenbeck, Ruedi Steinmann, Victoria Caparrós Cabezas, Daniele G. Spampinato, and Markus Püschel. 2014. Applying the roofline model. In 2014 IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2014, pages 76-85. IEEE Computer Society.

Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Anselm Levskaya, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. 2022. Efficiently scaling transformer inference. CoRR, abs/2211.05102.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67.

Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aakanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, and Andrea Gesmundo. 2022. Scaling up models and data with t5x and seqio. arXiv preprint arXiv:2203.17189.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 5418-5426. Association for Computational Linguistics.
Fast transformer decoding: One write-head is all you need. Noam Shazeer, abs/1911.02150CoRRNoam Shazeer. 2019. Fast transformer decoding: One write-head is all you need. CoRR, abs/1911.02150.
Adafactor: Adaptive learning rates with sublinear memory cost. Noam Shazeer, Mitchell Stern, PMLRProceedings of the 35th International Conference on Machine Learning, ICML 2018. the 35th International Conference on Machine Learning, ICML 2018Stockholmsmässan, Stockholm, Sweden80Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmäs- san, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 4603-4611. PMLR.
Can open-domain QA reader utilize external knowledge efficiently like humans? CoRR. Neeraj Varshney, Man Luo, Chitta Baral, 10.48550/arXiv.2211.12707abs/2211.12707Neeraj Varshney, Man Luo, and Chitta Baral. 2022. Can open-domain QA reader utilize external knowledge efficiently like humans? CoRR, abs/2211.12707.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems. Long Beach, CA, USAAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
Roofline: an insightful visual performance model for multicore architectures. Samuel Williams, Andrew Waterman, David A Patterson, 10.1145/1498765.1498785Commun. ACM. 524Samuel Williams, Andrew Waterman, and David A. Pat- terson. 2009. Roofline: an insightful visual perfor- mance model for multicore architectures. Commun. ACM, 52(4):65-76.
Memorizing transformers. Yuhuai Wu, Markus Norman Rabe, Delesley Hutchins, Christian Szegedy, The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event. OpenReview.netYuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transform- ers. In The Tenth International Conference on Learn- ing Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Kg-fid: Infusing knowledge graph in fusion-in-decoder for opendomain question answering. Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng, 10.18653/v1/2022.acl-long.340Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1ACL 2022Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yim- ing Yang, and Michael Zeng. 2022a. Kg-fid: Infus- ing knowledge graph in fusion-in-decoder for open- domain question answering. In Proceedings of the 60th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 4961- 4974. Association for Computational Linguistics.
Generate rather than retrieve: Large language models are strong context generators. Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng Jiang, 10.48550/arXiv.2209.10063abs/2209.10063CoRRWenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022b. Gener- ate rather than retrieve: Large language models are strong context generators. CoRR, abs/2209.10063.
Readtwice: Reading very large documents with memories. Yury Zemlyanskiy, Joshua Ainslie, Philip Michiel De Jong, Ilya Pham, Fei Eckstein, Sha, 10.18653/v1/2021.naacl-main.408Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021Association for Computational LinguisticsYury Zemlyanskiy, Joshua Ainslie, Michiel de Jong, Philip Pham, Ilya Eckstein, and Fei Sha. 2021. Read- twice: Reading very large documents with memories. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5189-5195. Association for Computational Linguis- tics.
. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, 10.48550/arXiv.2210.02414and Jie Tang. 2022. GLM-130B: an open bilingual pre-trained model. CoRR, abs/2210.02414Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, and Jie Tang. 2022. GLM- 130B: an open bilingual pre-trained model. CoRR, abs/2210.02414.
| [] |
[
"QAMPARI: An Open-domain Question Answering Benchmark for Questions with Many Answers from Multiple Paragraphs",
"QAMPARI: An Open-domain Question Answering Benchmark for Questions with Many Answers from Multiple Paragraphs"
] | [
"Samuel Joseph samueljoseph@mail.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\nIsrael\n",
"Amouyal Ohad \nBlavatnik School of Computer Science\nTel Aviv University\nIsrael\n",
"Rubin Ori ohad.rubin@cs.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\nIsrael\n",
"Yoran Tomer Wolfson \nBlavatnik School of Computer Science\nTel Aviv University\nIsrael\n",
"Jonathan Herzig \nBlavatnik School of Computer Science\nTel Aviv University\nIsrael\n",
"Jonathan Berant joberant@cs.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\nIsrael\n"
] | [
"Blavatnik School of Computer Science\nTel Aviv University\nIsrael",
"Blavatnik School of Computer Science\nTel Aviv University\nIsrael",
"Blavatnik School of Computer Science\nTel Aviv University\nIsrael",
"Blavatnik School of Computer Science\nTel Aviv University\nIsrael",
"Blavatnik School of Computer Science\nTel Aviv University\nIsrael",
"Blavatnik School of Computer Science\nTel Aviv University\nIsrael"
] | [] | Existing benchmarks for open-domain question answering (ODQA) typically focus on questions whose answers can be extracted from a single paragraph. By contrast, many natural questions, such as "What players were drafted by the Brooklyn Nets?" have a list of answers. Answering such questions requires retrieving and reading from many passages, in a large corpus. We introduce QAMPARI, an ODQA benchmark, where question answers are lists of entities, spread across many paragraphs. We created QAMPARI by (a) generating questions with multiple answers from Wikipedia's knowledge graph and tables, (b) automatically pairing answers with supporting evidence in Wikipedia paragraphs, and (c) manually paraphrasing questions and validating each answer. We train ODQA models from the retrieve-and-read family and find that QAMPARI is challenging in terms of both passage retrieval and answer generation, reaching an F1 score of 26.6 at best. Our results highlight the need for developing ODQA models that handle a broad range of question types, including single and multi-answer questions. | 10.48550/arxiv.2205.12665 | [
"https://arxiv.org/pdf/2205.12665v2.pdf"
] | 249,062,559 | 2205.12665 | f9cd110f9f020a9e5345aa3f565c7266985ca4ee |
QAMPARI: An Open-domain Question Answering Benchmark for Questions with Many Answers from Multiple Paragraphs
Samuel Joseph Amouyal samueljoseph@mail.tau.ac.il
Blavatnik School of Computer Science
Tel Aviv University
Israel
Ohad Rubin ohad.rubin@cs.tau.ac.il
Blavatnik School of Computer Science
Tel Aviv University
Israel
Ori Yoran
Blavatnik School of Computer Science
Tel Aviv University
Israel
Tomer Wolfson
Blavatnik School of Computer Science
Tel Aviv University
Israel
Jonathan Herzig
Blavatnik School of Computer Science
Tel Aviv University
Israel
Jonathan Berant joberant@cs.tau.ac.il
Blavatnik School of Computer Science
Tel Aviv University
Israel
QAMPARI: An Open-domain Question Answering Benchmark for Questions with Many Answers from Multiple Paragraphs
Existing benchmarks for open-domain question answering (ODQA) typically focus on questions whose answers can be extracted from a single paragraph. By contrast, many natural questions, such as "What players were drafted by the Brooklyn Nets?" have a list of answers. Answering such questions requires retrieving and reading from many passages, in a large corpus. We introduce QAMPARI, an ODQA benchmark, where question answers are lists of entities, spread across many paragraphs. We created QAMPARI by (a) generating questions with multiple answers from Wikipedia's knowledge graph and tables, (b) automatically pairing answers with supporting evidence in Wikipedia paragraphs, and (c) manually paraphrasing questions and validating each answer. We train ODQA models from the retrieve-and-read family and find that QAMPARI is challenging in terms of both passage retrieval and answer generation, reaching an F 1 score of 26.6 at best. Our results highlight the need for developing ODQA models that handle a broad range of question types, including single and multi-answer questions. " Producers Eric Newman and Marc Abraham developed the film […].
Introduction
Open-domain question answering (ODQA) is a core language understanding task concerned with answering factoid questions over large document collections (Voorhees and Tice, 2000; Brill et al., 2002). Due to its wide applicability, ODQA has received substantial attention in recent years (Chen et al., 2017). Typically, systems solving ODQA tasks follow the "retrieve-and-read" paradigm, where a retriever first retrieves a set of candidate passages, followed by a reader which receives the retrieved passages and produces the final answer.
The retrieve-and-read paradigm has been shown to be effective for benchmarks such as Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), where the answer is typically a single phrase from a single passage. However, in many cases, a question might have many answers that are spread across multiple passages. Consider the example in Fig. 1. Eric Newman produced multiple movies, so finding them along with their directors requires incorporating information from many passages. Such questions pose two main challenges to retrieve-and-read systems. First, as there are multiple answers, that can be far apart, the reader model must reason over a long text sequence to generate all of the correct answers. Second, since the reader is computationally constrained to process at most K passages, the retriever must score all necessary passages at its top-K results, which is challenging and even impossible when the number of such passages is ≥K.
While recent works explored questions that involve reading multiple passages, their overall number of passages was quite small. AMBIGQA studied ambiguous questions from NQ with several plausible answers. However, as 70% of its questions have at most 2 answers, retrieve-and-read models can be easily adapted to the AMBIGQA task. The HOTPOTQA (Yang et al., 2018) dataset focused on multi-hop reasoning, but its questions require no more than 2 passages to answer. Last, WIKINLDB (Thorne et al., 2021) was proposed as a benchmark for testing reasoning over multiple facts. However, WIKINLDB restricted its text corpus to databases of 1,000 facts at most, making it significantly smaller than standard ODQA corpora. Moreover, these facts are model-generated utterances rather than natural language passages.
In this work, we present QAMPARI, a benchmark for Questions with many Answers over Multiple Paragraphs, Indeed. All questions in QAMPARI have at least 5 answers, with an average of 13 answers per question. Examples are semi-automatically generated using two data sources, Wikidata (Vrandečić and Krötzsch, 2014) and Wikipedia tables. We automatically generate multi-answer questions of the form "What/Who has [relation] with [entity]?" and convert these into pseudo-language using manually defined templates. We then verify our questions are answerable, given Wikipedia sentences, by automatically extracting evidence passages for all answers. Finally, we use crowdsourcing to validate example correctness, and to paraphrase questions from pseudo-language into natural language (Wang et al., 2015). To further increase the richness of questions, we also generate composition questions that compose two relations (as in Fig. 1), and intersection questions, such as "What movies were produced and directed by Clint Eastwood?". Overall, QAMPARI contains 2K test questions and more than 60K training examples; see Tab. 1 for some examples. We evaluate models from the retrieve-and-read family and find that they struggle on QAMPARI. Specifically, we use a BM25 (Robertson and Zaragoza, 2009) retriever followed by one of two readers: (1) a RAG-style reader that decodes an answer or abstains given each passage independently, and (2) an FiD reader that directly decodes the answer list given encoded representations of many passages. To evaluate, we compare the set of answers predicted by a model to the gold set of answers.
When training models in a multi-task setting of NQ and QAMPARI we observe that QAMPARI is challenging in terms of both passage retrieval and answer generation. Models reach an F 1 score of 26.6 at most. In addition, models are able to return over 80% of the correct answers only for 30% of our examples, well below typical performance on single-answer datasets such as NQ.
To summarize, we present QAMPARI, a challenging benchmark for evaluating the ability of ODQA models to handle questions with many answers over multiple passages from Wikipedia. We advocate to evaluate ODQA models not on QAM-PARI alone, but alongside benchmarks such as NQ and TriviaQA. This joint evaluation would better test for ODQA models ability to handle both singleand multi-answer questions, tests which are conspicuously absent from current benchmarks.
The QAMPARI benchmark, models and relevant codebase are available at: https:// samsam3232.github.io/qampari/.
Dataset Construction
We present our process of generating examples for QAMPARI. Each example in QAMPARI is a triple (q, A, P), where q is a question, A is a set of answers and P is a set of passages from our target corpus. Each answer a ∈ A has 1-2 evidence passages from P (see Fig. 1). We define passages to be consecutive sentences from our corpus (Wikipedia), that span at most 100 words. As our focus is on questions with many answers, all examples in QAMPARI have |A| ≥ 5.
Overview
We generate examples using two steps. First, we generate simple questions that involve a single entity and a single relation, e.g. "Who was drafted by the Brooklyn Nets?" ( §2.1). Then, we expand such questions in order to generate complex questions that involve intersection and composition operations ( §2.2).
To increase diversity, our questions are generated using two data sources, Wikidata and Wikipedia tables. We first describe the example generation over Wikidata, then briefly present the generation process from Wikipedia tables in §2.3. In both cases, we ensure all answers can be derived from evidence passages in Wikipedia. 1 Tab. 1 presents examples from each data source and question type.
Notation We introduce notation for formal queries over Wikidata, used for explaining our example generation process. Wikidata is a knowledge graph, K, that can be viewed as a set of labeled edges (e_1, r, e_2). Graph nodes e_1, e_2 ∈ E are entities which are connected by an edge labeled with the relation r ∈ R. For example, one possible labeled edge is (BarackObama, ReceivedAward, NobelPeacePrize).
One can query K by applying a relation r over an entity e, resulting in a simple query r(e) whose denotation (answer set) is ⟦r(e)⟧ = {e_i | (e_i, r, e) ∈ K}. Composition queries can be formed by applying a relation over the result of a simple query. We denote a composition query by r_2(r_1(e)), and its denotation is ⟦r_2(r_1(e))⟧ = {e_i | ∃e_j s.t. (e_i, r_2, e_j) ∈ K ∧ (e_j, r_1, e) ∈ K}. Last, an intersection query r_1(e_1) ∩ r_2(e_2) corresponds to the intersection of two simple queries, that is, ⟦r_1(e_1) ∩ r_2(e_2)⟧ = {e_i | (e_i, r_1, e_1) ∈ K ∧ (e_i, r_2, e_2) ∈ K}.
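To make this notation concrete, here is a minimal sketch (over a hypothetical toy graph; the entities and relations are illustrative, not actual Wikidata items) of how the three query types denote answer sets:

```python
# A knowledge graph as a set of labeled edges (e1, r, e2).
KG = {
    ("TheLouvre", "LocatedIn", "Paris"),
    ("MuseeDOrsay", "LocatedIn", "Paris"),
    ("EiffelTower", "LocatedIn", "Paris"),
    ("TheLouvre", "InstanceOf", "Museum"),
    ("MuseeDOrsay", "InstanceOf", "Museum"),
    ("EiffelTower", "InstanceOf", "Tower"),
}

def simple(r, e):
    """Denotation of r(e): all e_i such that (e_i, r, e) is in the graph."""
    return {e1 for (e1, rel, e2) in KG if rel == r and e2 == e}

def composition(r2, r1, e):
    """Denotation of r2(r1(e)): apply r2 to every entity in the denotation of r1(e)."""
    return {a for mid in simple(r1, e) for a in simple(r2, mid)}

def intersection(r1, e1, r2, e2):
    """Denotation of r1(e1) ∩ r2(e2)."""
    return simple(r1, e1) & simple(r2, e2)

print(simple("LocatedIn", "Paris"))
# {'TheLouvre', 'MuseeDOrsay', 'EiffelTower'}
print(intersection("LocatedIn", "Paris", "InstanceOf", "Museum"))
# {'TheLouvre', 'MuseeDOrsay'}
```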
Simple Questions
Fig. 2 provides an overview of our semi-automatic procedure for creating simple question examples: (i) We manually define query templates, (ii) automatically populate query templates using K to create queries with a sufficiently large number of answers in K, (iii) automatically identify evidence passages for the answers and filter out noisy examples, (iv) map query templates to question templates to obtain pseudo-language questions, and (v) validate answers and paraphrase pseudo-language questions through crowdsourcing. Next, we describe each of these steps in detail.
Generating query templates We manually select a set of 135 relations R̃ ⊂ R, which will be used in our query templates. We select relations by going through the most frequent relations in Wikidata and choosing ones for which denotations often contain many entities (e.g., ReceivedAward). The full list of relations is provided in App. A. For each relation, we manually write a template that will be used to map queries to pseudo-language questions. For example, the template for ReceivedAward is "Who received the award X?" Some relations are underspecified; for example, LocatedIn can describe the location of buildings, geographical features, and cities. When generating synthetic questions, this leads to vague questions such as "What is located in Paris?". To address this issue, we manually split these into typed relations that specify the semantic type of their answers/denotations. This split is done using the type hierarchy given in Wikidata and the type t of answer entities. We denote typed relations by r_t, and the denotation of r_t(e) comprises all entities of type t returned by r(e).² For example, the entity The Louvre has type cultural organization, and we can map the relevant query template to the pseudo-language question "Which cultural organization is located in Paris?".
Simple query generation We instantiate all possible simple queries using all r ∈ R̃ and entities e in Wikidata. For a relation r (or r_t), we keep the query r(e) iff |⟦r(e)⟧| ≥ 5. We denote this set of instantiated simple queries by S, which contains 1,431,268 simple queries.
Finding evidence sentences As our goal is to create an ODQA benchmark, we must verify that every answer is indeed found in our target text corpus. We do this by (a) identifying candidate evidence sentences from Wikipedia, and (b) verifying that they entail the answer, using a Natural Language Inference (NLI) model.
Specifically, every simple query-answer pair can be viewed as a triple (e 1 , r, e 2 ). We use a "distant supervision" approach (Mintz et al., 2009), similar to KELM (Agarwal et al., 2021), and define any sentence in the Wikipedia page of entity e 1 that contains the entity e 2 , or one of its Wikidata aliases, as a candidate evidence sentence (and vice versa in the Wikipedia page of e 2 ). For example, in Fig. 2, the evidence for the triple (BarackObama, ReceivedAward, NobelPeacePrize) appears on the Wikipedia page of Barack Obama, where the phrase Nobel Peace Prize appears.
Aligning Wikipedia sentences to Wikidata can lead to false positives. For example, for the triple (TheGoonies, HasScreenwriter, StevenSpielberg), most mentions of Spielberg in the page TheGoonies are not as a screenwriter. To account for this, we use an off-the-shelf NLI model. 3 For every answer, we consider each candidate evidence sentence along with its two preceding sentences, and check whether they entail the hypothesis phrase describing triple (e 1 , r, e 2 ). We use templates to phrase triples as short declarative sentences ("The Goonies has Steven Spielberg as screenwriter"
). An answer is validated if there is an evidence sentence that entails the triple. Manual analysis shows this process eliminates 70% of false positives (sentences not entailing the triple), while removing only 7.5% of the correct alignments.
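A minimal sketch of this entailment check, assuming the off-the-shelf NLI model named in footnote 3; the decision threshold and the label order are assumptions that should be checked against the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Off-the-shelf NLI model mentioned in footnote 3.
MODEL_NAME = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def entails(premise: str, hypothesis: str, threshold: float = 0.5) -> bool:
    """Return True if the premise entails the hypothesis according to the NLI model."""
    inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # For this model, index 0 is assumed to be "entailment" (check the model card).
    return probs[0].item() >= threshold

# Hypothetical usage: the premise is the candidate evidence sentence plus its two
# preceding sentences; the hypothesis verbalizes the triple with a template.
premise = "Spielberg wrote the story for the film. The Goonies was directed by Richard Donner."
hypothesis = "The Goonies has Steven Spielberg as screenwriter."
print(entails(premise, hypothesis))
```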
Query filtering After finding evidence sentences, we only keep queries for which at least 80% of the answers were validated and whose number of validated answers lies between 5 and 200. The resulting set contains 60,792 simple queries, where each query has a set of validated answers, A, and a set of passages P that contain the identified evidence sentences.⁴ We now describe how simple queries are expanded to complex queries.
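The filtering rule itself reduces to a simple predicate; a sketch with our own variable names:

```python
def keep_query(num_answers: int, num_validated: int) -> bool:
    """Keep a query if at least 80% of its answers were validated and the
    number of validated answers lies between 5 and 200 (inclusive)."""
    if num_answers == 0:
        return False
    return (num_validated / num_answers) >= 0.8 and 5 <= num_validated <= 200
```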
Complex Questions
To increase the diversity of QAMPARI, we automatically expand simple queries to composition and intersection queries, for which answers require reading two passages.
Intersection Intersection queries are generated by finding two simple queries such that the size of the intersection of their denotations is at least 5. To avoid improbable questions such as "Which competition was won by Manchester City and had Manchester City as a participant?", we add a constraint that the denotation of one of the simple queries cannot be a subset of the other. Formally, the set of intersection queries contains all queries r_1(e_1) ∩ r_2(e_2) such that |⟦r_2(e_2)⟧ ∩ ⟦r_1(e_1)⟧| ≥ 5, ⟦r_1(e_1)⟧ ⊄ ⟦r_2(e_2)⟧, and ⟦r_2(e_2)⟧ ⊄ ⟦r_1(e_1)⟧. Pseudo-language questions are generated by heuristically combining the two simple questions, for example "Which television program had Chris Carter as screenwriter and had Frank Spotnitz as screenwriter?". There is no need to perform answer validation since all of the underlying intersecting answers were already validated.
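A sketch of how such query pairs can be enumerated, assuming each validated simple query is stored together with its denotation (answer set):

```python
from itertools import combinations

def intersection_queries(simple_queries):
    """simple_queries: list of (query_id, answer_set) pairs.
    Yields pairs whose intersection has at least 5 answers and where
    neither answer set is a subset of the other."""
    for (q1, a1), (q2, a2) in combinations(simple_queries, 2):
        common = a1 & a2
        if len(common) >= 5 and not a1 <= a2 and not a2 <= a1:
            yield q1, q2, common
```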
Composition To create composition queries, we manually handpick a set of 423 relations R_comp ⊂ R (list in our codebase), in a process similar to simple queries. Then, we generate all the possible composition queries r_2(r_1(e)) such that r_1(e) ∈ S, r_2 ∈ R_comp, and |⟦r_2(r_1(e))⟧| ≥ 5. An example composition query is "What is the height of buildings located in Dubai?".
Unlike intersection queries, in composition queries we need to validate that our new triples (e_i, r_2, e_j), where e_j ∈ ⟦r_1(e)⟧, are indeed supported by Wikipedia sentences. We use the same procedure to find evidence sentences for triples (e_i, r_2, e_j), and consider an answer e_i as validated if both (e_i, r_2, e_j) and (e_j, r_1, e) can be aligned to Wikipedia. We keep all complex queries where 80% of the answers are validated. Finally, we manually define templates for relations in R_comp to generate pseudo-language questions.
Questions from Wikipedia Tables
To further diversify QAMPARI, we create an analogous pipeline for generating simple and composition questions from Wikipedia tables, with more open-ended relations compared to Wikidata. We briefly describe this pipeline.
We look at all Wikipedia tables with title "List of X" that have at least 5 rows, in total, 1,897 tables. We find the "key" column, c key in each table using the table classifier from Talmor et al. (2021), which outputs the column of entities that the table describes. For example, in the table List of nuclear whistle blowers, c key is 'name' and specifies the whistle-blower names. This naturally creates simple questions of the form "Who or what is X?".
Simple questions are expanded to composition questions by looking at non-key columns, c non-key and asking what rows in the table have the value v in column c non-key . For example, what is the value in the column 'Year' for nuclear whistle-blowers.
Questions from Wikipedia are validated using a procedure similar to Wikidata. For each answer entity e, we validate that the Wikipedia page for e contains the relevant words that are part of the name of the table as well as the value (for composition questions), and only keep questions where 80% of the table rows are validated and the number of validated answers is at least 5. Overall, we generate 170 simple questions and 6,036 composition questions using this process.
Data Split
We provide a training set with QAMPARI, whose goal is to teach the model to handle multi-answer questions. However, we do not want the model to use the training set to memorize how particular Wikidata relations map to text patterns, as our goal is to test language understanding regardless of Wikidata relations.
Consequently, we perform a relation split, randomly splitting the set R̃ into two equally-sized sets R̃_train and R̃_test. Simple queries are assigned to the train/test set based on their relation, composition queries r_2(r_1(e)) are assigned to the test set iff either r_1 or r_2 is in R̃_test, and intersection queries r_1(e_1) ∩ r_2(e_2) are placed in the test set iff both r_1 and r_2 are in R̃_test.
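A sketch of this assignment rule, assuming each query records the relations it uses:

```python
import random

def split_relations(relations, seed=0):
    """Randomly split the relation set into two equally-sized halves."""
    rels = sorted(relations)
    random.Random(seed).shuffle(rels)
    half = len(rels) // 2
    return set(rels[:half]), set(rels[half:])

def assign(query, test_relations):
    """query: dict with 'type' in {'simple', 'composition', 'intersection'}
    and 'relations' (one relation for simple queries, two otherwise)."""
    rels = query["relations"]
    if query["type"] == "simple":
        return "test" if rels[0] in test_relations else "train"
    if query["type"] == "composition":
        # Test if either relation is held out.
        return "test" if any(r in test_relations for r in rels) else "train"
    # Intersection: test only if both relations are held out.
    return "test" if all(r in test_relations for r in rels) else "train"
```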
At this point, we can create the final train/development/test split (see also Tab. 2). The main bottleneck in our example generation pipeline is validation of the test set through crowdsourcing ( §2.5), since each question requires validating all of the answers in the list. Thus, we pre-determine the test set to contain 1,000 simple questions (830 from Wikidata and 170 from Wikipedia tables) and 1,000 complex questions (400 Wikidata composition questions, 400 Wikidata intersection questions, and 200 Wikipedia tables composition questions). For simple Wikidata questions, we sample 830 questions such that the distribution over relations fromR test is roughly uniform. All Wikipedia tables simple questions are placed in the test set, and for complex questions we randomly sample the pre-determined number from the set of generated questions. Last, the test set is randomly split in half to a development set and test set. We also sub-sample training set examples, such that each relation appears in at most 1,000 examples.
Crowdsourcing
We validate the correctness of development and test examples and paraphrase them into natural language through crowdsourcing.
Correctness validation For every question and answer, we present a crowdsourcing worker with the question, the answer, and links to the Wikipedia page (or pages for complex questions) with the evidence passage. We then ask the worker to check if the question can be answered from the given pages, using the text only (no infoboxes or tables).
As the vast majority of examples are correct, we control for quality by injecting wrong answers in 10% of the cases and rejecting workers that fail to identify those wrong answers. Moreover, we manually verify 5% of examples marked as correct and all examples marked as incorrect, and again reject low-performing workers.
Overall, 24 annotators validated 30,259 answers for an average pay of $12.5 per hour. We find that our process for generating examples is accurate, with 96.6% of the answers validated. Non-validated questions were replaced until 2,000 questions were validated. A question is considered non-validated if its number of distinct answers falls below 5. Snapshots from the presented tasks are in App. B.
Paraphrasing Since our questions are in pseudo-language, we follow past work (Wang et al., 2015) and ask workers to re-phrase the questions in the development/test sets. We restrict this task to workers from the US or the UK who pass a qualification test. We randomly verified half of the paraphrases for each worker for quality assurance.

Tab. 2 provides key statistics on QAMPARI (development and test statistics are aggregated). Test examples in QAMPARI have 13.23 answers on average and a median of 7 answers. This is substantially higher than, e.g., AmbigQA, where the median is 2. Simple questions tend to have more answers than complex questions and are typically shorter than complex questions. Test examples are shorter than training examples since they were rephrased by annotators who used more concise and natural phrasings. Figure 3a shows a binned distribution over the number of answers on the development and test sets. We can see that roughly half of the questions have 8 answers or more, a non-negligible fraction (20%) have more than 15 answers, and 3.5% have more than 50.
Tab. 3 shows the frequency of the top-10 relations in the training and test sets of QAMPARI, as well as the top-10 semantic types of answers, which can be inferred from relation. Mirroring Wikipedia, we observe that the relations and types are focused on people, locations, and the entertainment world.
Manual analysis As mentioned, we use crowdsourcing to validate that gold answers have evidence in Wikipedia. However, Wikipedia can contain additional correct answers that are not in Wikidata/Wikipedia tables. Since manually annotating all correct answers on Wikipedia is virtually impossible, we estimate the frequency of this phenomenon. We sample 50 examples from QAMPARI, and ask an expert annotator to find more correct answers on Wikipedia within 5-10 min. Fig. 3b shows the results of the manual analysis. In 20% of the questions, no additional answers were found, and for more than 60% of the questions, the gold set of answers contains at least half of the answers found by the annotator. Overall, precision estimates of models against the gold set of answers should be taken with a grain of salt.
Experimental Evaluation
We now turn to our experimental evaluation.
Models
ODQA models typically fall into either the retrieve-and-read framework or the 'closed-book' framework (Roberts et al., 2020), where the model provides answers using knowledge encoded in its parameters. Here, we use baseline models from the retrieve-and-read family as they reach state-of-the-art performance and are more parameter-efficient.
Retriever We use BM25 (Robertson and Zaragoza, 2009) to index Wikipedia passages and query the index. As mentioned ( §2), we chunk Wikipedia into passages, each containing consecutive sentences with at most 100 tokens, similar to DPR .
BM25 is a strong sparse retrieval baseline that scores question-passage pairs based on lexical similarity. BM25 is notoriously hard to beat using unsupervised methods (Ram et al., 2022), and obtains respectable performance even compared to supervised methods. We leave training a dense passage retriever for the near future. Specifically, we return from BM25 the top-200 passages for each question.
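As an illustration, retrieval over pre-chunked passages can be sketched with an off-the-shelf BM25 implementation (here the rank_bm25 package; the corpus and tokenization below are placeholders rather than the actual Wikipedia index):

```python
from rank_bm25 import BM25Okapi

# Placeholder corpus: in practice, every Wikipedia passage of at most 100 words.
passages = [
    "Eric Newman is an American film producer.",
    "Children of Men is a 2006 film directed by Alfonso Cuaron.",
    "The Brooklyn Nets drafted several players in 2010.",
]
tokenized = [p.lower().split() for p in passages]
bm25 = BM25Okapi(tokenized)

def retrieve(question: str, k: int = 200):
    """Return the indices of the top-k passages by BM25 score."""
    scores = bm25.get_scores(question.lower().split())
    order = sorted(range(len(passages)), key=lambda i: scores[i], reverse=True)
    return order[:k]

print(retrieve("Who are the directors of movies produced by Eric Newman?", k=2))
```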
Reader We evaluate two encoder-decoder readers, both initialized from T5 (Raffel et al., 2019): a "RAG-like" model that generates answers from each passage independently, and Fusion-in-Decoder (FiD), which encodes multiple passages and decodes a list of answers.
For RAG, the model encodes each of the 200 retrieved passages independently and outputs either "Not Relevant" for no answer, or "Answer: X" for some X. The predicted list of answers is the union of all answers predicted across passages. We train RAG in the following manner. For simple questions with answers A = {a_i}, i = 1, ..., |A|, we take the evidence passage p_i for each a_i and train the model to decode a_i given the passage p_i and the question q. For complex questions, where each answer a_i requires 1-2 passages of evidence, we create a positive example from each of the two evidence passages. Last, to train RAG to emit "Not Relevant", we sample for each positive passage a negative passage p_neg by selecting the top-scoring BM25 passage that is not an evidence passage and does not contain the answer. We then train the model to emit "Not Relevant" given q and p_neg.
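A simplified inference-time sketch of this per-passage reader (the checkpoint name is a placeholder; in practice the model is a T5 model fine-tuned as described above, and the exact input format is an assumption):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")            # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-large").eval()

def rag_style_answers(question: str, passages: list[str]) -> set[str]:
    """Decode an answer (or abstain) from each passage independently and
    return the union of predicted answers."""
    answers = set()
    for passage in passages:
        inputs = tokenizer(f"question: {question} context: {passage}",
                           truncation=True, max_length=512, return_tensors="pt")
        output = model.generate(**inputs, max_length=32)
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        if text.startswith("Answer:"):
            answers.add(text[len("Answer:"):].strip())
        # Passages decoded as "Not Relevant" contribute nothing.
    return answers
```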
FiD uses an encoder to encode each of the retrieved passages (along with the input question) and the decoder attends to the encoded representations to decode a list of answers. Since each example is now an input-output pair, we train with standard maximum likelihood.
FiD is computationally expensive as the decoder attends to a large number of encoded tokens and the generated output is long. Thus, we can only fit the top-50 passages from BM25 using T5-large on a single A100 GPU. We apply teacher forcing (Williams and Zipser, 1989) at training time, i.e., the model is given the gold passages and then the top scoring passages according to BM25. If |P| > 50, the model sees a random sample of size 50 from the set of gold passages.
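The fusion step can be illustrated by encoding each (question, passage) pair separately and letting a single decoder attend to the concatenated encoder states; this is a simplified sketch of the FiD idea rather than the exact implementation used here:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-base")              # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def fid_generate(question: str, passages: list[str], max_answer_len: int = 128) -> str:
    inputs = tokenizer(
        [f"question: {question} context: {p}" for p in passages],
        padding=True, truncation=True, max_length=256, return_tensors="pt",
    )
    with torch.no_grad():
        enc = model.encoder(input_ids=inputs.input_ids,
                            attention_mask=inputs.attention_mask)
    # Concatenate the per-passage encoder states into one long sequence,
    # so the decoder can attend over all passages jointly.
    hidden = enc.last_hidden_state.reshape(1, -1, enc.last_hidden_state.size(-1))
    mask = inputs.attention_mask.reshape(1, -1)
    output = model.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=hidden),
        attention_mask=mask,
        max_length=max_answer_len,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```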
Experimental Setup
As explained in §1, we create QAMPARI as a benchmark to be evaluated alongside other ODQA benchmarks such as NQ. Since QAMPARI is semiautomatically generated, one can develop models tailored for QAMPARI, but our goal should be to have a single model that can perform well across a wide variety of question types. Thus, we train and test models on QAMPARI only, but also in a multi-task setup, where models are trained on both NQ and QAMPARI.
Our main metrics are recall, precision, and F1. Specifically, for a test example (q, P, A) and a predicted set of answers A_pred, recall, precision, and F1 are computed in the typical manner, comparing A and A_pred, while allowing for aliases, that is, a gold answer is considered covered if it or one of its aliases appears as a predicted answer. A model's score is given by averaging recall, precision, and F1 across all examples. To get a sense of the average accuracy across examples, we measure the fraction of examples where F1 is at least 0.5 (%F1≥0.5) and the fraction where recall is at least 0.8 (%Recall≥0.8). We focus on recall and F1 since, as shown in §3, precision is only approximate due to additional answers not covered by Wikidata. For NQ, we use the standard exact match (EM) metric.
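A sketch of these per-example metrics, assuming each gold answer is stored with its set of acceptable aliases; matching predictions to gold answers for precision can be done in several ways, and the variant below is only one plausible choice:

```python
def example_scores(gold: dict[str, set[str]], predicted: set[str]):
    """gold maps each gold answer to its set of aliases; predicted is the
    model's answer set. Returns (recall, precision, f1) for one example."""
    pred_norm = {p.lower().strip() for p in predicted}
    # A gold answer is covered if it or one of its aliases was predicted.
    covered = sum(
        1 for answer, aliases in gold.items()
        if any(a.lower().strip() in pred_norm for a in aliases | {answer})
    )
    # A prediction is counted as correct if it matches any gold answer or alias.
    matched_preds = sum(
        1 for p in pred_norm
        if any(p in {a.lower().strip() for a in aliases | {answer}}
               for answer, aliases in gold.items())
    )
    recall = covered / len(gold) if gold else 0.0
    precision = matched_preds / len(pred_norm) if pred_norm else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1
```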
We evaluate the retriever with RECALL@K, that is, the fraction of answers that appear in the top-K retrieved passages, averaged across examples. This metric comes in two variants: (a) Answer RECALL@K (ARECALL@K): for every gold answer, we check whether it or one of its aliases appears in the top-K retrieved passages. This is a loose metric, since an answer can appear even if the evidence does not support it; (b) Evidence RECALL@K (ERECALL@K): since we have evidence paragraphs for every answer, we consider for every gold answer the fraction of its evidence passages that are in the top-K retrieved passages. This is a strict metric, since an answer can sometimes be answered by passages other than the ones we identified.
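Both retrieval metrics can be sketched as follows, using lowercased string containment as a stand-in for the actual matching of answers, aliases, and evidence passages:

```python
def arecall_at_k(gold_aliases: list[set[str]], retrieved: list[str], k: int) -> float:
    """Fraction of gold answers whose surface form (or an alias) appears in
    at least one of the top-k retrieved passages."""
    top = [p.lower() for p in retrieved[:k]]
    hit = sum(1 for aliases in gold_aliases
              if any(alias.lower() in p for alias in aliases for p in top))
    return hit / len(gold_aliases) if gold_aliases else 0.0

def erecall_at_k(evidence_ids: list[set[str]], retrieved_ids: list[str], k: int) -> float:
    """For every gold answer, the fraction of its evidence passages that are
    among the top-k retrieved passages, averaged over answers."""
    top = set(retrieved_ids[:k])
    fracs = [len(ev & top) / len(ev) for ev in evidence_ids if ev]
    return sum(fracs) / len(fracs) if fracs else 0.0
```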
Results
Tab. 5 shows test retrieval results of BM25 on QAMPARI. We observe that even given 200 retrieved passages, average recall is only 55.2-62.7. Moreover, for low values of K, average recall is quite low (17.4-25.9 for K=10, 28.8-38.1 for K=25), and there is still substantial improvement from K=100 to K=200. These results illustrate the challenge in retrieving a large number of relevant passages into the "beam" of retrieved results. Tab. 4 shows test results on QAMPARI. Overall, performance on QAMPARI is low, where %F1≥0.5 is at most 20.1 (FiD-Large, trained on QAMPARI only (QO)) and %Recall≥0.8 is at most 30.9 (RAG-Large, multi-task (MT) training). When training on both NQ and QAMPARI (MT), results on QAMPARI are lower than on NQ (despite the more permissive evaluation metric), illustrating the challenge in answering multi-answer questions.
RAG is recall-oriented, while FiD is precision-oriented. This can be attributed to the number of negative passages at test time, which is larger than at training time for RAG, and also to RAG taking 200 passages as input, while FiD only takes 50. F1 for both models is around 25-27, and %Recall≥0.8 is extremely low for FiD (3.9% at most).
The MT setup benefits RAG, but not FiD. We hypothesize this is because training on NQ acts as a regularizer that reduces the number of predicted answers.
Analysis
Question type analysis We break down by question type the test performance of both FiD-Large and RAG-Large trained on QAMPARI only (Tab. 6). The mean number of answers predicted by RAG is dramatically higher than FiD (30.5 vs 5.1), which agrees with our observation that RAG is recall-oriented, while FiD is precision oriented.
Surprisingly, for FiD, performance on simple questions is lower than performance on complex questions, and specifically, intersection questions seem easiest. Possible explanations are: (a) simple questions have on average more answers (see Tab. 2), which makes them harder, and (b) Models can predict the right answer, even with only one of the evidence passages, due to either "shortcuts" (Chen and Durrett, 2019), or knowledge encoded in the parameters (Longpre et al., 2021).
Unlike FiD, the performance of RAG on Wikidata composition questions is lower than on simple questions, potentially because it cannot reason over multiple passages and can only emit answers from each passage independently. Specifically, the recall of RAG on composition questions is less than half its recall on intersection questions. An analogous analysis for the MT setup is given in App. C (Tab. 8).

Model precision As mentioned (§3), precision is a lower bound due to potential additional correct answers on Wikipedia. To estimate the effect of this, we randomly sampled 30 questions from the development set and manually computed "true" precision by checking whether answers are correct on Wikipedia. For FiD-Large (QO), estimated precision on this set is 36.0%, while true precision is 67.3%. For RAG-Large (MT), estimated precision is 21.6%, but true precision is 42.1%. This demonstrates that precision should be used to rank models, but not as an absolute measure of true precision.
Related work
Datasets Work on ODQA has mostly focused on datasets where an answer is a single phrase from a single passage, such as NaturalQuestions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013), CuratedTREC (Baudiš and Šedivý, 2015), SQuAD (Rajpurkar et al., 2016), and EntityQuestions (Sciavolino et al., 2021). Multi-hop datasets, such as HotpotQA (Yang et al., 2018), WikiHop (Welbl et al., 2018), and Multi-evidence FEVER (Thorne et al., 2018) output a single phrase but require reasoning over more than one passage. AmbigQA deals with questions that have multiple answers due to ambiguity. QAMPARI has questions with many answers and thus requires retrieving a much larger number of passages and generating answers from them. WikiNLDB (Thorne et al., 2021) tests the ability of models to reason over sets of facts. However, retrieval is restricted to at most 1,000 facts, which are model-generated. Here, our corpus is Wikipedia, and we retrieve much longer passages, which makes the setup more realistic, and retrieval and generation more challenging.
Models
The retrieve-and-read paradigm is currently the prevailing approach in ODQA, due to its ability to scale to large corpora (Chen et al., 2017;Yang et al., 2019;Sachan et al., 2021). However, when the number of evidence passages is large, retrieve-and-read models need to fetch all relevant passages to generate the correct answer. An alternative approach is to use closed-book models, where information is encoded in the model parameters (Roberts et al., 2020;Tay et al., 2022), however this entails using very high-capacity models. Last, a less explored model family that is potentially suitable for large answer sets are virtual knowledge-bases, which encode a corpus into a differentiable knowledge-base that is amenable for retrieval and logical operations (Sun et al., 2021;Dhingra et al., 2020).
Conclusion
We presented QAMPARI, an ODQA benchmark which focuses on the ability of models to handle questions that have many answers and thus require reading multiple text passages. QAMPARI is semi-automatically generated, where examples are generated from Wikidata and Wikipedia tables, and manual work is done only to prepare pseudo-language templates, to validate examples, and to re-phrase questions. We evaluate strong baselines on QAMPARI and show that the need to retrieve a large number of passages and generate long lists is challenging for state-of-the-art models from the retrieve-and-read family.
We view multi-answer questions as an integral part of the ODQA problem that has thus far been neglected, and invite the research community to develop models that can simultaneously answer a wide range of question types, including single- and multi-answer questions.
A Simple Relations
In Tab. 7, we list all 135 relations used to create our simple questions. The 423 relations used to create our composition questions can be found in our codebase.
B Crowdsourcing Validation
C Analysis of RAG-Large
Table 8 breaks down the performance of RAG-Large and FiD-Large in the multi-task setup.
D Development Set Results
In Tab. 9 we present results analogous to those in Tab. 4 for the development set.
Figure 1: An example from QAMPARI with a generated question q ("Who are the directors of movies produced by Eric Newman?"), a subset of its evidence Wikipedia passages (left, p_i) and the answers they lead to.
Figure 2: An overview of example generation for simple questions.
Figure 3: Left: binned distribution of the number of answers per example. Right: results of manual analysis over 50 examples for whether Wikipedia contains more correct answers that are absent from Wikidata. We show in each bin the growth factor in the number of correct answers that were found w.r.t. the size of the gold set.
Statistics
QAMPARI contains 63,911 examples with 1,000 examples in the development set, 1,000 in the test set, and the rest in the training set. Tab. 1 shows examples for all sources and question types.
Fig. 4 shows two screenshots of the task crowdsourcing workers performed.
Figure 4: Screenshots from the crowdsourcing task.
Table 1: Example questions and one representative answer for all data sources and question types.
[Figure 2 panel text: (I) query/question template generation, e.g., ReceivedAward(X) -> "Who received award X?"; (II) simple query generation, e.g., ReceivedAward(NobelPeacePrize) -> 1. UN, 2. EU, ..., N: Barack Obama; (III) finding evidence, e.g., "Nine months later, he was named the 2009 Nobel Peace Prize [...]"; (IV) pseudo-language question: "Who received the award Nobel Peace Prize?"; (V) paraphrase + fact verification: "Who are all the Nobel Peace Prize recipients?"]
Table 2: Key statistics of QAMPARI by question type and data source (WD for Wikidata, WP for Wikipedia tables, Comp for composition). Test statistics are an aggregation over development and test sets.

Table 3: The 10 most frequent relations and semantic types in QAMPARI.
Train relations          | Test relations           | Semantic type
Cast member (11.1%)      | Directors (13.1%)        | Human (27.6%)
Performers (9.8%)        | Screenwriters (10.7%)    | Creative work (24.2%)
Location (9.4%)          | Producers (6.9%)         | Film (11.8%)
Part of (8.3%)           | Education place (6.4%)   | Spatial entity (7.1%)
Publication date (5.3%)  | Winners (5.2%)           | Competition (5.8%)
Place of birth (4.6%)    | Owners (4.7%)            | T.V series (2.9%)
Dates of birth (4.6%)    | Is a (4.6%)              | Album (2.9%)
Teams (3.7%)             | Composers (4.5%)         | Person (2.6%)
Directors (3.71%)        | Country of origin (4.1%) | Building (1.4%)
Sport played (2.8%)      | Main group (3.6%)        | Human settlement (1.4%)
Table 4: Model performance on the QAMPARI test set. (QO): trained on QAMPARI only. (MT): multi-task training with NQ. We also provide FiD results on the NQ test set.
Table 5: BM25 retriever test results.
BM25  | ARECALL@K | ERECALL@K
K=10  | 25.9      | 17.4
K=25  | 38.1      | 28.8
K=50  | 46.8      | 38.9
K=100 | 54.7      | 47.7
K=200 | 62.7      | 55.2
Table 6: Question type analysis of RAG-Large and FiD-Large, trained in the QO setup. (WD): Wikidata, (WP): Wikipedia tables.
QAMPARI              | Recall | Precision | F1   | %F1≥0.5 | %Recall≥0.8 | Mean # answers
FiD  WD Simple       | 17.6   | 29.4      | 20.0 | 12.5    | 2.2         | 5.55
FiD  WD Intersection | 34.9   | 48.5      | 39.0 | 40.0    | 11.5        | 5.75
FiD  WD Composition  | 22.1   | 40.6      | 27.2 | 22.2    | 5.1         | 3.83
FiD  WP Simple       | 5.5    | 22.7      | 8.2  | 2.4     | 0.0         | 4.45
FiD  WP Composition  | 24.0   | 38.5      | 27.6 | 25.2    | 6.1         | 4.96
RAG  WD Simple       | 48.4   | 18.4      | 23.3 | 12.7    | 29.4        | 36.32
RAG  WD Intersection | 81.0   | 25.3      | 35.4 | 23.8    | 73.1        | 35.32
RAG  WD Composition  | 35.3   | 18.2      | 19.9 | 7.7     | 8.3         | 27.56
RAG  WP Simple       | 25.8   | 17.5      | 18.9 | 9.4     | 7.1         | 37.09
RAG  WP Composition  | 48.2   | 19.0      | 23.6 | 10.5    | 27.3        | 33.36
Table 7: Simple relations.
Table 8: Question type analysis of RAG-Large and FiD-Large, trained in the MT setup. (WD): Wikidata, (WP): Wikipedia tables.
QAMPARI              | Recall | Precision | F1   | %F1≥0.5 | %Recall≥0.8 | Mean # answers
FiD  WD Simple       | 15.1   | 28.1      | 18.0 | 9.5     | 1.0         | 4.86
FiD  WD Intersection | 30.8   | 46.0      | 35.3 | 33.0    | 7.0         | 5.57
FiD  WD Composition  | 20.6   | 41.5      | 25.8 | 21.2    | 1.6         | 3.64
FiD  WP Simple       | 5.1    | 25.1      | 7.7  | 2.3     | 0.0         | 3.30
FiD  WP Composition  | 19.5   | 31.7      | 22.8 | 21.2    | 4.0         | 4.57
RAG  WD Simple       | 50.0   | 18.0      | 23.1 | 11.7    | 29.7        | 37.96
RAG  WD Intersection | 80.7   | 27.6      | 37.6 | 29.5    | 73.1        | 31.21
RAG  WD Composition  | 39.2   | 26.5      | 26.8 | 18.2    | 11.6        | 22.14
RAG  WP Simple       | 30.5   | 14.1      | 17.0 | 2.4     | 8.2         | 52.96
RAG  WP Composition  | 49.4   | 20.8      | 25.9 | 17.9    | 29.5        | 30.58
Table 9: Development results for all models on QAMPARI. (QO): trained on QAMPARI only. (MT): multi-task training with NQ.
QAMPARI       | Recall | Precision | F1   | %F1≥0.5 | %Recall≥0.8
RAG-Base  QO  | 43.6   | 17.7      | 21.3 | 9.6     | 25.9
RAG-Base  MT  | 45.0   | 20.6      | 23.8 | 11.5    | 26
RAG-Large QO  | 45.6   | 19.5      | 23.8 | 11.9    | 26.9
RAG-Large MT  | 47.8   | 20.3      | 25.0 | 13.3    | 27.7
FiD-Base  QO  | 14.1   | 29.6      | 17.6 | 12      | 0.9
FiD-Base  MT  | 12.6   | 30.7      | 16.4 | 10.0    | 0.9
FiD-Large QO  | 19.6   | 33.0      | 22.5 | 17.7    | 3.1
FiD-Large MT  | 18.2   | 34.0      | 21.9 | 16.6    | 3.1
Wikipedia dump: 2021-08-01
This can be written as r(e) ∩ Type(t), as Type is a Wikidata relation.
https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
We keep a single evidence passage for every triple.
Acknowledgements
We want to thank Omer Bigi Amouyal, Levana Amouyal and Joseph McCrum for their help with the annotation verification process. We also want to thank Ori Ram for his helpful comments. This research was supported in part by The Yandex Initiative for Machine Learning, and The European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant ERC DELPHI 802800).
Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of NAACL-HLT 2021, pages 3554-3565, Online. Association for Computational Linguistics.
Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the YodaQA system. In Proceedings of the 6th International Conference on Experimental IR Meets Multilinguality, Multimodality, and Interaction (CLEF 2015), pages 222-228, Berlin, Heidelberg. Springer-Verlag.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering system. In Proceedings of EMNLP 2002, pages 257-264. Association for Computational Linguistics.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of ACL 2017, Volume 1: Long Papers, pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
Jifan Chen and Greg Durrett. 2019. Understanding dataset design choices for multi-hop reasoning. In Proceedings of NAACL-HLT 2019, Volume 1 (Long and Short Papers), pages 4026-4032, Minneapolis, Minnesota. Association for Computational Linguistics.
Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. CoRR, abs/2002.10640.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of EACL 2021, Main Volume, pages 874-880, Online. Association for Computational Linguistics.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of ACL 2017, Volume 1: Long Papers, pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of EMNLP 2020, pages 6769-6781, Online. Association for Computational Linguistics.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of ACL 2019, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.
Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of EMNLP 2021, pages 7052-7063.
Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of EMNLP 2020, pages 5783-5797, Online. Association for Computational Linguistics.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.
SQuAD: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, 10.18653/v1/D16-1264Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Learning to retrieve passages without supervision. Ori Ram, Omer Shachaf, Jonathan Levy, Amir Berant, Globerson, North American Association for Computational Linguistics (NAACL). Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022. Learning to retrieve pas- sages without supervision. In North American Asso- ciation for Computational Linguistics (NAACL).
How much knowledge can you pack into the parameters of a language model. Adam Roberts, Colin Raffel, Noam Shazeer, 10.18653/v1/2020.emnlp-main.437Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsAdam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the param- eters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.
The probabilistic relevance framework: BM25 and beyond. Stephen Robertson, Hugo Zaragoza, 10.1561/1500000019Found. Trends Inf. Retr. 34Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and be- yond. Found. Trends Inf. Retr., 3(4):333-389.
End-to-end training of multi-document reader and retriever for open-domain question answering. Devendra Singh Sachan, Siva Reddy, William L Hamilton, Chris Dyer, Dani Yogatama, Advances in Neural Information Processing Systems. Devendra Singh Sachan, Siva Reddy, William L. Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. In Advances in Neural Information Processing Sys- tems.
Simple entity-centric questions challenge dense retrievers. Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, Danqi Chen, 10.18653/v1/2021.emnlp-main.496Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational LinguisticsChristopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric ques- tions challenge dense retrievers. In Proceedings of the 2021 Conference on Empirical Methods in Natu- ral Language Processing, pages 6138-6148, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Reasoning over virtual knowledge bases with open predicate relations. Haitian Sun, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W Cohen, abs/2102.07043CoRRHaitian Sun, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, and William W. Cohen. 2021. Rea- soning over virtual knowledge bases with open pred- icate relations. CoRR, abs/2102.07043.
Mul-timodal{qa}: complex question answering over text, tables and images. Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, Jonathan Berant, International Conference on Learning Representations. Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Han- naneh Hajishirzi, and Jonathan Berant. 2021. Mul- timodal{qa}: complex question answering over text, tables and images. In International Conference on Learning Representations.
Transformer memory as a differentiable search index. Yi Tay, Q Vinh, Mostafa Tran, Jianmo Dehghani, Dara Ni, Harsh Bahri, Zhen Mehta, Kai Qin, Zhe Hui, Jai Zhao, Gupta, arXiv:2202.06991arXiv preprintYi Tay, Vinh Q Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer mem- ory as a differentiable search index. arXiv preprint arXiv:2202.06991.
FEVER: a large-scale dataset for fact extraction and VERification. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal, 10.18653/v1/N18-1074Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
Database reasoning over text. James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, Alon Halevy, 10.18653/v1/2021.acl-long.241Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineAssociation for Computational Linguistics1James Thorne, Majid Yazdani, Marzieh Saeidi, Fab- rizio Silvestri, Sebastian Riedel, and Alon Halevy. 2021. Database reasoning over text. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 3091-3104, Online. Association for Computational Linguistics.
The TREC-8 question answering track. Ellen M Voorhees, Dawn M Tice, Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00). the Second International Conference on Language Resources and Evaluation (LREC'00)Athens, GreeceEuropean Language Resources Association (ELRAEllen M. Voorhees and Dawn M. Tice. 2000. The TREC-8 question answering track. In Proceed- ings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources As- sociation (ELRA).
Wikidata: a free collaborative knowledgebase. Denny Vrandečić, Markus Krötzsch, Communications of the ACM. 5710Denny Vrandečić and Markus Krötzsch. 2014. Wiki- data: a free collaborative knowledgebase. Commu- nications of the ACM, 57(10):78-85.
Building a semantic parser overnight. Yushi Wang, Jonathan Berant, Percy Liang, Association for Computational Linguistics (ACL). Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. In Associa- tion for Computational Linguistics (ACL).
Constructing datasets for multi-hop reading comprehension across documents. Johannes Welbl, Pontus Stenetorp, Sebastian Riedel, 10.1162/tacl_a_00021Transactions of the Association for Computational Linguistics. 6Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transac- tions of the Association for Computational Linguis- tics, 6:287-302.
A learning algorithm for continually running fully recurrent neural networks. J Ronald, David Williams, Zipser, Neural computation. 12Ronald J Williams and David Zipser. 1989. A learn- ing algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270- 280.
End-to-end open-domain question answering with bertserini. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin, arXiv:1902.01718arXiv preprintWei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718.
HotpotQA: A dataset for diverse, explainable multi-hop question answering. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, Christopher D Manning, 10.18653/v1/D18-1259Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsZhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answer- ing. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
| [] |
[
"Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring",
"Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring"
] | [
"Linzi Xing lzxing@cs.ubc.ca \nDepartment of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada\n",
"Giuseppe Carenini carenini@cs.ubc.ca \nDepartment of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada\n"
] | [
"Department of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada",
"Department of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada"
] | [
"Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue"
] | Dialogue topic segmentation is critical in several dialogue modeling problems. However, popular unsupervised approaches only exploit surface features in assessing topical coherence among utterances. In this work, we address this limitation by leveraging supervisory signals from the utterance-pair coherence scoring task. First, we present a simple yet effective strategy to generate a training corpus for utterance-pair coherence scoring. Then, we train a BERT-based neural utterance-pair coherence model with the obtained training corpus. Finally, such model is used to measure the topical relevance between utterances, acting as the basis of the segmentation inference 1 . Experiments on three public datasets in English and Chinese demonstrate that our proposal outperforms the state-of-the-art baselines. | null | [
"https://www.aclanthology.org/2021.sigdial-1.18.pdf"
] | 235,422,212 | 2106.06719 | 3b23815297bec2b064f9574f62836ca13a65fa48 |
Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring
July 29-31, 2021
Linzi Xing lzxing@cs.ubc.ca
Department of Computer Science
University of British Columbia Vancouver
V6T 1Z4BCCanada
Giuseppe Carenini carenini@cs.ubc.ca
Department of Computer Science
University of British Columbia Vancouver
V6T 1Z4BCCanada
Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Dialogue topic segmentation is critical in several dialogue modeling problems. However, popular unsupervised approaches only exploit surface features in assessing topical coherence among utterances. In this work, we address this limitation by leveraging supervisory signals from the utterance-pair coherence scoring task. First, we present a simple yet effective strategy to generate a training corpus for utterance-pair coherence scoring. Then, we train a BERT-based neural utterance-pair coherence model with the obtained training corpus. Finally, such model is used to measure the topical relevance between utterances, acting as the basis of the segmentation inference 1 . Experiments on three public datasets in English and Chinese demonstrate that our proposal outperforms the state-of-the-art baselines.
Introduction
Dialogue Topic Segmentation (DTS), as a fundamental task of dialogue modeling, has received considerable attention in recent years. In essence, DTS aims to reveal the topic structure of a dialogue by segmenting the dialogue session into its topically coherent pieces. An example is given in Table 1. Topic transitions happen after Turn-4 and Turn-6, where the topic correspondingly switches from "the requirement of the insurance coverage" to "the information presented on the insurance card", and then to "the way of submitting the insurance card". Dialogue topic segmentation plays a vital role in a variety of downstream dialogue-related NLP tasks, such as dialogue generation, summarization (Bokaei et al., 2016) and response prediction (Xu et al., 2021).
Turns  Dialogue Text
Turn-1: A: For how long should the liability insurance coverage remain in effect?
Turn-2: B: As long as the registration of your vehicle remains valid.
Turn-3: A: Does this apply for motorcycles too?
Turn-4: B: There are some exceptions for motorcycles.
Turn-5: A: Regarding the name on my vehicle registration application and the one on the Insurance Identification Card, do they need to be the same?
Turn-6: B: Yes, the names must match in both documents.
Turn-7: A: Can I submit copies or faxes of my Insurance Identification Card to the DMV?
Turn-8: B: Yes, you can. But take into consideration that the card will be rejected if the DMV barcode reader can not scan the barcode.

Table 1: A dialogue topic segmentation example sampled from Doc2Dial (Feng et al., 2020). This dialogue is segmented into three topically coherent units (utterances in the same color are about the same topic).

Different from the monologue topic segmentation (MTS) task (Koshorek et al., 2018; Xing et al., 2020), the shortage of labeled dialogue corpora has always been a very serious problem for DTS. Collecting annotations about topic shifting between the utterances of dialogues is highly expensive and time-consuming. Hence, most of the proposed labeled datasets for DTS are typically used for model evaluation rather than training. They are either small in size (Xu et al., 2021) or artificially generated and possibly noisy (Feng et al., 2020). Because of the lack of training data, most previously proposed methods for DTS follow the unsupervised paradigm. The common assumption behind these unsupervised methods is that the utterances associated with the same topic should be more coherent together than the utterances about different topics (Hearst, 1997; Purver et al., 2006). Hence, effectively modeling the coherence among utterances becomes the key ingredient of a successful DTS model. However, the performances of prior unsupervised DTS models are usually limited since the coherence measurements between utterances are typically based on surface features (e.g., lexical overlap) (Hearst, 1997; Eisenstein and Barzilay, 2008) or word-level semantics (Song et al., 2016; Xu et al., 2021). Even though these features are easy to extract and thus make models more generally applicable, they can only reflect the coherence between utterances in a rather shallow way. More recently, there is work departing from the unsupervised setting by casting DTS as a weakly supervised learning task and utilizing an RL-based neural model as the basic framework (Takanobu et al., 2018). However, while this approach has been at least partially successful on goal-oriented dialogues when provided with predefined in-domain topics, it cannot deal effectively with more general open-domain dialogues.
To alleviate the aforementioned limitations in previous work, in this paper, we still cast DTS as an unsupervised learning task to make it applicable to dialogues from diverse domains and resources. However, instead of merely utilizing shallow features for coherence prediction, we leverage the supervised information from the text-pair coherence scoring task (i.e., measuring the coherence of adjacent textual units (Wang et al., 2017; Xu et al., 2019; Wang et al., 2020)), which can more effectively capture the deeper semantic (topical) relations between them. Due to the absence of supervision, we propose a simple yet effective strategy to generate a training corpus for the utterance-pair coherence scoring task, with paired coherent/non-coherent utterance pairs as datapoints. Then, after applying such a strategy, we use the resulting corpus to train an utterance-pair coherence scoring model with the relative ranking objective (Li, 2011).
In practice, we create a training corpus from large conversational datasets containing real daily communications and covering various topics (proposed in Li et al. (2017) and Wang et al. (2021)). In particular, all the adjacent utterance pairs are first extracted to form the positive sample set. Then, for each positive sample, the corresponding negative samples are generated by replacing the subsequent turn in the positive sample with (1) a non-adjacent turn randomly picked from the same dialogue, and (2) a turn randomly picked from another dialogue talking about another topic. Once the training corpus is ready, we re-purpose the Next Sentence Prediction (NSP) BERT model (Devlin et al., 2019) as the basic framework of our utterance-pair coherence scoring model. After fine-tuning the pretrained NSP BERT on our automatically generated training corpus with the marginal ranking loss, the resulting model can then be applied to produce the topical coherence score for all the consecutive utterance pairs in any given dialogue. Such scores can finally be used for the inference of topic segmentation for that dialogue.
We empirically test the popular TextTiling algorithm (Hearst, 1997) enhanced by the supervisory signal provided by our learned utterance-pair coherence scoring model on two languages (English and Chinese). The experimental results show that TextTiling enhanced by our proposal outperforms the state-of-the-art (SOTA) unsupervised dialogue topic segmenters by a substantial margin on the testing sets of both languages. Finally, in a qualitative analysis, by visualizing the segment predictions of the different DTS segmenters on a sample dialogue, we show that the effectiveness of our proposal seems to come from better capturing topical relations and taking dialogue flows into consideration.
Related Work
Dialogue Topic Segmentation (DTS) Similar to topic segmentation for monologue, dialogue topic segmentation aims to segment a dialogue session into topically coherent units. Therefore, a wide variety of approaches originally proposed for monologue topic segmentation have also been widely applied to conversational corpora. Early approaches, due to the lack of training data, are usually unsupervised and exploit word co-occurrence statistics (Hearst, 1997; Galley et al., 2003; Eisenstein and Barzilay, 2008) or sentences' topical distribution (Riedl and Biemann, 2012; Du et al., 2013) to measure the sentence similarity between turns, so that topical or semantic changes can be detected. More recently, with the availability of large-scale corpora sampled from Wikipedia, by taking the section mark as the ground-truth segment boundary (Koshorek et al., 2018; Arnold et al., 2019), there has been a rapid growth in supervised approaches for monologue topic segmentation, especially neural-based approaches (Koshorek et al., 2018; Badjatiya et al., 2018; Arnold et al., 2019). These supervised solutions are favored by researchers due to their more robust performance and efficiency.
However, compared with monologue documents, dialogues are generally more fragmented and contain many more informal expressions. The discourse relation between utterances is also rather different from that in monologue text. These distinctive features may introduce undesirable noise and cause limited performance when supervised approaches trained on Wikipedia are applied. Since the lack of training data still remains a problem for DTS, unsupervised methods, especially the ones extending TextTiling (Hearst, 1997), are still the mainstream options. For instance, Song et al. (2016) enhanced TextTiling with word embeddings, which better capture the underlying semantics than bag-of-words style features. Later, Xu et al. (2021) replaced word embeddings with BERT as the utterance encoder to produce the input for TextTiling, because pretrained language models like BERT capture more utterance-level dependencies. Also, to avoid a too fragmented topic segmentation, they adjusted the TextTiling algorithm into a greedy manner, which however requires more hyper-parameters and greatly limits the model's transferability. In contrast, here we adopt the original TextTiling to minimize the need of hyper-parameters and use coherence signals for utterances learned from real-world dialogues to make our proposal more suitable for conversational data.
Another line of research explores casting DTS as a topic tracking problem (Khan et al., 2015; Takanobu et al., 2018), with the predefined conversation topics as part of the supervisory signals. Even though they have achieved SOTA performance on the in-distribution data, their reliability on the out-of-distribution data is rather poor. In contrast, our proposal does not require any prior knowledge (i.e., predefined topics) as input, so it is more transferable to out-of-distribution data.
Coherence Scoring Early on, Barzilay and Lapata (2005, 2008) observed that particular patterns of grammatical role transition for entities can reveal the coherence of monologue documents. Hence, they proposed the entity-grid approach by using entity role transitions mined from documents as the features for document coherence scoring. Later, Cervone and Riccardi (2020) explored the potential of the entity-grid approach on conversational data and further proved that it was also suitable for dialogues. However, one key limitation of the entity-grid model is that, by excessively relying on the identification of entity tokens and their corresponding roles, its performance can be reduced by errors from other NLP pre-processing tasks, like coreference resolution, which can be very noisy.
In order to resolve this limitation, researchers have explored scoring a document's coherence by measuring and aggregating the coherence of its adjacent text pairs (e.g., Xu et al. (2019)), with Wang et al. (2017) being the first work demonstrating the strong relation between text-pair coherence scoring and monologue topic segmentation. In particular, they argued that a pair of texts from the same segment should be ranked more coherent than a pair of texts randomly picked from different paragraphs. With this assumption, they proposed a CNN-based model to predict text-pair semantic coherence, and further used this model to directly conduct topic segmentation. In this paper, we investigate how their proposal can be effectively extended to dialogues. Furthermore, we propose a novel method for data generation and model training, so that DTS and coherence scoring can mutually benefit each other.
Methodology
Following most of the previous work, we adopt TextTiling (Hearst, 1997) as the basic algorithm for DTS to predict segment boundaries for dialogues ((b) in Figure 1). Formally, given a dialogue $d$ in the form of a sequence of utterances $\{u_1, u_2, ..., u_k\}$, there are $k-1$ consecutive utterance pairs. An utterance-pair coherence scoring model is applied to all these pairs, yielding a sequence of coherence scores $\{c_1, c_2, ..., c_{k-1}\}$, where $c_i \in [0, 1]$ indicates how topically related the two utterances in the $i$-th pair are. Instead of directly using the coherence scores to infer segment boundaries, a sequence of "depth scores" $\{dp_1, dp_2, ..., dp_{k-1}\}$ is calculated to measure how sharp a valley is by looking at the highest coherence scores $hl(i)$ and $hr(i)$ on the left and right of interval $i$: $dp_i = \frac{hl(i) + hr(i) - 2c_i}{2}$. A higher depth score means the pair of utterances is less topically related. The threshold $\tau$ to identify segment boundaries is computed from the mean $\mu$ and standard deviation $\sigma$ of the depth scores: $\tau = \mu - \frac{\sigma}{2}$. A pair of utterances with a depth score over $\tau$ will be selected to have a segment boundary in between.
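To make the boundary-inference step concrete, here is a minimal Python sketch of the depth-score computation and thresholding described above. The function names are our own illustration (not the authors' released code), and the coherence scores are assumed to come from any utterance-pair scorer.

```python
from statistics import mean, pstdev

def depth_scores(coherence):
    """Depth of the 'valley' at every interval between adjacent utterances.

    coherence[i] is the topical coherence of the pair (u_i, u_{i+1}); the depth
    compares it against the highest peaks to its left and right (one common
    reading of hl(i) and hr(i), taken here to include position i itself).
    """
    depths = []
    for i, c in enumerate(coherence):
        left_peak = max(coherence[: i + 1])
        right_peak = max(coherence[i:])
        depths.append((left_peak + right_peak - 2 * c) / 2)
    return depths

def texttiling_boundaries(coherence):
    """Interval indices whose depth exceeds the mean-minus-half-std threshold."""
    depths = depth_scores(coherence)
    threshold = mean(depths) - pstdev(depths) / 2
    return [i for i, d in enumerate(depths) if d > threshold]

# Low coherence at intervals 2 and 5 is detected as two topic boundaries.
scores = [0.9, 0.88, 0.2, 0.85, 0.9, 0.15, 0.8]
print(texttiling_boundaries(scores))  # -> [2, 5]
```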
Next, we describe our novel training data generation strategy and the architecture of our new utterance-pair coherence scoring model, which are the two key contributions of this paper.
Training Data for Coherence Scoring
We follow previous work (Wang et al., 2017; Xu et al., 2019; Huang et al., 2020) to optimize the utterance-pair coherence scoring model (described in Section 3.2) with marginal ranking loss. Formally, the coherence scoring model $CS$ receives two utterances $(u_1, u_2)$ as input and returns the coherence score $c = CS(u_1, u_2)$, which reflects the topical relevance of this pair of utterances. Due to the lack of corpora labeled with ground-truth coherence scores, we follow the strategy in Wang et al. (2017) to train $CS$ based on pairwise ranking, with ordering relations of coherence between utterance pairs as supervisory signals.
In order to create the training data labeled with coherence ordering relations, we make two assumptions: (1) a pair of adjacent utterances is more likely to be more topically coherent than a pair of non-adjacent utterances that are still in the same dialogue session; (2) a pair of utterances from the same dialogue is more likely to be more topically coherent than a pair of utterances sampled from different dialogues. To formalize the ordering relations, we notate a source dialogue corpus as $C$ and use $u_i^k$ to represent the $i$-th utterance in the dialogue $d_k \in C$. Then the two ordering relations based on the above assumptions can be formulated as:
$CS(u_i^k, u_{i+1}^k) > CS(u_i^k, u_j^k), \quad j \notin \{i-1, i, i+1\}$   (1)

$CS(u_i^k, u_j^k) > CS(u_i^k, u_j^m), \quad k \neq m$   (2)
Since the ranking objective is pairwise, given two utterance pairs, we deem the pair with the higher/lower coherence score as the positive/negative instance. Taking eq. 1 as an example, $(u_i^k, u_{i+1}^k)$ and $(u_i^k, u_j^k)$ are the positive and negative instances, respectively.
Since the generality of the obtained coherence scoring model will significantly impact the robustness of the overall segmentation system, having a proper source dialogue corpus $C$ to generate training data from is a critical step. We believe that an ideal source corpus should satisfy the following key requirements: (1) having a fairly large size; (2) covering as many topics as possible; (3) containing both formal and informal expressions. To test the strength of our proposal in a multilingual setting, we select DailyDialog (yanran.li/dailydialog) and NaturalConv (Wang et al., 2021) (ai.tencent.com/ailab/nlp/dialogue/) for English and Chinese respectively. These two conversational corpora both consist of open-domain conversations about daily topics.

Due to the lack of space, next we will only use DailyDialog as our running example source dialogue corpus $C$ to illustrate the training data generation process for coherence scoring. Given the source corpus DailyDialog, we first collect positive instances by extracting the adjacent utterance pairs which meet the Bi-turn Dialog Flow described in Li et al. (2017). The utterances in this corpus are labeled with dialogue acts including {Questions, Inform, Directives, Commissives}. Among all the possible combinations, Questions-Inform and Directives-Commissives are deemed as basic dialogue act flows which happen regularly during conversations. Once positive instances $P = \{(s_i, t_i^+) \mid i \in N\}$ have been collected, we adopt negative sampling to construct the negative instances for each positive instance by randomly picking:
- $t_i^-$: an utterance not adjacent to $s_i$ but in the same dialogue.
- $\bar{t}_i^-$: an utterance from another dialogue, different from the one containing $s_i$.

These utterances will replace $t_i^+$ in the positive instance to form two negative instances: $(s_i, t_i^-)$ and $(s_i, \bar{t}_i^-)$, where $CS(s_i, t_i^+) > CS(s_i, t_i^-) > CS(s_i, \bar{t}_i^-)$. In order to further enlarge the margins of the coherence relations presented above, we set two constraints. Firstly, $t_i^-$ should be labeled with a dialogue act different from $t_i^+$. Secondly, $\bar{t}_i^-$ should be sampled from a dialogue about a topic different from the dialogue which $t_i^+$ belongs to. Notice that the second corpus, NaturalConv, does not have dialogue act labels, so the instance generation strategies that require dialogue acts are not applicable to it. In particular, positive instances for NaturalConv are simply adjacent utterances, and the additional constraint for creating negative instances, in which $t_i^-$ should be labeled with a dialogue act different from $t_i^+$, cannot be applied either. By applying our novel data generation process, we obtain 91,581 and 599,148 paired pos/neg samples for DailyDialog and NaturalConv respectively. We split them into training (80%), validation (10%) and testing (10%) sets for further model training and evaluation.
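As an illustration of this sampling scheme, the sketch below builds (anchor, positive, same-dialogue negative, cross-dialogue negative) tuples. It assumes a simplified corpus format (each dialogue given as a topic label plus a list of (utterance, dialogue act) turns) and is not the released data-generation script.

```python
import random

DIALOG_FLOWS = {("Questions", "Inform"), ("Directives", "Commissives")}

def build_training_tuples(dialogues):
    """dialogues: list of {"topic": str, "turns": [(utterance, dialogue_act), ...]}."""
    tuples = []
    for dial in dialogues:
        turns = dial["turns"]
        for i in range(len(turns) - 1):
            (s_text, s_act), (t_text, t_act) = turns[i], turns[i + 1]
            if (s_act, t_act) not in DIALOG_FLOWS:
                continue  # keep only positives matching a basic dialogue flow
            # Same-dialogue negative: non-adjacent turn with a different dialogue act.
            same_dial = [u for j, (u, a) in enumerate(turns)
                         if abs(j - i) > 1 and a != t_act]
            # Cross-dialogue negative: a turn from a dialogue about another topic.
            cross_dial = [u for d in dialogues if d["topic"] != dial["topic"]
                          for (u, _) in d["turns"]]
            if same_dial and cross_dial:
                tuples.append((s_text, t_text,
                               random.choice(same_dial), random.choice(cross_dial)))
    return tuples
```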
Utterance-Pair Coherence Scoring Model
As illustrated in Figure 1(a), we choose the Next Sentence Prediction (NSP) BERT (Devlin et al., 2019) (trained for the Next Sentence Prediction task) as the basic framework of our utterance-pair coherence scoring model, due to the similarity of the two tasks: they both take a pair of sentences/utterances as input, and only a topically related sentence should be predicted as the appropriate next sentence. (Instead of NSP BERT, a cross-encoder, we could have also modelled such pairwise scoring with a bi-encoder, which first encodes each utterance independently. We eventually selected the cross-encoder due to the results in Thakur et al. (2021) showing that cross-encoders usually outperform bi-encoders for pairwise sentence scoring.) We first initialize the model with BERT_base, which was pretrained on multi-billion-word publicly available data. At the fine-tuning stage, we expect the model to learn to discriminate the positive utterance pairs from their corresponding negative pairs. More specifically, the positive $(s_i, t_i^+)$ and negative $(s_i, t_i^-)$ instances are fed into the model respectively in the form of $([CLS] \| s_i \| [SEP] \| t_i^{+/-} \| [SEP])$, where $\|$ denotes the concatenation operation for sequences and [CLS], [SEP] are both special tokens in BERT. Following the original NSP BERT training procedure, we also add the position embeddings, segment embeddings and token embeddings of tokens all together to get the comprehensive input for BERT. The NSP BERT is formed by a sequence of transformer encoder layers, where each layer consists of a self-attentive layer and a skip connection layer. Here we use the contextualized representation of [CLS] as the topic-aware embedding to predict how well the two input utterances match in topic. The topical coherence score is estimated by passing the [CLS] representation through another multilayer perceptron (MLP).
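A minimal PyTorch sketch of such a cross-encoder is given below, assuming the Hugging Face transformers library; the pooling choice ([CLS] from the last hidden layer) and the size of the MLP head are our assumptions rather than the authors' exact configuration.

```python
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class CoherenceScorer(nn.Module):
    """Scores how topically coherent a pair of utterances is, in [0, 1]."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.head = nn.Sequential(                     # MLP over the [CLS] vector
            nn.Linear(self.bert.config.hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]              # contextualized [CLS] embedding
        return self.head(cls).squeeze(-1)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = CoherenceScorer()
# The tokenizer builds the [CLS] u1 [SEP] u2 [SEP] input with segment ids.
batch = tokenizer(["Does this apply for motorcycles too?"],
                  ["There are some exceptions for motorcycles."],
                  padding=True, truncation=True, return_tensors="pt")
score = model(**batch)   # tensor of coherence scores, one per pair
```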
To encourage the model to learn to assign a positive instance $(s_i, t_i^+)$ a coherence score $c_i^+$ higher than the score $c_i^-$ of its paired negative instance $(s_i, t_i^-)$, we minimize the following marginal ranking loss:

$L = \frac{1}{N} \sum_{i=1}^{N} \max(0, \eta + c_i^- - c_i^+)$   (3)
where $N$ is the size of the training set and $\eta$ is the margin hyper-parameter tuned on the validation set.
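Given paired scores from such a model, eq. 3 corresponds to a standard margin ranking objective; a hedged sketch of one training step follows (batching and variable names are illustrative).

```python
import torch

def margin_ranking_step(model, pos_batch, neg_batch, optimizer, eta=1.0):
    """One optimization step of the pairwise ranking loss in eq. 3.

    pos_batch / neg_batch hold tokenized (s_i, t_i^+) and (s_i, t_i^-) pairs;
    the loss pushes each positive score above its paired negative by eta.
    """
    c_pos = model(**pos_batch)
    c_neg = model(**neg_batch)
    loss = torch.clamp(eta + c_neg - c_pos, min=0.0).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```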
Experiments
We comprehensively test our proposal by empirically comparing it with multiple baselines on three datasets in two languages.
Data for Evaluation
DialSeg 711 (Xu et al., 2021): a real-world dataset consisting of 711 English dialogues sampled from two task-oriented multi-turn dialogue corpora: MultiWOZ (Budzianowski et al., 2018) and the Stanford Dialog Dataset (Eric et al., 2017). Topic segments of this dataset come from manual annotation.

Doc2Dial (Feng et al., 2020): this dataset consists of 4,130 synthetic English dialogues between a user and an assistant from the goal-oriented document-grounded dialogue corpus Doc2Dial. The dataset is generated by first constructing the dialogue flow automatically based on the content elements sampled from text sections of the grounding document. Then crowd workers create the utterance sequence based on the obtained artificial dialogue flow. Topic segments of this dataset are extracted based on the text sections of the grounding document where the utterances' information comes from.

ZYS (Xu et al., 2021): a real-world Chinese dataset consisting of 505 conversations recorded during customer service phone calls on banking consultation. Similar to DialSeg 711, gold topic segments of this dataset are manually annotated. More details of the three datasets are in Table 3.
Baselines
We compare our dialogue topic segmenter with the following unsupervised baselines:

Random: Given a dialogue with $k$ utterances, we first randomly sample the number of segment boundaries $b \in \{0, ..., k-1\}$ for this dialogue. Then we determine whether an utterance is the end of a segment with probability $b/k$.

BayesSeg (Eisenstein and Barzilay, 2008): This method models the words in each topic segment as draws from a multinomial language model associated with the segment. Maximizing the observation likelihood of the dialogue yields a lexically-cohesive segmentation.

GraphSeg (Glavaš et al., 2016): This method generates a semantic relatedness graph with utterances as nodes. Segments are then predicted by finding the maximal cliques of the graph.

GreedySeg (Xu et al., 2021): This method greedily determines segment boundaries based on the similarity of adjacent utterances computed from the output of the pretrained BERT sentence encoder.

TextTiling (TeT) (Hearst, 1997): The detailed description of this method can be found in Section 3.

TeT + Embedding (Song et al., 2016): TextTiling enhanced by GloVe word embeddings, by applying word embeddings to compute the semantic coherence for consecutive utterance pairs.

TeT + CLS (Xu et al., 2021): TextTiling enhanced by the pretrained BERT sentence encoder, by using output embeddings of the BERT encoder to compute semantic similarity for consecutive utterance pairs.

TeT + NSP: TextTiling enhanced by the pretrained BERT for Next Sentence Prediction (NSP), by leveraging the output probability to represent the semantic coherence for consecutive utterance pairs.
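As a concrete reading of the Random baseline above, a minimal sketch of its sampling procedure (our own illustration) is:

```python
import random

def random_segmenter(k):
    """Sample a boundary count b, then mark each utterance as a segment end
    with probability b / k (1 = boundary after this utterance, 0 = none)."""
    b = random.randint(0, k - 1)
    return [1 if random.random() < b / k else 0 for _ in range(k)]
```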
Evaluation Metrics
We apply three standard metrics to evaluate the performances of our proposal and the baselines: the $P_k$ error score (Beeferman et al., 1999), WindowDiff (WD) (Pevzner and Hearst, 2002) and the F1 score (macro). $P_k$ and WD are both calculated based on the overlap between ground-truth segments and the model's predictions within a sliding window of a certain size. Since they are both penalty metrics, lower scores indicate better performance. F1 is the standard harmonic mean of precision and recall, with higher scores indicating better performance.
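For reference, these boundary metrics can be computed with NLTK's segmentation utilities; the snippet below is our illustration (boundaries encoded as '1'/'0' strings), not the authors' evaluation script.

```python
from nltk.metrics.segmentation import pk, windowdiff

# '1' marks an utterance that ends a topic segment, '0' otherwise.
reference  = "0001000101"
hypothesis = "0000100101"

# A common choice for the window size k is half the average segment length.
k = max(2, round(len(reference) / (reference.count("1") + 1) / 2))
print("Pk        :", pk(reference, hypothesis, k=k))
print("WindowDiff:", windowdiff(reference, hypothesis, k))
```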
Experimental Setup
We fine-tune the utterance-pair coherence scoring model on BERT_base, which consists of 12 layers with 12 heads in each layer. The hidden dimension of BERT_base is 768. Training is executed with AdamW (Loshchilov and Hutter, 2019) as our optimizer and a scheduled learning rate with warm-up (initial learning rate lr = 2e-5). Model training is done for 10 epochs with batch size 16. The model's performance is monitored over the validation set, and the margin hyper-parameter $\eta$ in eq. 3 is finally set to 1 from the set of candidates {0.1, 0.5, 1, 2, 5}.
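A sketch of this fine-tuning configuration with AdamW and a warm-up schedule is shown below, using the transformers helper; the number of warm-up steps is our own placeholder, as it is not reported above.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def make_optimizer(model, num_training_steps, lr=2e-5, warmup_steps=500):
    """AdamW plus a linear warm-up/decay schedule, matching the reported lr of 2e-5."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=warmup_steps,
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler
```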
Results and Analysis

Table 4 compares the results of the baselines and our proposal on the two English dialogue topic segmentation evaluation benchmarks. The chosen baselines are clustered into the top three sub-tables in Table 4: the random baseline, unsupervised baselines not extended from TextTiling, and unsupervised baselines extended from TextTiling. Overall, our proposal (full) is the clear winner for both testing sets in all metrics. Another observation is that the set of segmenters TeT + X, which have proved effective for monologue topic segmentation, cannot consistently outperform the basic TextTiling on conversational data. The reason may be that the coherence prediction components of such approaches all rely on signals learned from monologue text (e.g., GloVe and pretrained BERT). Due to the grammatical and lexical differences, signals learned from monologues tend to introduce unnecessary noise and limit the effectiveness of unsupervised topic segmenters when applied to dialogues. In contrast, our coherence scoring model trained on the dataset of coherent/non-coherent utterance pairs automatically generated from dialogues performs better than all comparisons by a substantial margin.
Overall, this validates that by effectively using the topical relations of utterances in dialogue corpora, the BERT for next sentence prediction is able to produce coherence scores reflecting to what extent the two input utterances are matched in topic.
To confirm the benefit of taking dialogue flows and topics into account, we also conduct an ablation study by removing either one of these two parts from the training data generation process for coherence scoring. As reported in the bottom sub-table of Table 4, sampling positive/negative utterance pairs ($t_i^+/t_i^-$ in Section 3.1) without using dialogue flows causes a substantial performance drop on both testing sets, while sampling the other negative utterance pair ($\bar{t}_i^-$ in Section 3.1) without taking dialogue topics into consideration seems to have a smaller impact on the trained model's performance. This observation shows that the dialogue flow is a more effective signal than the dialogue topic. One possible explanation is that there are some basic dialogue flows that are commonly followed and generalize across different types of dialogues, while dialogue topics are more specific and vary much more between different dialogue corpora.

Figure 2: Behaviors of four TextTiling-based segmenters on an example dialogue selected from Doc2Dial (Feng et al., 2020). The horizontal axis is the index of intervals in a session, and the vertical axis is the value of the depth score (a higher value means more topically unrelated). The reference and prediction of topic boundaries are marked by blue and red vertical lines respectively. The overlaps of reference and prediction are marked by purple lines.
To further investigate the generality of our proposal for different languages, we train a Chinese coherence scoring model on the training data generated from NaturalConv (in Section 3.1) and use it together with TextTiling to infer segmentation for Chinese dialogues. Table 5 exhibits the performances of our method and the baselines on the testing set ZYS. Since the publicly available implementations for BayesSeg and GraphSeg only support English text as input, they are not included in this comparison. We note that although we observe a pattern similar to English, namely that our method surpasses all the selected baselines, the gains seem to be smaller. While this still validates the reliability of our proposal for languages other than English, explaining this interlingual difference is left as future work. With a proper open-domain dialogue corpus for a particular language, TextTiling can be enhanced by the high-quality topical coherence signals in that language captured by our proposal.
Case Study
To more intuitively analyze the performance of our method and of the baselines, a sample dialogue is presented in Figure 2. First, notice that in models using more advanced features to compute coherence (line charts from top to bottom), the variation of depth scores (see §3) becomes more pronounced, which seems to indicate that the more advanced models learn stronger signals to discriminate topically related and unrelated content. In particular, as shown again on the top-right of Figure 2, the plain TextTiling, which uses TF-IDF to estimate the coherence for utterance pairs, yields depth scores close to each other. With features carrying more complex semantic information, like word embeddings and a BERT encoder pretrained on large-scale textual data, the difference of depth scores becomes more obvious. Remarkably, our utterance-pair coherence scoring model optimized by marginal ranking loss further enlarges the difference. More tellingly, this trend holds in general for all three corpora, as shown quantitatively in Table 6. We can observe that with more advanced features informing coherence computation, the variation of depth scores becomes more pronounced, which indicates that more advanced models can learn stronger signals to discriminate topically related and unrelated content. Remarkably, among all the presented methods, our proposal yields the largest average variance of depth scores across all three testing corpora.
A second key observation is about the benefit of our proposal taking dialogue flows into consideration in the training process. Consider (U7, U8) as an example: the first three segmenters tend to assign a relatively high depth score (low coherence) to this utterance pair due to the very little content overlap between them. However, our method manages to assign this pair the minimal depth score. This is because such an utterance pair is a Questions-Inform in the Dialog Flow; thus, even if there is very limited content in common, the two utterances should still very likely belong to the same topic segment.
Conclusions and Future Work
This paper addresses a key limitation of unsupervised dialogue topic segmenters, namely their inability to model topical coherence among utterances in the dialogue. To this end, we leverage signals learned from a neural utterance-pair coherence scoring model based on fine-tuning NSP BERT. With no data labeled with gold coherence score, we also propose a simple yet effective way to automatically construct a training dataset from any source dialogue corpus. The experimental results on three testing sets in English and Chinese show that our proposal outperforms all the alternative unsupervised approaches.
For the future, although most recent work has built on TextTiling, we plan to explore whether our proposal can also be integrated with other unsupervised topic segmentation methods, like GraphSeg and BayesSeg, rather than just TextTiling. Furthermore, we also plan to explore effective strategies to exploit external commonsense knowledge (e.g., ConceptNet (Speer et al., 2017)) or user characters (Xing and Paul, 2017) in topic segmentation, since they have been shown to be beneficial in dialogue generation (Qiao et al., 2020; Ji et al., 2020b) and summarization (Ji et al., 2020a).
Figure 1: The overview of our proposed dialogue topic segmentation procedure. (a) Fine-tuning the NSP BERT on the training data of utterance-pair coherence scoring generated from the source dialogue corpus C. (b) Leveraging the fine-tuned BERT as the coherence scoring model to predict coherence scores for all the consecutive utterance pairs in a testing dialogue. The TextTiling algorithm is further utilized to infer segment boundaries.
Table 2: Statistics of the two conversational corpora used for coherence scoring training data generation.

Table 2 gives some statistics about them. Different from task-oriented dialogues, open-domain dialogues usually contain more diverse topics and expressions. From Table 2, we can see that both corpora cover multiple topics, and some topics like Politics, Finance and Tech are supposed to have more technical language, while others like Sports, Entertainment and Ordinary Life should include more casual expressions.
Table 3: Statistics of the three dialogue topic segmentation testing sets for model evaluation.
Table 4: The experimental results on two English testing sets: DialSeg 711 (Xu et al., 2021) and Doc2Dial (Feng et al., 2020). ↑/↓ after the name of a metric indicates whether a higher/lower value means better performance. The best performances among the listed methods are in bold.

Method                                DialSeg 711: Pk↓  WD↓   F1↑      Doc2Dial: Pk↓  WD↓   F1↑
GraphSeg (Glavaš et al., 2016)        43.74  44.76  0.537               51.54  51.59  0.403
GreedySeg (Xu et al., 2021)           50.95  53.85  0.401               50.66  51.56  0.406
TextTiling (TeT) (Hearst, 1997)       40.44  44.63  0.608               52.02  57.42  0.539
TeT + Embedding (Song et al., 2016)   39.37  41.27  0.637               53.72  55.73  0.602
Method             Pk↓    WD↓    F1↑
Random             52.79  67.73  0.398
GreedySeg          44.12  48.29  0.502
TextTiling         45.86  49.31  0.485
TeT + Embedding    43.85  45.13  0.510
TeT + CLS          43.01  43.60  0.502
TeT + NSP          42.59  43.95  0.500
Ours               40.99  41.32  0.521

Table 5: The experimental results on the Chinese testing set proposed in Xu et al. (2021). The best performances among the listed methods are in bold.
Table 6: The average variance of depth scores on three testing sets. Highest values are in bold.
References

Mohammad Hadi Bokaei, Hossein Sameti, and Yang Liu. 2016. Extractive summarization of multi-party meetings through discourse segmentation. Natural Language Engineering, 22(1):41-72.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.

Alessandra Cervone and Giuseppe Riccardi. 2020. Is this dialogue coherent? learning from dialogue acts and entities. In Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 162-174, 1st virtual meeting. Association for Computational Linguistics.
Our code, proposed fine-tuned models and data can be found at https://github.com/lxing532/Dialogue-Topic-Segmenter.
We omit the topic categories of these two corpora for space; please refer to the original papers for more details.
Acknowledgments

We thank the anonymous reviewers and the UBC-NLP group for their insightful comments and suggestions. This research was supported by the Language & Speech Innovation Lab of Cloud BU, Huawei Technologies Co., Ltd.
Sebastian Arnold, Rudolf Schneider, Philippe Cudré-Mauroux, Felix A. Gers, and Alexander Löser. 2019. SECTOR: A neural model for coherent topic segmentation and classification. Transactions of the Association for Computational Linguistics, 7:169-184.

Pinkesh Badjatiya, Litton J. Kurisinkel, Manish Gupta, and Vasudeva Varma. 2018. Attention-based neural text segmentation. In Advances in Information Retrieval, pages 180-193, Cham. Springer International Publishing.

Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 141-148, Ann Arbor, Michigan. Association for Computational Linguistics.

Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1-34.

Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1):177-210.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Lan Du, Wray Buntine, and Mark Johnson. 2013. Topic segmentation with a structured topic model. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 190-200, Atlanta, Georgia. Association for Computational Linguistics.

Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 334-343, Honolulu, Hawaii. Association for Computational Linguistics.

Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37-49, Saarbrücken, Germany. Association for Computational Linguistics.

Song Feng, Hui Wan, Chulaka Gunasekara, Siva Patel, Sachindra Joshi, and Luis Lastras. 2020. doc2dial: A goal-oriented document-grounded dialogue dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8118-8128, Online. Association for Computational Linguistics.

Michel Galley, Kathleen R. McKeown, Eric Fosler-Lussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 562-569, Sapporo, Japan. Association for Computational Linguistics.

Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 125-130, Berlin, Germany. Association for Computational Linguistics.

Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.

Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. GRADE: Automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9230-9240, Online. Association for Computational Linguistics.

Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, and Minlie Huang. 2020a. Generating commonsense explanation by extracting bridge concepts from reasoning paths. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 248-257, Suzhou, China. Association for Computational Linguistics.

Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Xiaoyan Zhu, and Minlie Huang. 2020b. Language generation with multi-hop reasoning on commonsense knowledge graph. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 725-736, Online. Association for Computational Linguistics.

O. Z. Khan, Jean-Philippe Robichaud, Paul A. Crook, and R. Sarikaya. 2015. Hypotheses ranking and state tracking for a multi-domain dialog system using multiple ASR alternates. In INTERSPEECH, pages 1810-1814.

Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 469-473, New Orleans, Louisiana. Association for Computational Linguistics.
A short introduction to learning to rank. Hang Li, 10.1587/transinf.E94.D.1854IEICE Transactions on Information and Systems. E94.D(10Hang Li. 2011. A short introduction to learning to rank. IEICE Transactions on Information and Sys- tems, E94.D(10):1854-1862.
Deep reinforcement learning for dialogue generation. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, Jianfeng Gao, 10.18653/v1/D16-1127Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsJiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep rein- forcement learning for dialogue generation. In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1192- 1202, Austin, Texas. Association for Computational Linguistics.
DailyDialog: A manually labelled multi-turn dialogue dataset. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingTaipei, Taiwan1Asian Federation of Natural Language ProcessingYanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manu- ally labelled multi-turn dialogue dataset. In Proceed- ings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Pa- pers), pages 986-995, Taipei, Taiwan. Asian Federa- tion of Natural Language Processing.
Decoupled weight decay regularization. Ilya Loshchilov, Frank Hutter, ICLR. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR.
A critique and improvement of an evaluation metric for text segmentation. Lev Pevzner, Marti A Hearst, 10.1162/089120102317341756Computational Linguistics. 281Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19- 36.
Unsupervised topic modelling for multi-party spoken discourse. Matthew Purver, Konrad P Körding, Thomas L Griffiths, Joshua B Tenenbaum, 10.3115/1220175.1220178Proceedings of the 21st International Conference on Computational Linguistics and 44th. the 21st International Conference on Computational Linguistics and 44thMatthew Purver, Konrad P. Körding, Thomas L. Grif- fiths, and Joshua B. Tenenbaum. 2006. Unsuper- vised topic modelling for multi-party spoken dis- course. In Proceedings of the 21st International Conference on Computational Linguistics and 44th
Annual Meeting of the Association for Computational Linguistics. Sydney, AustraliaAssociation for Computational LinguisticsAnnual Meeting of the Association for Computa- tional Linguistics, pages 17-24, Sydney, Australia. Association for Computational Linguistics.
A sentiment-controllable topic-to-essay generator with topic knowledge graph. Lin Qiao, Jianhao Yan, Fandong Meng, Zhendong Yang, Jie Zhou, 10.18653/v1/2020.findings-emnlp.299Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsLin Qiao, Jianhao Yan, Fandong Meng, Zhendong Yang, and Jie Zhou. 2020. A sentiment-controllable topic-to-essay generator with topic knowledge graph. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 3336-3344, Online. Association for Computational Linguistics.
TopicTiling: A text segmentation algorithm based on LDA. Martin Riedl, Chris Biemann, Proceedings of ACL 2012 Student Research Workshop. ACL 2012 Student Research WorkshopJeju Island, KoreaAssociation for Computational LinguisticsMartin Riedl and Chris Biemann. 2012. TopicTiling: A text segmentation algorithm based on LDA. In Proceedings of ACL 2012 Student Research Work- shop, pages 37-42, Jeju Island, Korea. Association for Computational Linguistics.
Dialogue session segmentation by embedding-enhanced texttiling. Yiping Song, Lili Mou, R Yan, Li Yi, Zinan Zhu, X Hu, M Zhang, IN-TERSPEECH. Yiping Song, Lili Mou, R. Yan, Li Yi, Zinan Zhu, X. Hu, and M. Zhang. 2016. Dialogue session seg- mentation by embedding-enhanced texttiling. In IN- TERSPEECH, page 2706-2710.
Conceptnet 5.5: An open multilingual graph of general knowledge. Robyn Speer, Joshua Chin, Catherine Havasi, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17. the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17AAAI PressRobyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty- First AAAI Conference on Artificial Intelligence, AAAI'17, page 4444-4451. AAAI Press.
A weakly supervised method for topic segmentation and labeling in goal-oriented dialogues via reinforcement learning. Ryuichi Takanobu, Minlie Huang, Zhongzhou Zhao, Fenglin Li, Haiqing Chen, Xiaoyan Zhu, Liqiang Nie, 10.24963/ijcai.2018/612Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18. the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18International Joint Conferences on Artificial Intelligence OrganizationRyuichi Takanobu, Minlie Huang, Zhongzhou Zhao, Fenglin Li, Haiqing Chen, Xiaoyan Zhu, and Liqiang Nie. 2018. A weakly supervised method for topic segmentation and labeling in goal-oriented dialogues via reinforcement learning. In Proceed- ings of the Twenty-Seventh International Joint Con- ference on Artificial Intelligence, IJCAI-18, pages 4403-4410. International Joint Conferences on Ar- tificial Intelligence Organization.
Augmented SBERT: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. Nandan Thakur, Nils Reimers, Johannes Daxenberger, Iryna Gurevych, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNandan Thakur, Nils Reimers, Johannes Daxen- berger, and Iryna Gurevych. 2021. Augmented SBERT: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 296-310, Online. Association for Computa- tional Linguistics.
Learning to rank semantic coherence for topic segmentation. Liang Wang, Sujian Li, Yajuan Lv, Houfeng Wang, 10.18653/v1/D17-1139Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsLiang Wang, Sujian Li, Yajuan Lv, and Houfeng Wang. 2017. Learning to rank semantic coherence for topic segmentation. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing, pages 1340-1344, Copenhagen, Den- mark. Association for Computational Linguistics.
Response selection for multi-party conversations with dynamic topic tracking. Weishi Wang, C H Steven, Shafiq Hoi, Joty, 10.18653/v1/2020.emnlp-main.533Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsWeishi Wang, Steven C.H. Hoi, and Shafiq Joty. 2020. Response selection for multi-party conversations with dynamic topic tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6581-6591, Online. Association for Computational Linguistics.
Naturalconv: A chinese dialogue dataset towards multi-turn topic-driven conversation. Xiaoyang Wang, Chen Li, Jianqiao Zhao, Dong Yu, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Xiaoyang Wang, Chen Li, Jianqiao Zhao, and Dong Yu. 2021. Naturalconv: A chinese dialogue dataset towards multi-turn topic-driven conversation. Pro- ceedings of the AAAI Conference on Artificial Intel- ligence, 35(16):14006-14014.
Improving context modeling in neural topic segmentation. Linzi Xing, Brad Hackinen, Giuseppe Carenini, Francesco Trebbi, Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language ProcessingSuzhou, ChinaAssociation for Computational LinguisticsLinzi Xing, Brad Hackinen, Giuseppe Carenini, and Francesco Trebbi. 2020. Improving context model- ing in neural topic segmentation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Lan- guage Processing, pages 626-636, Suzhou, China. Association for Computational Linguistics.
Incorporating metadata into content-based user embeddings. Linzi Xing, Michael J Paul, 10.18653/v1/W17-4406Proceedings of the 3rd Workshop on Noisy Usergenerated Text. the 3rd Workshop on Noisy Usergenerated TextCopenhagen, DenmarkAssociation for Computational LinguisticsLinzi Xing and Michael J. Paul. 2017. Incorporating metadata into content-based user embeddings. In Proceedings of the 3rd Workshop on Noisy User- generated Text, pages 45-49, Copenhagen, Den- mark. Association for Computational Linguistics.
A cross-domain transferable neural coherence model. Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, Jackie Chi Kit Cheung, 10.18653/v1/P19-1067Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsPeng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, and Jackie Chi Kit Cheung. 2019. A cross-domain transfer- able neural coherence model. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 678-687, Florence, Italy. Association for Computational Linguistics.
Topicaware multi-turn dialogue modeling. Yi Xu, Hai Zhao, Zhuosheng Zhang, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Yi Xu, Hai Zhao, and Zhuosheng Zhang. 2021. Topic- aware multi-turn dialogue modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14176-14184.
| [
"https://github.com/lxing532/"
] |
[
"Prix-LM: Pretraining for Multilingual Knowledge Base Construction",
"Prix-LM: Pretraining for Multilingual Knowledge Base Construction"
] | [
"Wenxuan Zhou \nLUKA Lab\nUniversity of Southern California\nUSA\n",
"Fangyu Liu \nLanguage Technology Lab\nTAL\nUniversity of Cambridge\nUK\n",
"Ivan Vulić \nLanguage Technology Lab\nTAL\nUniversity of Cambridge\nUK\n",
"Nigel Collier \nLanguage Technology Lab\nTAL\nUniversity of Cambridge\nUK\n",
"Muhao Chen muhaoche@usc.edu \nLUKA Lab\nUniversity of Southern California\nUSA\n"
] | [
"LUKA Lab\nUniversity of Southern California\nUSA",
"Language Technology Lab\nTAL\nUniversity of Cambridge\nUK",
"Language Technology Lab\nTAL\nUniversity of Cambridge\nUK",
"Language Technology Lab\nTAL\nUniversity of Cambridge\nUK",
"LUKA Lab\nUniversity of Southern California\nUSA"
] | [
"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics"
] | Knowledge bases (KBs) contain plenty of structured world and commonsense knowledge. As such, they often complement distributional text-based information and facilitate various downstream tasks. Since their manual construction is resource- and time-intensive, recent efforts have tried leveraging large pretrained language models (PLMs) to generate additional monolingual knowledge facts for KBs. However, such methods have not been attempted for building and enriching multilingual KBs. Besides wider application, such multilingual KBs can provide richer combined knowledge than monolingual (e.g., English) KBs. Knowledge expressed in different languages may be complementary and unequally distributed: this implies that the knowledge available in high-resource languages can be transferred to low-resource ones. To achieve this, it is crucial to represent multilingual knowledge in a shared/unified space. To this end, we propose a unified representation model, Prix-LM, for multilingual KB construction and completion. We leverage two types of knowledge, monolingual triples and cross-lingual links, extracted from existing multilingual KBs, and tune a multilingual language encoder XLM-R via a causal language modeling objective. Prix-LM integrates useful multilingual and KB-based factual knowledge into a single model. Experiments on standard entity-related tasks, such as link prediction in multiple languages, cross-lingual entity linking and bilingual lexicon induction, demonstrate its effectiveness, with gains reported over strong task-specialised baselines. | 10.18653/v1/2022.acl-long.371 | [
"https://www.aclanthology.org/2022.acl-long.371.pdf"
] | 239,016,364 | 2110.08443 | 69e4c79f6c58da551883d533c017332d3486a1d4 |
Prix-LM: Pretraining for Multilingual Knowledge Base Construction
Long Papers. May 22-27, 2022
Wenxuan Zhou
LUKA Lab
University of Southern California
USA
Fangyu Liu
Language Technology Lab
TAL
University of Cambridge
UK
Ivan Vulić
Language Technology Lab
TAL
University of Cambridge
UK
Nigel Collier
Language Technology Lab
TAL
University of Cambridge
UK
Muhao Chen muhaoche@usc.edu
LUKA Lab
University of Southern California
USA
Prix-LM: Pretraining for Multilingual Knowledge Base Construction
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 22-27, 2022
Knowledge bases (KBs) contain plenty of structured world and commonsense knowledge. As such, they often complement distributional text-based information and facilitate various downstream tasks. Since their manual construction is resource- and time-intensive, recent efforts have tried leveraging large pretrained language models (PLMs) to generate additional monolingual knowledge facts for KBs. However, such methods have not been attempted for building and enriching multilingual KBs. Besides wider application, such multilingual KBs can provide richer combined knowledge than monolingual (e.g., English) KBs. Knowledge expressed in different languages may be complementary and unequally distributed: this implies that the knowledge available in high-resource languages can be transferred to low-resource ones. To achieve this, it is crucial to represent multilingual knowledge in a shared/unified space. To this end, we propose a unified representation model, Prix-LM, for multilingual KB construction and completion. We leverage two types of knowledge, monolingual triples and cross-lingual links, extracted from existing multilingual KBs, and tune a multilingual language encoder XLM-R via a causal language modeling objective. Prix-LM integrates useful multilingual and KB-based factual knowledge into a single model. Experiments on standard entity-related tasks, such as link prediction in multiple languages, cross-lingual entity linking and bilingual lexicon induction, demonstrate its effectiveness, with gains reported over strong task-specialised baselines.
Introduction
Multilingual knowledge bases (KBs), such as DBpedia (Lehmann et al., 2015), Wikidata (Vrandečić and Krötzsch, 2014), and YAGO (Suchanek et al., 2007), provide structured knowledge expressed in multiple languages. Those KBs are modeled as knowledge graphs (KGs) that possess two types of knowledge: monolingual triples, which describe relations between entities, and cross-lingual links, which match entities across languages. The knowledge stored in such KGs facilitates various downstream applications such as question answering (Dai et al., 2016; Bauer et al., 2018; Wang et al., 2021b), recommendation (Zhang et al., 2016; Wang et al., 2021c), and dialogue systems (Madotto et al., 2018; Yang et al., 2020). Manually constructing large-scale knowledge bases has been labor-intensive and expensive (Paulheim, 2018), leading to a surge of interest in automatic knowledge base construction (Ji et al., 2022). Recent research (Bosselut et al., 2019; Yao et al., 2019; Wang et al., 2020, inter alia) proposes to generate structured knowledge using pretrained language models (PLMs; Devlin et al. 2019), where missing elements in KB facts (i.e., triples) can be completed (i.e., filled in) by the PLM.

* Indicating equal contribution.
While these methods arguably perform well for English, such automatic KB construction has not yet been tried for multilingual KBs; improving the knowledge in multilingual KBs would have a positive impact on applications in languages beyond English. Moreover, KBs in multiple languages may possess complementary knowledge, and knowledge bases in low-resource languages often suffer severely from missing entities and facts. This issue could be mitigated by propagating knowledge from multiple well-populated high-resource languages' KBs (e.g., English and French KBs) to the KBs of low-resource languages, this way 'collectively' improving the content stored in the full multilingual KB. 1 However, training LMs to capture structural knowledge independently for each language will fall short of utilizing complementary and transferable knowledge available in other languages. Therefore, a unified representation model is required, which can capture, propagate and enrich knowledge in multilingual KBs. In this work, we thus propose to train a language model for constructing multilingual KBs. Starting from XLM-R (Conneau et al., 2020) as our base model, we then pretrain it on the multilingual DBpedia, which stores both monolingual triples and cross-lingual links (see Figure 1). We transform both types of knowledge into sequences of tokens and pretrain the language model with a causal LM objective on such transformed sequences. The monolingual triples infuse structured knowledge into the language model, while the cross-lingual links help align knowledge between different languages. This way, the proposed model Prix-LM (Pre-trained Knowledge-incorporated Cross-lingual Language Model) is capable of mapping knowledge of different languages into a unified/shared space. We evaluate our model on four different tasks essential for automatic KB construction, covering both high-resource and low-resource languages: link prediction, cross-lingual entity linking, bilingual lexicon induction, and prompt-based LM knowledge probing. The main results across all tasks indicate that Prix-LM brings consistent and substantial gains over various state-of-the-art methods, demonstrating its effectiveness.

1 This intuition is illustrated by the example in Figure 1. Consider the prediction of facts (e.g., genre) about the oldest Japanese novel The Tale of Genji. English DBpedia records its genre only as Monogatari (story), whereas complementary knowledge can be propagated from the Japanese KB, which provides finer-grained genre information, including Love Story, Royal Family Related Story, and Monogatari.
Prix-LM
We now describe Prix-LM, first outlining the data structure and pretraining task, and then describing its pretraining procedure in full ( §2.1), and efficient inference approaches with Prix-LM ( §2.2).
Pretraining Task. We rely on multilingual DBpedia, but note that Prix-LM is also applicable to other KBs. DBpedia contains two types of structured knowledge: monolingual knowledge triples, and cross-lingual links between entities. The monolingual triples represent (relational) facts expressed in a structured manner. Each triple is denoted as {e 1 , r, e 2 }: the elements of a triple are identified as the subject entity e 1 , relation (or predicate) r, and object entity e 2 , respectively (see also Figure 1 for examples). For instance, the fact "The capital of England is London" can be represented as {England, capital, London}. The cross-lingual links, denoted as {e a , e b }, represent the correspondence of 'meaning-identical' entities e a and e b in two different languages: e.g., the English entity London is mapped to Londres in Spanish.
We treat both types of knowledge using the same input format {s, p, o}, where s = e 1 , p = r, o = e 2 for monolingual knowledge triples, and s = e a , p = null, o = e b for cross-lingual entity links. The pretraining task is then generating o given s and p. This objective is consistent with the link prediction task and also benefits other entity-related downstream tasks, as empirically validated later.
Pretraining Language Models
Prix-LM is initialized by a multilingual PLM such as XLM-R (Conneau et al., 2020): starting from XLM-R's pretrained weights, we train on the structured knowledge from a multilingual KB.
Input Representation. We represent knowledge from the KB as sequences of tokens. In particular, given some knowledge fact {s, p, o}, where each element is the surface name of an entity or a relation, we tokenize 2 the elements to sequences of subtokens X s , X p , and X o . We treat each element in the knowledge fact as a different text segment and concatenate them to form a single sequence.
We further introduce special tokens to represent different types of knowledge:
(1) Monolingual Triples. We use special tokens to indicate the role of each element in the triple, which converts the sequence to the following format:
<s> [S]X s </s> </s> [P]X p </s> </s> [O]X o [EOS]</s>.
<s> is the special token denoting beginning of sequence; </s> is the separator token, both adopted from XLM-R. Additional special tokens [S], [P] and [O] denote the respective roles of subject, predicate, and object of the input knowledge fact.
[EOS] is the end-of-sequence token.
(2) Cross-Lingual Links. As the same surface form of an entity can be associated with more than one language, we use special language tokens to indicate the actual language of each entity. These extra tokens can also be interpreted as the relation between the entities. The processed sequence takes the following format:
<s> [S]X s </s> </s> [P][S-LAN][O-LAN] </s> </s> [O]X o [EOS]</s>.
<s> and </s> are the same as for monolingual triples. [S-LAN] and [O-LAN] denote two placeholders for language tokens, which get replaced by the two-character ISO 639-1 codes of the source and target language, respectively. For example, if the cross-lingual link connects an English entity London to a Spanish entity Londres, the two language tokens [EN][ES] will be appended to the token [P]. The new special tokens are randomly initialized, and optimized during training. The original special tokens are kept and also optimized.
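To make the two linearization schemes concrete, the following is a minimal Python sketch of how a monolingual triple or a cross-lingual link could be rendered into the sequences described above; the helper names and the example surface forms are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the input linearization described above. The special
# tokens [S], [P], [O], [EOS] and the language tokens (e.g. [EN], [ES]) are
# assumed to have been added to the tokenizer vocabulary; <s> and </s> are
# XLM-R's own beginning-of-sequence and separator tokens.

def linearize_triple(subj: str, pred: str, obj: str) -> str:
    """Monolingual knowledge triple -> training sequence."""
    return (f"<s> [S]{subj} </s> </s> [P]{pred} </s> </s> "
            f"[O]{obj} [EOS]</s>")

def linearize_link(subj: str, subj_lang: str, obj: str, obj_lang: str) -> str:
    """Cross-lingual entity link -> training sequence; the two language
    codes play the role of the relation."""
    return (f"<s> [S]{subj} </s> </s> [P][{subj_lang.upper()}][{obj_lang.upper()}] "
            f"</s> </s> [O]{obj} [EOS]</s>")

if __name__ == "__main__":
    print(linearize_triple("England", "capital", "London"))
    print(linearize_link("London", "en", "Londres", "es"))
```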
Training Objective. The main training objective of Prix-LM is to perform completion of both monolingual knowledge triples and cross-lingual entity links (see §2). In particular, given X s and X p , the model must predict 1) X o from monolingual triples (i.e., X p is a proper relation), or 2) X o as the cross-lingual counterpart of X s for cross-lingual pairs (i.e., X p is a pair of language tokens). This task can be formulated as an autoregressive language modeling training objective:
$$\mathcal{L}_{\mathrm{LM}} = - \sum_{x_t \in X_o \cup \{[\mathrm{EOS}]\}} \log P(x_t \mid x_{<t}),$$
where $P(x_t \mid x_{<t})$ is the conditional probability of generating $x_t$ given the previous subtokens. The probability of generating token $x_t$ is calculated from the hidden state $h_{t-1}$ of its previous token in the final layer of the Transformer as follows:
$$P(x_t \mid x_{<t}) = \mathrm{softmax}(W h_{t-1}),$$
where W is a trainable parameter initialized from PLMs for subtoken prediction. Note that this training objective is applied to both monolingual knowledge triples and cross-lingual links as they can both be encoded in the same {s, p, o} format.
Since models like mBERT or XLM-R rely on masked language modeling which also looks 'into the future', subtokens can be leaked by attention. Therefore, we create adaptations to support causal autoregressive training using attention masks (Yang et al., 2019), so that the X o subtokens can only access their previous subtokens. In particular, in the Transformer blocks, given the query Q, key K, and value V , we adapt them to a causal LM:
$$\mathrm{att}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}} + M\right) V,$$
where $Q, K, V \in \mathbb{R}^{l \times d}$; $l$ is the length of the input sequence, $d$ is the hidden size, and $M \in \mathbb{R}^{l \times l}$ is an attention mask, which is set as follows:
$$M_{ij} = \begin{cases} 0 & x_i \notin X_o \cup \{[\mathrm{EOS}]\} \\ 0 & x_i \in X_o \cup \{[\mathrm{EOS}]\},\; j \leq i \\ -\infty & x_i \in X_o \cup \{[\mathrm{EOS}]\},\; j > i \end{cases}$$
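A minimal PyTorch sketch of this prefix-causal mask and the corresponding loss is given below; the index convention (obj_start marking the first X_o subtoken) and the toy shapes are assumptions for illustration, not the released training code.

```python
import torch

# Sketch of the attention mask M defined above: tokens outside X_o (the
# [S]/[P] prefix) attend to all positions, while tokens in X_o and [EOS]
# may only attend to positions at or before their own.

def build_prefix_causal_mask(seq_len: int, obj_start: int) -> torch.Tensor:
    mask = torch.zeros(seq_len, seq_len)
    causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
    mask[obj_start:] = causal[obj_start:]    # rows for X_o / [EOS] become causal
    return mask                              # added to QK^T / sqrt(d) before softmax

def lm_loss(logits: torch.Tensor, input_ids: torch.Tensor, obj_start: int) -> torch.Tensor:
    """Cross-entropy over the object subtokens only (the loss L_LM above):
    token t is predicted from the hidden state at position t-1."""
    pred = logits[obj_start - 1:-1]          # states preceding each X_o / [EOS] token
    target = input_ids[obj_start:]
    return torch.nn.functional.cross_entropy(pred, target)

if __name__ == "__main__":
    seq_len, vocab, obj_start = 8, 100, 5
    print(build_prefix_causal_mask(seq_len, obj_start))
    print(lm_loss(torch.randn(seq_len, vocab), torch.randint(0, vocab, (seq_len,)), obj_start))
```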
Inference
Different downstream tasks might require different types of inference: e.g., while link prediction tasks should rely on autoregressive inference, similarity-based tasks such as cross-lingual entity linking rely on similarity-based inference, that is, finding nearest neighbors in the multilingual space. In what follows, we outline both inference types.
Autoregressive Inference. For link prediction tasks, the test input is in the format {s, p, ?}, where the model is supposed to generate the missing o given s and p. For such tasks, o comes from a known set of candidate entities O. A simple way to perform inference is to construct a candidate tuple {s, p, o} for each o ∈ O and return the one with the minimum LM loss. This straightforward approach requires encoding |O| sequences. However, as |O| can be large for high-resource languages (e.g., 2M items for English), this might yield a prohibitively expensive inference procedure. We thus propose to speed up inference by applying and adapting constrained beam search (Anderson et al., 2017). In a nutshell, instead of calculating the loss on the whole sequence, we generate one subtoken at a time and only keep the most promising sequences in the expansion set for beam search. The generation process ends when we exceed the maximum length of entities.
More precisely, given s and p (or only s when dealing with cross-lingual links), we concatenate them as the initial sequence X 0 and initialize the sequence loss to 0. We then extend the sequence using subtokens from the PLM's vocabulary V. For each subtoken w 1 ∈ V, we create a new sequence {X 0 , w 1 } and add − log P (w 1 |X 0 ) to the sequence loss. For the next round, we only keep the sequences that can be expanded to an entity in the expansion set, and retain at most K sequences with the smallest sequence loss, where K is a hyperparameter. This process is repeated until there are no more candidate sequences to be added to the expansion set. Finally, for any candidate entity o ∈ O, if it has been generated from a corresponding candidate sequence, we set its loss to the total LM loss (sum of sequence losses), otherwise we set its loss to ∞. We then return the entity with the smallest loss. A more formal description of this procedure is summarized in Alg. 1 in the Appendix.
This inference variant only requires encoding at most L · K sequences, where L is the maximum number of subtokens in an entity. It is much more efficient when L · K ≪ |O|, which generally holds for tasks such as link prediction.
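The procedure can be sketched as a beam search over a prefix trie built from the candidate entity set; in the sketch below, score_next stands in for the PLM's next-subtoken log-probabilities, and the toy scorer and token sequences are illustrative assumptions rather than the released implementation.

```python
import math
from typing import Callable, Dict, List, Sequence, Tuple

def build_trie(candidates: Sequence[Sequence[str]]) -> Dict:
    # Prefix trie over candidate-entity token sequences; "<END>" marks a full entity.
    trie: Dict = {}
    for toks in candidates:
        node = trie
        for t in toks:
            node = node.setdefault(t, {})
        node["<END>"] = {}
    return trie

def constrained_beam_search(score_next: Callable[[List[str]], Dict[str, float]],
                            candidates: Sequence[Sequence[str]],
                            beam_size: int = 50) -> Tuple[List[str], float]:
    trie = build_trie(candidates)
    beams = [([], 0.0, trie)]                 # (tokens so far, summed NLL, trie node)
    finished: Dict[Tuple[str, ...], float] = {}
    while beams:
        expansions = []
        for toks, loss, node in beams:
            if "<END>" in node:               # a complete candidate entity was generated
                key = tuple(toks)
                finished[key] = min(finished.get(key, math.inf), loss)
            logp = score_next(toks)
            for tok, child in node.items():
                if tok != "<END>":
                    expansions.append((toks + [tok], loss - logp.get(tok, -math.inf), child))
        expansions.sort(key=lambda b: b[1])   # keep the K lowest-loss sequences
        beams = expansions[:beam_size]
    best = min(finished, key=finished.get)
    return list(best), finished[best]

if __name__ == "__main__":
    cands = [["Lon", "don"], ["Paris"], ["Ber", "lin"]]
    def toy_scorer(prefix):                   # illustrative stand-in for the PLM
        table = {(): {"Lon": -0.1, "Paris": -2.0, "Ber": -3.0}, ("Lon",): {"don": -0.05}}
        return table.get(tuple(prefix), {})
    print(constrained_beam_search(toy_scorer, cands, beam_size=2))
```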
Similarity-Based Inference. For some tasks it is crucial to retrieve nearest neighbors (NN) via embedding similarity in the multilingual space. Based on prior findings concerning multilingual PLMs (Liu et al., 2021b) and our own preliminary experiments, out-of-the-box Prix-LM produces entity embeddings of insufficient quality. However, we can transform them into entity encoders via a simple and efficient unsupervised Mirror-BERT procedure (Liu et al., 2021a). In short, Mirror-BERT is a contrastive learning method that calibrates PLMs and converts them into strong universal lexical or sentence encoders. The NN search is then performed with the transformed "Mirror-BERT" Prix-LM variant. 3
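A minimal sketch of this similarity-based inference is shown below: entity names are encoded, subtoken states are mean-pooled, and nearest neighbours are retrieved by cosine similarity. Loading xlm-roberta-base here is only a stand-in assumption; a Mirror-BERT-tuned Prix-LM checkpoint would be loaded the same way.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "xlm-roberta-base"   # placeholder for a Prix-LM (+ Mirror) checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL).eval()

@torch.no_grad()
def embed(names):
    batch = tok(names, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state               # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)         # mean pooling over subtokens
    return torch.nn.functional.normalize(pooled, dim=-1)

def nearest(query, candidates):
    sims = (embed([query]) @ embed(candidates).T).squeeze(0)  # cosine similarity
    best = int(sims.argmax())
    return candidates[best], float(sims[best])

if __name__ == "__main__":
    print(nearest("London", ["Londres", "Berlín", "París"]))
```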
Experiments and Results
In this section, we evaluate Prix-LM in both high-resource and low-resource languages. The focus is on four tasks that are directly or indirectly related to KB construction. 1) Link prediction (LP) is the core task for automatic KB construction since it discovers missing links given incomplete KBs.
2) Knowledge probing from LMs (LM-KP) can also be seen as a type of KB completion task as it performs entity retrieval given a subject entity and a relation. 3) Cross-lingual entity linking (XEL) and 4) Bilingual lexicon induction (BLI) can be very useful for multilingual KB construction as they help to find cross-lingual entity links.
Experimental Setup
Training Configuration. We train our model on knowledge facts for 87 languages which are represented both in DBpedia and in XLM-R (Base). The training set comprises 52M monolingual knowledge triples and 142M cross-lingual links. We implement our model using Huggingface's Transformers library (Wolf et al., 2020), and primarily follow the optimization hyperparameters of XLM-R. 4 For LP we use the final checkpoint; for LM-KP, results are reported using the checkpoint at 20k steps; for BLI and XEL, the checkpoint at 150k steps is used. We discuss the rationale for checkpoint selection in §3.6.
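A minimal sketch of the corresponding optimisation setup is given below (Adam with beta2 = 0.98, learning rate 5e-5, 6% linear warmup followed by decay to 0, as summarised in footnote 4); the checkpoint name and the total step count are placeholders rather than the actual training configuration.

```python
import torch
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup

model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")  # stand-in for Prix-LM
total_steps = 100_000                       # placeholder: epochs * batches per epoch
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.9, 0.98))
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.06 * total_steps),  # warmup over the first 6% of steps
    num_training_steps=total_steps,            # then linear decay to 0
)
```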
Inference Configuration. For similarity-based inference, as in previous work (Liu et al., 2021a) the Mirror-BERT procedure relies on the 10k most frequent English words for contrastive learning. 5 For constrained beam search, used with the LP task, we set the hyperparameter K to 50.
Link Prediction
(Short) Task Description. Following relevant prior work (Bosselut et al., 2019; Yao et al., 2019), given a subject entity e 1 and relation r, the aim of the LP task is to determine the object entity e 2.

4 In summary: The model is trained for 5 epochs with the Adam optimizer (Kingma and Ba, 2015) using β1 = 0.9, β2 = 0.98 and a batch size of 1,024. The learning rate is 5e−5, with a warmup for the first 6% steps followed by a linear learning rate decay to 0. We use dropout (Srivastava et al., 2014) with a rate of 0.1 on all layers and attention weights. For efficiency, we drop all triples with sequence lengths ≥ 30, which only constitutes less than 1.3% of all triples. The full training takes about 5 days with one Nvidia RTX 8000 GPU.
5 We use English words only for simplicity and direct comparisons. According to Liu et al. (2021a), Mirror-BERT tuning which uses words from the actual test language pair might yield even better performance. Our training config is identical to the original Mirror-BERT work, except the use of a smaller batch size (128 instead of 200) due to hardware constraints.
Task Setup. We evaluate all models on DBpedia. We randomly sample 10% of the monolingual triples as the test set for 9 languages and use the remaining data to train the model. 6 The data statistics are reported in Tab. 1. The evaluation metrics are standard Hits@1, Hits@3, and Hits@10. 7
Models in Comparison.
We refer to our model as Prix-LM (All) and compare it to the following groups of baselines. First, we compare to three representative and widely used KG embedding models 8 : 1) TransE (Bordes et al., 2013) interprets relations as translations from source to target entities, 2) ComplEx (Trouillon et al., 2016) uses complex-valued embedding to handle binary relations, while 3) RotatE interprets relations as rotations from source to target entities in the complex space. In fact, RotatE additionally uses a self-adversarial sampling strategy in training, and offers state-of-the-art performance on several KG completion benchmarks (Rossi et al., 2021). Second, Prix-LM (Single) is the ablated monolingual version of Prix-LM, which uses an identical model structure to Prix-LM (All), but is trained only on monolingual knowledge triples of the test language. Training adopts the same strategy from prior work on pretraining monolingual LMs for KG completion (Bosselut et al., 2019; Yao et al., 2019). We train Prix-LM (Single) for the same number of epochs as Prix-LM (All): this means that the embeddings of subtokens in the test language are updated for the same number of times.

6 Following Bordes et al. (2013), we use the filtered setting, removing corrupted triples appearing in the training or test set. Moreover, following existing LP tasks (Toutanova et al., 2015; Dettmers et al., 2018) we remove redundant triples (e 1 , r 1 , e 2 ) from the test set if (e 2 , r 2 , e 1 ) appears in the training set.
7 We do not calculate mean rank and mean reciprocal rank as constrained beam search does not yield full ranked lists.
Results and Discussion. The results in Tab. 1 show that Prix-LM (All) achieves the best Hits@1 on average, outperforming TransE, ComplEx, and RotatE by 21.5%, 11.8%, and 5.6%, respectively. It also outperforms the baselines on Hits@3 and Hits@10. Moreover, Prix-LM (All) outperforms its monolingual counterpart Prix-LM (Single) in almost all languages: the average improvements are > 3% across all metrics, demonstrating that the model can effectively leverage complementary knowledge captured and transferred through massive pretraining on multiple languages. Interestingly, the advantages of Prix-LM (both Single and All models) over the baselines are not restricted to low-resource languages but are observed across the board. This hints that, beyond integrating multilingual knowledge, Prix-LM is essentially a well-suited framework for KB completion in general.
Cross-lingual Entity Linking
(Short) Task Description. In XEL 9 , a model is asked to link an entity mention in any language to a corresponding entity in an English KB or in a language-agnostic KB. 10 XEL can contribute to multilingual KB construction in two ways. First, since XEL links mentions extracted from free text to KBs, it can be leveraged to enrich KBs with textual attributes. Second, it also provides a way to disambiguate knowledge with similar surface forms but different grounded contexts.

9 XEL in our work refers only to entity mention disambiguation; it does not cover the mention detection subtask.
10 A language-agnostic KB has universal interlingual concepts without being restricted to a specific language.
Models in Comparison.
For XEL and all following tasks, we use multilingual MLMs (i.e. mBERT and XLM-R) as our baselines as they are the canonical models frequently used in prior work and have shown promising results in cross-lingual entitycentric tasks (Vulić et al., 2020;Liu et al., 2021b;Kassner et al., 2021). We remind the reader that the 'Mirror-BERT' fine-tuning step is always applied, yielding an increase in performance.
Results and Discussion. On LR-XEL, Prix-LM achieves gains for all three languages over its base model XLM-R. Especially on mr, where XLM-R and mBERT are almost fully ineffective, Prix-LM leads to an absolute accuracy gain of over 20%, again showing the effectiveness of incorporating multilingual structural knowledge. On lo, mBERT is slightly better than Prix-LM, but Prix-LM again yields gains over its base model: XLM-R. On XL-BEL, a large increase is again observed for almost all target languages (see Prix-LM (All) + Mirror). The only exception is English, where the model performance drops by 3.5%. This is likely a consequence of trading off some of the extensive English knowledge when learning on multilingual triples. Beyond English, substantial improvements are obtained in other Indo-European languages including Spanish, German and Russian (+10-20%), stressing the necessity of knowledge injection even for high-resource languages. As with LP, we also experimented with Prix-LM trained with only monolingual data (see Prix-LM (Single) + Mirror). Except for English, very large boosts are obtained on all other languages when comparing All and Single models, confirming that multilingual training has provided substantial complementary knowledge.
Bilingual Lexicon Induction
(Short) Task Description. BLI aims to find, for a word in a source language, its counterpart word or phrase in a target language. Similar to XEL, BLI can also evaluate how well a model can align a cross-lingual (entity) space.
Task Setup. We adopt the standard supervised embedding alignment setting (Glavaš et al., 2019) of VecMap (Artetxe et al., 2018) with 5k translation pairs reserved for training (i.e., for learning linear alignment maps) and an additional 2k pairs for testing. The similarity metric is the standard cross-domain similarity local scaling (CSLS; Lample et al. 2018). 12 We experiment with six language pairs and report accuracy (i.e., Hits@1) and mean reciprocal rank (MRR).
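For reference, a minimal NumPy sketch of CSLS retrieval over already-aligned, row-normalised embedding matrices is given below; it is not the VecMap implementation, and the neighbourhood size k = 10 is only the common default assumption.

```python
import numpy as np

def csls_scores(src: np.ndarray, tgt: np.ndarray, k: int = 10) -> np.ndarray:
    """src: (n_s, d), tgt: (n_t, d); rows are assumed L2-normalised."""
    sims = src @ tgt.T                                   # cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # mean sim. of each source word
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # / target word to its k NNs
    return 2 * sims - r_src[:, None] - r_tgt[None, :]

def translate(src, tgt, tgt_words, k=10):
    best = csls_scores(src, tgt, k).argmax(axis=1)       # CSLS nearest neighbour
    return [tgt_words[i] for i in best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = rng.normal(size=(5, 8)); s /= np.linalg.norm(s, axis=1, keepdims=True)
    t = rng.normal(size=(7, 8)); t /= np.linalg.norm(t, axis=1, keepdims=True)
    print(translate(s, t, [f"w{i}" for i in range(7)], k=3))
```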
Results and Discussion. The results are provided in Tab. 4. There are accuracy gains observed on 4/6 language pairs, while MRR improves for all pairs. These findings further confirm that Prix-LM in general learns better entity representations and improved cross-lingual entity space alignments.
Prompt-based Knowledge Probing
(Short) Task Description. LM-KP (Petroni et al., 2019) queries a PLM with (typically human-designed) prompts/templates such as "Dante was born in [MASK]." (the answer should be Florence). It can be viewed as a type of KB completion since the queries and answers are converted from/into KB triples: in this case, {Dante, born-in, Florence}.
Task Setup. We probe how much knowledge a PLM contains in multiple languages by relying on the multilingual LAnguage Model Analysis (mLAMA) benchmark (Kassner et al., 2021). To ensure a strictly fair comparison, we only compare XLM-R and Prix-LM. We exclude multi-token answers as they require multi-token decoding modules, which will be different for causal LMs like Prix-LM versus MLMs such as XLM-R. For both Prix-LM and XLM-R, we take the word with the highest probability at the [Mask] token as the model's prediction. Punctuation, stop words, and incomplete WordPieces are filtered out from the vocabulary during prediction. 13
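The single-token probing protocol can be sketched as below for the MLM baseline: read the distribution at the mask position and drop punctuation and incomplete word pieces before taking the top prediction. The prompt, the simplified filtering heuristic (the paper additionally removes stop words), and the use of xlm-roberta-base are assumptions for illustration; a causal Prix-LM checkpoint would instead score candidate tokens autoregressively.

```python
import string
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "xlm-roberta-base"
tok = AutoTokenizer.from_pretrained(MODEL)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def probe(prompt: str, top_k: int = 5):
    ids = tok(prompt.replace("[MASK]", tok.mask_token), return_tensors="pt")
    mask_pos = int((ids["input_ids"][0] == tok.mask_token_id).nonzero())
    logits = mlm(**ids).logits[0, mask_pos]
    preds = []
    for i in logits.argsort(descending=True).tolist():
        piece = tok.convert_ids_to_tokens(i)
        word = piece.replace("▁", "")
        if not piece.startswith("▁"):                      # incomplete word piece
            continue
        if not word or all(c in string.punctuation for c in word):
            continue                                       # punctuation-only piece
        preds.append(word)
        if len(preds) == top_k:
            break
    return preds

if __name__ == "__main__":
    print(probe("Dante was born in [MASK]."))
```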
Results and Discussion. Tab. 5 indicates that Prix-LM achieves better performance than XLM-R on mLAMA across all languages. We suspect that the benefits of Prix-LM training are twofold. First, multilingual knowledge is captured in the unified LM representation, which improves LM-KP as a knowledge-intensive task. The effect of this is particularly pronounced on low-resource languages such as fi, et and hu, showing that transferring knowledge from other languages is effective. Second, the Prix-LM training on knowledge triples is essentially an adaptive fine-tuning step (Ruder, 2021) that exposes knowledge from the existing PLMs' weights. We will discuss this conjecture, among other analyses, in what follows.
Additional Analysis
Inconsistency of the Optimal Checkpoint across Tasks (Fig. 2). How many steps should we pretrain Prix-LM on knowledge triples? The plots in Fig. 2 reveal that the trend is different on tasks that require language understanding (mLAMA) versus tasks that require only entity representations (LP and XL-BEL). On mLAMA, Prix-LM's performance increases initially and outperforms the base model (XLM-R, at step 0). However, after around 20k steps it starts to deteriorate. We speculate that this might occur due to catastrophic forgetting, as mLAMA requires NLU capability to process queries formatted as natural language. Training on knowledge triples may expose the PLMs' capability of generating knowledge at the earlier training stages: this explains the steep increase from 0-20k iterations. However, training on knowledge triples for (too) long degrades the model's language understanding capability. On the other hand, longer training seems almost always beneficial for LP and XL-BEL: these tasks require only high-quality entity embeddings instead of understanding complete sentences. A nuanced difference between LP and XL-BEL is that Prix-LM's performance on XL-BEL saturates after 100k-150k steps, while on LP the Hits@1 score still increases at 200k steps.
Link Prediction on Unseen Entities (Tab. 6).
KG embedding models such as RotatE require that entities encountered at inference time have been seen in training. However, Prix-LM is able to derive (non-random) representations also for unseen entities. We evaluate this ability of Prix-LM on triples (s, r, o) where the subject entity s or object entity o is unseen during training. The results indicate that Prix-LM can generalize well also to unseen entities.
Related Work
Injecting Structured Knowledge into LMs. Conceptually, our work is most related to recent work on knowledge injection into PLMs. KnowBERT (Peters et al., 2019) connects entities in text and KGs via an entity linker and then re-contextualizes BERT representations conditioned on the KG embeddings. KG-BERT (Yao et al., 2019) trains BERT directly on knowledge triples by linearizing their entities and relations into a sequence and predicting the plausibility of the sequence. Wang et al. (2021a) improve KG-BERT by splitting a subject-relation-object knowledge triple into a subject-relation pair representation and an object entity representation, then modeling their similarities with a dual/Siamese neural network. Other work on knowledge injection such as K-BERT (Liu et al., 2020a) and ERNIE (Zhang et al., 2019) mainly aims to leverage external knowledge to improve on downstream NLU tasks instead of performing KG completion. While prior studies have focused on incorporating monolingual (English) structured knowledge into PLMs, our work focuses on connecting knowledge in many languages, allowing knowledge in each language to be transferred and collectively enriched.
Multilingual LMs pretrained via MLM, such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), cover 100+ languages and are the starting point (i.e. initialization) of Prix-LM. 14 With the notable exception of Calixto et al. (2021), who rely on the prediction of Wikipedia hyperlinks as an auxiliary/intermediate task to improve XLM-R's multilingual representation space for cross-lingual transfer, there has not been any work on augmenting multilingual PLMs with structured knowledge. Previous work has indicated that off-the-shelf mBERT and XLM-R fail on knowledge-intensive multilingual NLP tasks such as entity linking and KG completion, and especially so for low-resource languages (Liu et al., 2021b). These are the crucial challenges addressed in this work.
KB Completion and Construction. Before PLMs, rule-based systems and multi-staged information extraction pipelines were typically used for automatic KB construction (Auer et al., 2007; Fabian et al., 2007; Hoffart et al., 2013; Dong et al., 2014). However, such methods require expensive human effort for rule or feature creation (Carlson et al., 2010; Vrandečić and Krötzsch, 2014), or they rely on (semi-)structured corpora with easy-to-consume formats (Lehmann et al., 2015). Petroni et al. (2019) showed that modern PLMs such as BERT could also be used as KBs: by querying PLMs with fill-in-the-blank-style queries, a substantial amount of factual knowledge can be extracted. This in turn provides an efficient way to address the challenges of traditional KB methods. Jiang et al. (2020) and Kassner et al. (2021) extended the idea to extracting knowledge from multilingual PLMs. The work in monolingual settings closest to ours is COMET (Bosselut et al., 2019): Prix-LM can be seen as an extension of this idea to multilingual and cross-lingual setups. Prix-LM's crucial property is that it enables knowledge population by transferring complementary structured knowledge across languages. This can substantially enrich (limited) prior knowledge also in monolingual KBs.
In another line of work, multilingual KG embeddings (Chen et al., 2017; Sun et al., 2020a, 2021) were developed to support cross-KG knowledge alignment and link prediction. Such methods produce a unified embedding space that allows link prediction in a target KG based on the aligned prior knowledge in other KGs (Chen et al., 2020). Research on multilingual KG embeddings has made rapid progress recently, e.g., see the survey of Sun et al. (2020b). However, these methods focus on a closed-world scenario and are unable to leverage open-world knowledge from natural language texts. Prix-LM combines the best of both worlds and is able to capture and combine knowledge from (multilingual) KGs and multilingual texts.
Conclusion
We have proposed Prix-LM, a unified multilingual representation model that can capture, propagate and enrich knowledge in and from multilingual KBs. Prix-LM is trained via a causal LM objective, utilizing monolingual knowledge triples and cross-lingual links. It embeds knowledge from KBs in different languages into a shared representation space, which benefits transferring complementary knowledge between languages. We have run comprehensive experiments on 4 tasks relevant to KB construction and 17 diverse languages, with performance gains that demonstrate the effectiveness and robustness of Prix-LM for automatic KB construction in multilingual setups. The code and the pretrained models will be available online at: https://github.com/luka-group/prix-lm.
Figure 1: An illustration of the main idea supporting Prix-LM: it infuses complementary multilingual knowledge from KGs into a multilingual causal LM; e.g., the Japanese KG stores more comprehensive genre information for The Tale of Genji than KGs in other languages. Through cross-lingual links (translations), such knowledge is then propagated across languages.
Figure 2: Prix-LM performance on LP, mLAMA, and XL-BEL over different checkpoints. Results of a sample of languages are shown for clarity.
Table 1: Link prediction statistics and results. The languages (see Appendix for the language codes) are ordered based on their proximity to English (e.g., it, de and fr being close to en and hu and ja are distant to en; Chiswick and Miller 2005). fi, et, tr and hu have less than 1M Wikipedia articles and are relatively low-resource.

lang.→             en     it     de     fr     fi     et     tr     hu     ja    avg.
# entities (K)    2175    525    304    671    187     32    159    151    422     -
# triples (K)     7256   1543    618   1912    634     66    528    535   1159     -
Hits@1
TransE            11.3    4.1    4.8    3.0    2.4    2.6    6.1   11.4    1.9    5.3
ComplEx           15.3   12.8   11.6   16.3   18.8   16.3   16.3   15.0   12.7   15.0
RotatE            19.7   17.3   17.5   23.0   19.8   21.5   26.2   29.8   15.8   21.2
Prix-LM (Single)  25.5   17.9   17.8   23.8   19.0   16.1   37.6   32.6   19.7   23.3
Prix-LM (All)     27.3   22.7   20.8   25.0   22.4   25.8   41.8   35.1   20.6   26.8
Hits@3
TransE            28.0   25.0   24.0   27.2   26.0   20.0   31.0   36.1   20.6   26.4
ComplEx           22.3   22.2   20.7   24.0   30.1   24.8   26.9   29.0   22.9   24.8
RotatE            29.6   28.4   26.8   30.1   32.8   34.6   37.4   42.6   26.7   32.1
Prix-LM (Single)  34.1   27.7   24.8   29.6   27.6   25.6   46.1   44.1   29.4   32.1
Prix-LM (All)     35.6   32.2   29.7   32.4   31.8   36.7   49.8   47.5   29.4   36.1
Hits@10
TransE            41.4   42.3   38.8   43.5   47.9   38.3   50.3   51.0   37.9   43.5
ComplEx           32.2   34.7   32.7   35.7   44.4   35.6   41.7   45.0   35.5   37.5
RotatE            39.1   42.2   40.0   44.9   47.7   46.4   52.3   55.2   40.0   45.3
Prix-LM (Single)  42.5   38.2   33.3   37.6   39.2   34.8   54.3   55.4   36.7   41.3
Prix-LM (All)     44.3   42.5   40.1   40.3   44.0   47.5   58.7   56.8   38.0   45.8
lang.→              te     lo     mr    avg.
XLM-R + Mirror      2.1    4.0    0.1    2.1
mBERT + Mirror      3.2    8.0    0.1    3.8
Prix-LM + Mirror  13.09    7.6   21.0   13.9

Table 2: XEL accuracy on the LR-XEL task for low-resource languages.
8 The KG embedding baselines are implemented based on OpenKE (Han et al., 2018) and trained using the default hyper-parameters in the library.

lang.→                      en    es    de    fi    ru    tr    ko    zh    ja    th   avg.
XLM-R + Mirror            75.4  34.0  13.7   4.2   7.4  19.5   1.8   1.4   2.7   3.2  16.3
mBERT + Mirror            73.1  40.1  16.6   4.4   5.0  22.0   1.9   1.1   2.3   2.4  16.9
Prix-LM (Single) + Mirror 75.4  39.5  16.9   8.4  12.4  27.4   2.1   3.5   4.1   6.9  19.7
Prix-LM (All) + Mirror    71.9  49.2  25.7  15.2  24.5  34.1   9.3   6.9  13.7  14.5  26.5

Table 3: XEL Accuracy on XL-BEL.
lang.→             en-it       en-tr       en-ru       en-fi       fi-ru       fi-tr
model↓            Acc  MRR    Acc  MRR    Acc  MRR    Acc  MRR    Acc  MRR    Acc  MRR
XLM-R + Mirror   12.0 16.6    6.9  8.6    2.9  5.9    5.9  7.4    2.0  3.3    5.7  7.0
Prix-LM + Mirror 11.5 20.4    6.7 11.1    3.7 11.4    6.9 11.5    4.2  9.0    7.7 11.0

Table 4: Accuracy and MRR for BLI. mBERT results are omitted since it performs much worse than XLM-R.

lang.→    en    it    de    fr    fi    et    tr    hu   avg.
XLM-R    21.0  19.3  13.9   7.6   5.6   6.1  20.5   6.1  12.5
Prix-LM  23.8  21.8  20.7  17.8  16.1   7.4  23.9  13.1  18.1

Table 5: Accuracy on mLAMA.
lang.→    en    it    de    fr    fi    et    tr    hu    ja   avg.
Hits@1   17.2  22.9  17.0  16.0  18.3  31.3  19.2  28.5  12.4  20.3
Hits@3   24.7  30.1  24.0  22.3  23.5  37.7  24.7  38.5  19.0  27.1
Hits@10  31.0  34.9  28.9  27.8  31.9  42.3  30.8  44.2  23.6  32.8

Table 6: LP scores of Prix-LM (All) on unseen entities.
2 XLM-R's dedicated multilingual tokenizer is used to process entity and relation names in each language.
3 For a fair comparison, we also apply the same transformation on baseline PLMs.
11 Marathi (mr, an Indo-Aryan language spoken in Western India, written in Devanagari script), Lao (lo, a Kra-Dai language written in Lao script) and Telugu (te, a Dravidian language spoken in southeastern India, written in Telugu script).
12 Note that the models are not fine-tuned but only their embeddings are used. Further, note that the word translation pairs in the BLI test sets have < 0.001% overlap with the cross-lingual links used in Prix-LM training.
13 The exclusion of multi-token answers and also a customised set of non-essential tokens makes our results incomparable with the original paper. However, this is a fair probing setup for comparing Prix-LM and XLM-R since they share the same tokenizer and their prediction candidate spaces will thus be the same.
14 We will explore autoregressive multilingual PLMs such as mBART (Liu et al., 2020b) and mT5 (Xue et al., 2021) in the future. While they adopt autoregressive training objectives at pretraining, it is non-trivial to extract high-quality embeddings from such encoder-decoder architectures, which is crucial for some tasks in automatic KB completion (e.g. XEL and BLI).
Acknowledgement
We appreciate the reviewers for their insightful comments and suggestions. Wenxuan Zhou

A Language Codes

B Constrained Beam Search Algorithm
The detailed algorithm of constrained beam search is described in Alg. 1.

Algorithm 1: Constrained Beam Search
Input: Subject entity s, relation p, set of object entities O, maximum entity length L, size of expansion set K, PLM vocabulary set V.
Output: Predicted entity.
Create the initial sequence X 0 by concatenating s and p. Create a set of sequences X = ∅. In each round, extend the retained sequences with subtokens from V, add the corresponding sequence losses, and add the new sequences to X and X t . Remove the sequences in X t that cannot expand to entities in O. Keep at most K sequences in X t with the smallest loss. For object entities that appear in X, return the one with the smallest loss.
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In EMNLP 2017.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In ACL 2018.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web, pages 722-735. Springer.
Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question answering tasks. In EMNLP 2018.
Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl_1):D267-D270.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NeurIPS 2013.
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In ACL 2019.
Iacer Calixto, Alessandro Raganato, and Tommaso Pasini. 2021. Wikipedia entities as rendezvous across languages: Grounding multilingual language models by predicting Wikipedia hyperlinks. In NAACL 2021.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom M Mitchell. 2010. Toward an architecture for never-ending language learning. In AAAI 2010.
Muhao Chen, Weijia Shi, Ben Zhou, and Dan Roth. 2021. Cross-lingual entity alignment with incidental supervision. In EACL 2021.
Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI 2017.
Xuelu Chen, Muhao Chen, Changjun Fan, Ankith Uppunda, Yizhou Sun, and Carlo Zaniolo. 2020. Multilingual knowledge graph completion via ensemble knowledge transfer. In EMNLP 2020 (Findings).
Barry R Chiswick and Paul W Miller. 2005. Linguistic distance: A quantitative measure of the distance between English and other languages. Journal of Multilingual and Multicultural Development, 26(1):1-11.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL 2020.
Zihang Dai, Lei Li, and Wei Xu. 2016. CFO: Conditional focused neural question answering with large-scale knowledge bases. In ACL 2016.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI 2018.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL 2019.
Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In KDD 2014.
MS Fabian, Kasneci Gjergji, Weikum Gerhard, et al. 2007. YAGO: A core of semantic knowledge unifying WordNet and Wikipedia. In WWW 2007.
Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In ACL 2019.
Xu Han, Shulin Cao, Lv Xin, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. OpenKE: An open toolkit for knowledge embedding. In EMNLP 2018.
Johannes Hoffart, Fabian M Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence, 194:28-61.
Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and S Yu Philip. 2022. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33(2):494-514.
Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020. X-FACTR: Multilingual factual knowledge retrieval from pretrained language models. In EMNLP 2020.
Nora Kassner, Philipp Dufter, and Hinrich Schütze. 2021. Multilingual LAMA: Investigating knowledge in multilingual pretrained language models. In EACL 2021.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR 2015.
Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In ICLR 2018.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. DBpedia - a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195.
Fangyu Liu, Ivan Vulić, Anna Korhonen, and Nigel Collier. 2021a. Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders. In EMNLP 2021.
Fangyu Liu, Ivan Vulić, Anna Korhonen, and Nigel Collier. 2021b. Learning domain-specialised representations for cross-lingual biomedical entity linking. In ACL-IJCNLP 2021.
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020a. K-BERT: Enabling language representation with knowledge graph. In AAAI 2020.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020b. Multilingual denoising pre-training for neural machine translation. TACL, 8:726-742.
Zhibin Liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang. 2019. Knowledge aware conversation generation with explainable reasoning over augmented graphs. In EMNLP-IJCNLP 2019.
Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In ACL 2018.
Heiko Paulheim. 2018. How much is a triple? Estimating the cost of knowledge graph creation. In ISWC 2018.
Matthew E Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. In EMNLP-IJCNLP 2019.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In EMNLP-IJCNLP 2019.
Andrea Rossi, Denilson Barbosa, Donatella Firmani, Antonio Matinata, and Paolo Merialdo. 2021. Knowledge graph embedding for link prediction: A comparative analysis. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(2):1-49.
Sebastian Ruder. 2021. Recent advances in language model fine-tuning. http://ruder.io/recent-advances-lm-fine-tuning.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958.
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In WWW 2007.
Zequn Sun, Muhao Chen, and Wei Hu. 2021. Knowing the no-match: Entity alignment with dangling cases. In ACL-IJCNLP 2021.
Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020a. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In AAAI 2020.
Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020b. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proceedings of the VLDB Endowment, 13(11):2326-2340.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. In ICLR 2019.
Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In EMNLP 2015.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML 2016.
Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: A free collaborative knowledgebase. Commun. ACM, 57(10):78-85.
Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In EMNLP 2020.
Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. 2021a. Structure-augmented text representation learning for efficient knowledge graph completion. In WWW 2021.
Chenguang Wang, Xiao Liu, and Dawn Song. 2020. Language models are open knowledge graphs. arXiv preprint arXiv:2010.11967.
Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. DKN: Deep knowledge-aware network for news recommendation. In WWW 2018.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021b. K-Adapter: Infusing knowledge into pre-trained models with adapters. In ACL-IJCNLP 2021 (Findings).
Xiang Wang, Tinglin Huang, Dingxian Wang, Yancheng Yuan, Zhenguang Liu, Xiangnan He, and Tat-Seng Chua. 2021c. Learning intents behind interactions with knowledge graph for recommendation. In WWW 2021.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In EMNLP 2020: System Demonstrations.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In NAACL 2021.
Shiquan Yang, Rui Zhang, and Sarah Erfani. 2020. GraphDialog: Integrating graph knowledge into end-to-end task-oriented dialogue systems. In EMNLP 2020.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS 2019.
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. arXiv preprint arXiv:1909.03193.
Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. 2016. Collaborative knowledge base embedding for recommender systems. In KDD 2016.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In ACL 2019.
Shuyan Zhou, Shruti Rijhwani, John Wieting, Jaime Carbonell, and Graham Neubig. 2020. Improving candidate generation for low-resource cross-lingual entity linking. TACL, 8:109-124.
Code: https://github.com/luka-group/prix-lm

CLUES: A Benchmark for Learning Classifiers using Natural Language Explanations

Rakesh R Menon (rrmenon@cs.unc.edu), Sayan Ghosh (sayghosh@cs.unc.edu), and Shashank Srivastava (ssrivastava@cs.unc.edu)
UNC Chapel Hill

In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 22-27, 2022.
DOI: 10.18653/v1/2022.acl-long.451 | PDF: https://www.aclanthology.org/2022.acl-long.451.pdf | arXiv: 2204.07142

Abstract
Supervised learning has traditionally focused on inductive learning by observing labeled examples of a task. In contrast, humans have the ability to learn new concepts from language. Here, we explore learning zero-shot classifiers for structured data purely from language, i.e., from natural language explanations as supervision. For this, we introduce CLUES, a benchmark for Classifier Learning Using natural language ExplanationS, consisting of a range of classification tasks over structured data along with natural language supervision in the form of explanations. CLUES consists of 36 real-world and 144 synthetic classification tasks. It contains crowdsourced explanations describing real-world tasks from multiple teachers and programmatically generated explanations for the synthetic tasks. We also introduce ExEnt, an entailment-based method for training classifiers from language explanations, which explicitly models the influence of individual explanations in making a prediction. ExEnt generalizes up to 18% better (relative) on novel tasks than a baseline that does not use explanations. We identify key challenges in learning from explanations, addressing which can lead to progress on CLUES in the future. Our code and datasets are available at: https://clues-benchmark.github.io.
Introduction
Humans have a remarkable ability to learn concepts through language (Chopra et al., 2019; Tomasello, 1999). For example, we can learn about poisonous mushrooms through an explanation like 'a mushroom is poisonous if it has pungent odor'. Such an approach profoundly contrasts with the predominant paradigm of machine learning, where algorithms extract patterns by looking at scores of labeled examples of poisonous and edible mushrooms. However, it is unnatural to presume the availability of labeled examples for the heavy tail of naturally occurring concepts in the world.

Figure 1: We explore learning classification tasks over structured data from natural language supervision in the form of explanations. The explanations provide declarative supervision about the task, and are not example-specific. This is an example from the UCI Mushroom dataset, one of the 36 real-world datasets for which we collect multiple sets of explanations in CLUES.
This work studies how models trained to learn from natural language explanations can generalize to novel tasks without access to labeled examples. While prior works in this area (Srivastava et al., 2017, 2018; Hancock et al., 2018; Murty et al., 2020; Andreas et al., 2018; Wang* et al., 2020; Ye et al., 2020; Zhou et al., 2020) have explored explanations as a source of supervision, they evaluate models on a small number of tasks (2-3 relation extraction tasks in (Hancock et al., 2018; Wang* et al., 2020; Murty et al., 2020; Zhou et al., 2020), 7 email categorization tasks (Srivastava et al., 2017)). Owing to the paucity of large-scale benchmarks for learning from explanations over diverse tasks, we develop CLUES, a benchmark of classification tasks paired with natural language explanations. Over the last few decades, researchers and engineers alike have put immense effort into constructing structured and semi-structured knowledge bases (e.g., structured tables on Wikipedia, e-commerce sites, etc.). Developing models that can reason over structured data is imperative to improve the accessibility of machine learning models, enabling even non-experts to interact with such data. Hence, in this work, we specifically formulate our classification tasks over structured data.
Our benchmark is divided into CLUES-Real and CLUES-Synthetic consisting of tasks from real-world (UCI, Kaggle, and Wikipedia) and synthetic domains respectively. Explanations for CLUES-Real are crowdsourced to mimic the diversity and difficulty of human learning and pedagogy. For CLUES-Synthetic, we generate the explanations programmatically to explicitly test models' reasoning ability under a range of structural and linguistic modifications of explanations.
We train models with a mix of explanations and labeled examples, in a multi-task setup, over a set of seen classification tasks to induce generalization to novel tasks, where we do not have any labeled examples. Ye et al. (2021) refer to this problem setup as "cross-task generalization". Some recent methods on cross-task generalization from language use instructions/prompts (Mishra et al., 2022;Sanh et al., 2022;Wei et al., 2021) describing information about 'what is the task?' to query large language models. In contrast, language explanations in CLUES provide the logic for performing the classification task, or intuitively 'how to solve the task?'. For the running example of mushroom classification, an instruction/prompt might be 'can you classify a mushroom with pungent odor as poisonous or edible?'. On the other hand, an example of an explanation in CLUES is 'a mushroom is poisonous if it has pungent odor'.
We find that simply concatenating explanations to the input does not help pre-trained models, like RoBERTa (Liu et al., 2019), generalize to new tasks. Thus, we develop ExEnt, an entailment-based model for learning classifiers guided by explanations, which explicitly models the influence of individual explanations in deciding the label of an example. ExEnt shows a relative improvement of up to 18% over other baselines on unseen tasks.
To identify the challenges of learning from explanations, we perform extensive analysis over synthetic tasks. Our analysis explores how the structure of an explanation (simple clauses vs. nested clauses) and the presence of different linguistic components in explanation (conjunctions, disjunctions, and quantifiers) affect the generalization ability of models.
The rest of the paper is structured as follows: we describe our crowdsourced benchmark-creation pipeline in §3. In §4, we analyze our collected data. In §5, we describe our models, experiments, and results. We conclude with a brief discussion on the contributions and our findings, followed by a statement of ethics and broader impact. Our contributions are:
• We introduce CLUES, a benchmark for learning classifiers over structured data from language.
• We develop ExEnt, an entailment-based model for learning classifiers guided by explanations. ExEnt shows a relative improvement of up to 18% over other baselines on generalization to novel tasks.
• We explore the effect on the generalization ability of models learning from language by ablating the linguistic components and structure of explanations over our benchmark's synthetic tasks.
Related Work
Learning concepts from auxiliary information: Prior work has explored techniques to incorporate 'side-information' to guide models during training (Mann and McCallum, 2010; Ganchev et al., 2010). More recently, researchers have explored using language in limited data settings for learning tasks such as text classification (Srivastava et al., 2017, 2018; Hancock et al., 2018) and question answering (Wang* et al., 2020; Ye et al., 2020). However, we diverge from these works by exploring the generalization ability of classifiers learned by using language over novel tasks, as opposed to gauging performance only on seen tasks. Our setup is closest to that of Srivastava et al. (2017, 2018). We differ from these works as (1) our benchmark comprises a large set of classification tasks spanning diverse concepts for learning from explanations, as opposed to the limited set of tasks in prior work, and (2) our benchmark is domain agnostic in the source of classification tasks considered as long as we can represent the inputs of the task in a tabular (structured) format.
Few-shot & Zero-shot learning: Large pretrained language models (LMs) (Devlin et al., 2019;Liu et al., 2019;Raffel et al., 2020) have been shown to perform impressively well in few-shot settings (Brown et al., 2020;Lester et al., 2021).
Reformulating natural language tasks with patterns has been shown to boost few-shot learning ability for small language models as well (Schick and Schütze, 2021; Tam et al., 2021). More recently, a few works have focused on evaluating the generalization of models to unseen tasks by using prompts and performing multi-task training (Mishra et al., 2022; Ye et al., 2021; Sanh et al., 2022; Min et al., 2021; Chen et al., 2022; Aghajanyan et al., 2021). While the training and evaluation setup is similar, our work is significantly different from these works as (1) the explanations in our work provide rationales for making a classification decision as opposed to explaining a task using prompts, and (2) we explore classification over structured data as opposed to free-form text, by designing a model that can leverage explanations.
Creating CLUES
In this section, we describe our benchmark creation process in detail. In CLUES, we frame classification tasks over structured data represented in tabular format. Based on the source of tables used to construct the classification tasks, we consider two splits of our benchmark, CLUES-Real (real-world datasets) and CLUES-Synthetic (synthetic datasets).
CLUES-Real
We first gather/create classification tasks from UCI, Kaggle, and Wikipedia tables, then collect explanations for each classification task. Upon qualification, the turker advances to the explanation collection phase of the HIT. At this stage, the turker is provided with 15-16 labeled examples of a task in CLUES-Real and we ask them to write explanations describing the logic behind the classification for each class. Turkers are required to submit a minimum of two explanations (≥ 5 tokens each) for each task.
Further, teachers can test their understanding by taking a validation quiz, where they make predictions over new unlabeled examples from the task. Based on their informed classification accuracy, teachers can optionally refine their explanations.
Finally, when turkers are content with their performance, they 'freeze' the explanations and advance to the test-quiz where they are evaluated on a new set of unlabeled examples from the task (different from the validation quiz). 4 We will refer to turkers who have provided responses at this stage as 'teachers' since they provide explanations to 'teach' models about different classification tasks. Verification of explanations: After the explanation collection, we validate the utility of the sets of explanations for a task from each teacher by evaluating how useful they are for other humans in learning the task. For this, a second set of turkers 5 is provided access to the collected explanations from a teacher for a task, but no labeled examples. These turkers are then asked to predict the labels of test examples from the held-out test set, solely based on the provided explanations.
Additionally, we ask turkers in the verification stage to give a Likert rating (1-4 scale) on the usefulness of each explanation. Since the turkers in the verification stage perform the classification task using language explanations from a teacher, we refer to them as 'students' for our setup.
Thus, the tasks in CLUES-Real contain explanations from multiple teachers and multiple students corresponding to a teacher. This provides rich information about variance in teacher and student performance indicating how amenable different tasks are for learning via language. We provide insights into the performance of teachers and students of our setup in §4.
CLUES-Synthetic
The complexity and fuzziness of real-world concepts and the inherent linguistic complexity of crowdsourced explanations can often shroud the aspects of the task that make it challenging for models to learn from explanations. To evaluate models in controlled settings where such aspects are not conflated, we create CLUES-Synthetic, a set of programmatically created classification tasks with varying complexity of explanations (in terms of structure and presence of quantifiers, conjunctions, etc.) and concept definitions. We create tasks in CLUES-Synthetic by first selecting a table schema from a pre-defined set of schemas, then generating individual examples of the task by randomly choosing values (within a pre-defined range, obtained from schema) for each column of the table. Next, we assign labels to each example by using a set of 'rules' for each task. In this context, a 'rule' is a conditional statement (analogous to conditional explanations that we see for real-world tasks) used for labeling the examples. We use the following types of rules that differ in structure and complexity (c i denotes i th clause and l denotes a label):
• Simple: IF c_1 THEN l
• Conjunctive: IF c_1 AND c_2 THEN l
• Disjunctive: IF c_1 OR c_2 THEN l
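For concreteness, a toy sketch of how examples might be sampled from a schema and how the three rule structures above can be written as predicates. The schema, value ranges, and nonce labels below are hypothetical; the actual label assignment in CLUES-Synthetic follows the voting procedure of Algorithm 1 (Appendix A.4).

```python
import random

# Hypothetical schema: attribute -> admissible values.
schema = {"number of hands": [0, 1, 2, 3], "venomous": ["yes", "no"], "arms": ["yes", "no"]}

def sample_example(schema):
    """Randomly choose a value for each column, as described above."""
    return {attr: random.choice(values) for attr, values in schema.items()}

# Clauses are simple attribute tests; rules combine them.
c1 = lambda ex: ex["number of hands"] == 2
c2 = lambda ex: ex["venomous"] == "no"

simple_rule      = lambda ex: c1(ex)               # IF c1 THEN l
conjunctive_rule = lambda ex: c1(ex) and c2(ex)    # IF c1 AND c2 THEN l
disjunctive_rule = lambda ex: c1(ex) or c2(ex)     # IF c1 OR c2 THEN l

ex = sample_example(schema)
print(ex, simple_rule(ex), conjunctive_rule(ex), disjunctive_rule(ex))
```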
Dataset analysis
In this section, we describe the tasks and the collected explanations in CLUES.
Task Statistics: Table 1 shows the statistics of tasks in CLUES. The real-world tasks in our benchmark are from a wide range of domains, such as data corresponding to a simple game (e.g., tic-tac-toe), medical datasets (e.g., identifying liver patients), merit-classification of teachers and students, network-related datasets (e.g., internet-firewall), among others. The synthetic tasks are created using table schemas denoting different domains, such as species of animals, species of birds, etc. (details in Appendix A).
As seen in Table 1, 5.4 explanation sets were collected for each classification task from human teachers on average. Further, each explanation set was verified by 3 students during the verification task. An aggregate of 133 teachers provide 318 explanations for tasks in CLUES-Real. All collected explanations were manually filtered and irrelevant explanations were removed. Lexical analysis of explanations: Table 2a shows the statistics for explanation texts in our dataset. 6 We evaluate the average length of the explanation texts, vocabulary size and number of unique bigrams present in the explanations. Explanation characteristics: Following Chopra et al. (2019), we categorize the explanations based on the different aspects of language (generics, quantifiers, conditional, and negation) present in these explanations. Table 3 shows the statistics of various categories in our dataset. Note that an explanation might belong to more than one category (for example, an explanation like "if the number of hands equal to 2, then it is usually foo" will be categorized as having both conditionals and quantifiers). We found that around 52% of the explanations for the real-world tasks had quantifiers (such as 'some', 'majority', 'most', etc.) in them. A full list of quantifiers present in the data is given in Appendix A.

Table 3: Count of explanations in our dataset based on various aspects of language present in them.
Reading complexity: We analyze the reading complexity of crowdsourced explanations by using Flesch reading ease 7 . Reading complexity values for our crowdsourced explanations vary from 3.12 (professional grade reading level) to 106.67 (easier than 3rd-grade reading level), with a median value of 65.73 (8th/9th-grade reading level). Usefulness of the explanations: During the validation stage, we ask the turkers to provide a rating (on a Likert scale from 1 to 4) on the utility of the explanations for classification. The semantics of ratings are, 1 -'not helpful', 2 -'seems useful', 3 -'helped in predicting for 1 sample', and 4 -'mostly helpful in prediction'. The average rating for the explanations in CLUES-Real is 2.78, denoting most explanations were useful, even if they did not directly help predict labels in some cases. In Figure 2(a), we also provide a histogram of the Likert ratings provided by the students.
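The reading-complexity figures quoted above can be reproduced with, for example, the textstat package (one possible tool; the authors' exact implementation is not specified):

```python
import textstat

explanation = "A mushroom is poisonous if it has a pungent odor."
print(textstat.flesch_reading_ease(explanation))  # higher scores = easier to read
```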
Table 4: Average performance of teachers and students on tasks in CLUES-Real.
           VALIDATION   TEST
Teacher    69%          64%
Student    -            55%

Characteristics of teachers and students: Figure 2(b) shows the normalized teacher performance vs normalized student performance for teacher-student pairs in CLUES-Real. Normalized performance of an individual teacher (or student) on a task is defined as the difference between the performances of the teacher (or student) and an average teacher (or student) for the same task. The positive correlation (ρ = 0.17) suggests that students tend to perform well if taught by well-performing teachers. The positive correlation (ρ = 0.48) in Figure 2(c) indicates that task difficulty (captured by classification accuracy) is well-correlated for a teacher and student on average. On visualizing the difference between an average student and an average teacher performance for each task in CLUES-Real, we find that an average teacher performs better than the average student on most tasks. However, for the 'tic-tac-toe' task in CLUES-Real, we find that the student accuracy was around 13% higher than the average teacher performance. We hypothesize that this task can be solved by commonsense reasoning without relying on the provided explanations, resulting in students performing better than teachers. We quantify the average performance of teachers and students on CLUES-Real in Table 4. 8 We find that students perform worse than teachers on average.

Figure 3: Average student vs average teacher performance for tasks in CLUES-Real. Red lines indicate cases where the student performance is more than the teacher performance. Green lines indicate cases where teachers perform better than students.
Experiment Setup and Models
In this section, we describe our training and evaluation setup, our models, and experimental findings.
Training and Evaluation Setup
Our goal is to learn a model that, at inference, can perform classification over an input x to obtain the class label y, given the set of explanations E for the classification task. Figure 4 shows our setup, where we train our model using multi-task training over a set of tasks T seen and evaluate generalization to a new task, t ∈ T novel . The task split we use for our experiments can be found in Appendix E.1. We select our best model for zero-shot evaluation based on the validation scores on the seen tasks. Since we do not make use of any data from the novel tasks to select our best model, we maintain the true zero-shot setting (Perez et al., 2021).
We encode each structured data example, x, as a text sequence, by linearizing it as a sequence of attribute-name and attribute-value pairs, separated by [SEP] tokens. To explain, the leftmost attribute-name and attribute-value pair of the structured input example in Figure 1 is represented as 'odor | pungent'. The linearization allows us to make use of pre-trained language models for the classification task. Our linearization technique is similar to the one used in Yin et al. (2020), with the exception that we do not use the column type. We will refer to the linearized format of structured inputs as 'Features-as-Text' or 'FaT'.

Footnote: These 9 datasets had extremely few samples (∼5), so this procedure was adopted. The list of crowdsourced tasks can be found in Table 7.
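A minimal sketch of the FaT linearization described above; the exact separator strings and spacing beyond the 'odor | pungent' example are assumptions.

```python
def linearize(example, sep=" [SEP] "):
    """Turn a dict of attribute-name -> attribute-value pairs into a text sequence."""
    return sep.join(f"{name} | {value}" for name, value in example.items())

row = {"odor": "pungent", "cap-color": "brown", "bruises": "yes"}
print(linearize(row))
# odor | pungent [SEP] cap-color | brown [SEP] bruises | yes
```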
Baseline models
For our baselines, we make use of a pre-trained RoBERTa model (Liu et al., 2019). However, RoBERTa with the standard fine-tuning approach does not allow a generalization test, as the number of output classes varies for each task. Furthermore, we cannot train individual class heads at inference since we test zero-shot. Hence, we make the following modifications to make RoBERTa amenable to zero-shot generalization tests: a pre-trained RoBERTa model takes the linearized structured data (FaT) as input and outputs a representation for this context (in the [CLS] token). Next, we run another forward pass using RoBERTa to obtain a representation of the labels based on their text (e.g., 'poisonous' or 'edible' for our example in Figure 1). Finally, we compute the probability distribution over labels by doing a dot-product of the representations of the input and the labels. We train this model using cross-entropy loss. In our experiments, we refer to this model as RoBERTa w/o Exp since the model does not use any explanations.
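A rough sketch of this label-scoring scheme, assuming the Hugging Face transformers and PyTorch libraries; the checkpoint, pooling choice, and example inputs are illustrative rather than the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base")

def cls_embedding(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return enc(**batch).last_hidden_state[:, 0]   # representation at the [CLS] position

fat = "odor | pungent [SEP] cap-color | brown"
label_names = ["poisonous", "edible"]

example_vec = cls_embedding([fat])                    # (1, hidden)
label_vecs = cls_embedding(label_names)               # (num_labels, hidden)
logits = example_vec @ label_vecs.T                   # dot product -> class logits
print(dict(zip(label_names, torch.softmax(logits, dim=-1)[0].tolist())))
```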
We also experiment with a RoBERTa w/ Exp. model where a RoBERTa model takes as input a concatenated sequence of all the explanations for the task along with FaT. The rest of the training setup remains the same as RoBERTa w/o Exp.
We find that a simple concatenation of explanations is not helpful for zero-shot generalization to novel tasks (results in Figure 6). Next, we describe ExEnt which explicitly models the role of each explanation in predicting the label for an example.
ExEnt
To model the influence of an explanation towards deciding a class label, we draw analogies with the entailment of an explanation towards the structured input. Here, given a structured input (premise) and an explanation (hypothesis), we need to decide whether the explanation strengthens the belief about a specific label (entailment), weakens the belief about a specific label (contradiction), or provides no information about a label (neutral). Figure 5 shows the overview of our explanation-guided classification model, ExEnt. Given a structured input and an explanation of a task, let l_exp denote the label mentioned in the explanation, and L denote the set of labels of the task. The entailment model assigns logits p_e, p_c and p_n to the hypothesis being entailed, contradicted or neutral respectively w.r.t. the premise. Based on the label assignment referred to by an explanation, we assign logits to class labels as follows:
• If the explanation mentions to assign a label: assign p_e to l_exp, p_c is divided equally among labels in L \ {l_exp}, and p_n is divided equally among labels in L.
• If the explanation mentions to not assign a label: this occurs if a negation is associated with l_exp. Assign p_c to l_exp, p_e is divided equally among labels in L \ {l_exp}, and p_n is divided equally among labels in L.

We obtain logit scores over the labels of the task corresponding to each explanation as described above. We compute the final label logits by aggregating (using mean) over the label logits corresponding to each explanation of the task. The final label logits are converted to a probability distribution over labels, and we train ExEnt using cross-entropy loss.
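A small, self-contained sketch of this entailment-to-logit mapping and the mean aggregation; the entailment scores below are placeholder numbers, whereas in ExEnt they come from the MNLI-finetuned entailment model.

```python
def logits_from_explanation(p_e, p_c, p_n, l_exp, labels, mentions_negation=False):
    """Distribute entailment (p_e), contradiction (p_c) and neutral (p_n) logits over labels."""
    others = [l for l in labels if l != l_exp]
    scores = {l: p_n / len(labels) for l in labels}        # neutral mass spread over all labels
    if not mentions_negation:                              # explanation says: assign l_exp
        scores[l_exp] += p_e
        for l in others:
            scores[l] += p_c / len(others)
    else:                                                  # explanation says: do NOT assign l_exp
        scores[l_exp] += p_c
        for l in others:
            scores[l] += p_e / len(others)
    return scores

labels = ["poisonous", "edible"]
per_explanation = [
    logits_from_explanation(2.1, -0.3, 0.4, "poisonous", labels),
    logits_from_explanation(0.2, 1.5, 0.1, "edible", labels, mentions_negation=True),
]
# Aggregate by taking the mean over all explanations of the task.
final = {l: sum(s[l] for s in per_explanation) / len(per_explanation) for l in labels}
print(max(final, key=final.get))
```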
In experiments, we consider a pre-trained RoBERTa model fine-tuned on the MNLI (Williams et al., 2017) corpus as our base entailment model. 9 Further, in order to perform the assignment of logits using an explanation, we maintain meta-information for each explanation to (1) determine if the explanation mentions to 'assign' a label or 'not assign' a label, and (2) identify the label mentioned in the explanation. For the synthetic tasks, we generate this meta-information programmatically, while for the explanations in CLUES-Real, the authors manually annotate this meta-information. Additional training details and hyperparameters are provided in Appendix E.

Figure 5: ExEnt takes in concatenated pairs of individual task explanations and features of an example as input and uses a masked language model (MLM) to compute an entailment score for every explanation-feature pair of a task. Next, we map the entailment scores to class logits and finally apply an aggregation function over all the logits to obtain a final class prediction for the example.
Zero-Shot Generalization Performance
We evaluate ExEnt and the baselines on zero-shot generalization to novel tasks in our benchmark as described in §5.1. We train separate models for CLUES-Real and CLUES-Synthetic. Figure 6 shows the generalization performance of all models. On CLUES, we find that ExEnt outperforms the baselines, suggesting that performing entailment as an intermediate step helps aggregate information from multiple explanations better. On CLUES-Real, ExEnt gets an 18% relative improvement over the baselines, while having an 11% relative improvement on CLUES-Synthetic.
To evaluate the utility of our synthetic tasks in enabling transfer learning to real-world tasks, we fine-tune an ExEnt model pre-trained on synthetic tasks. We experiment with three pre-training task sets - CLUES-Synthetic, CLUES-Synthetic (3x) and CLUES-Synthetic (5x) - consisting of 144, 432, and 720 tasks. These larger synthetic task sets are created by sampling tasks from each of the 48 different synthetic task types, similar to how CLUES-Synthetic was created (see §3.2 for reference). We find that pre-training on synthetic tasks boosts the performance of ExEnt on the novel tasks of CLUES-Real by up to 39% (relative) over the RoBERTa w/o Exp. model.
Human Performance
To situate the performance of the automated models, we performed human evaluation for tasks in test split of CLUES-Real using AMT. For this, we sampled at most 50 examples 10 from the test split of tasks in CLUES-Real and each example was 'labeled' by 2 turkers using the explanations of the 'best teacher' (the teacher whose students got the best performance during 'explanation verification' stage; see §3.1.2 for reference). The average human accuracy for this was about 70%. However, the performance numbers of humans and models are not directly comparable as the model looks at all the explanations for the task, whereas the humans observe a small number of explanations. Humans also see multiple examples of the task during the evaluation, which they can use to fine-tune their understanding of a concept. The automated models don't have a mechanism to leverage such data.
Key Challenges
To identify key challenges in learning from explanations, we perform experiments ablating the linguistic components and structure of explanations. For a robust analysis, we generate more tasks for each task type in CLUES-Synthetic, making 100 tasks for each of the 48 different task-types in CLUES-Synthetic (axes of variation include 4 negation types, 3 conjunction/disjunction types, 2 quantifier types, and number of labels; details in Appendix A.5).
We evaluate the generalization performance of ExEnt to novel tasks on each of the different types separately by training separate models for each task type. Figure 7 shows the relative gain in generalization performance of models learned using explanations compared to the performance of the baseline RoBERTa w/o Exp (accuracies have been averaged over the multi-class and binary datasets since the trends remain the same across both). Our results indicate that learning from explanations containing quantifiers is highly challenging. In the presence of quantifiers, models guided by explanations perform on par with the baseline RoBERTa w/o Exp model. Negations also pose a challenge, as indicated by the decline in relative gains of models guided by explanations compared to the RoBERTa w/o Exp model. Structurally complex explanations (containing conjunctions/disjunctions of clauses) are also hard to learn from compared to simple conditional statements. These challenges provide a fertile ground for future research and improvements.
Conclusion
We have introduced CLUES, a benchmark with diverse classification tasks over structured data along with natural language explanations to learn them. CLUES is agnostic to the domain of tasks, allowing the research community to contribute more tasks in the future. We also present ExEnt, an entailment-based model to learn classifiers guided by explanations. Our results are promising and indicate that explicitly modeling the role of each explanation through entailment can enable learning classifiers for new tasks from explanations alone. Future work can explore the open challenges in learning from explanations, such as modeling the influence of quantifiers and negations present in an explanation.
Our empirical analyses here aggregate explanations for a task from multiple teachers. Future work can explore learning from explanations from individual teachers, as well as cross-teacher variance. Alternatively, rather than treating explanations from different teachers homogeneously, future work can model the trustworthiness of a crowd of teachers from their provided explanations.
Ethics and Broader Impact
All tables in CLUES-Real were collected from free public resources (with required attributions) and tables in CLUES-Synthetic were created by us programmatically. We do not collect any personal information from the turkers who participated in our crowdsourced tasks. The dataset has been released without mentioning any personal details of turkers available automatically in AMT (such as turker IDs). The turkers were compensated fairly and the payment per task is equivalent to an hourly compensation that is greater than minimum wage (based on the median time taken by turkers). We provide details of the reward structure for the crowdsourcing tasks in Appendix D. For the Wikipedia mining task in this work, we limited the locale of eligible turkers to the US, UK, New Zealand, and Australia. For other crowdsourcing tasks, we limited the locale of eligible turkers to the US. Further, to ensure good-faith turkers, we required that the approval rate of the turkers be above 98%. Our screening process has selection biases that likely over-sample turkers from demographics that are over-represented on AMT (ethnically white, college-educated, lower-to-medium income, and young) (Hitlin, 2016), and this is likely to affect the type of language usage in the collected explanations.
The broader impact of this research in the longer term could make developing predictive technologies more accessible to ordinary users, rather than data-scientists and experts alone.
Appendix
A Additional details on creating CLUES-Synthetic
In this section we discuss in detail about the various table schemas followed by the details of quantifiers and label assignment for creating synthetic tasks.
A.1 Tables schemas
We define five different table schemas, each corresponding to a different domain. For all the attributes in a schema we define a fixed domain from which values for that attribute can be sampled.
• Species of bird: The classification task here is to classify a bird into a particular species based on various attributes (column names in the table). We define several artificial species of birds using nonce words commonly used in psychological studies (Chopra et al., 2019) such as "dax", "wug", etc.
• Species of animal: The classification task here is to classify an animal into a particular species based on various attributes (column names in the table). Artificial species of animals are again defined using commonly used nonce words such as "dax", "wug", etc.
• Rainfall prediction: This is a binary classification task where the objective is to predict whether it will rain tomorrow based on attributes such as "location", "minimum temperature", "humidity", "atmospheric pressure", etc.
• Rank in league: This is a multi-label classification task where, given attributes such as "win percentage", "power rating", and "field goal rating" of a basketball club, the objective is to predict its position in the league out of 1, 2, 3, 4, or "Not qualified".
• Bond relevance: This is a multi-label classification task where, given attributes such as "user age", "user knowledge", and "user income", the objective is to predict the relevance of a bond out of 5 classes (1 to 5).

In each of the above schemas, the attributes can be either categorical or numeral. For each of the above schemas we also define the range of admissible values for each attribute. Detailed descriptions of the schemas are provided in Tables 8, 9, 10, 11, and 12.
A.2 List of quantifiers
The full list of quantifiers along with their associated probability values are shown in Table 5.
QUANTIFIERS                                                  PROBABILITY
"always", "certainly", "definitely"                          0.95
"usually", "normally", "generally", "likely", "typically"    0.70
"often"                                                      0.50
"sometimes", "frequently"                                    0.30
"occasionally"                                               0.20
"rarely", "seldom"                                           0.10
"never"                                                      0.05

Table 5: Quantifiers and their associated probability values.
A.3 Creating synthetic explanations
We use a template-based approach to convert the set of rules into language explanations. We convert every operator in the clauses into its corresponding language form as follows:
• == → 'equal to'
• != → 'not equal to'
• > → 'greater than'
• >= → 'greater than or equal to'
• < → 'lesser than'
• <= → 'lesser than or equal to'
• !> → 'not greater than'
• !< → 'not lesser than'

Example explanations: real-world - "Mushrooms with pungent or foul odors are poisonous."; "Mostly edible if the stalk-surface-above-ring is smooth."; synthetic - "If arms equal to yes and hair not equal to no, then fem."; "If venomous not equal to no and arms not equal to no, then not gazzer."
For example if we have a rule 'IF number of hands == 2 THEN foo', we convert it into a language explanation as 'If number of hands equal to 2, then foo'. In the presence of quantifiers, we add 'it is [INSERT QUANTIFIER]' before the label. For example if the rule was associated with a quantifier 'usually', the language explanation would be 'If number of hands equal to 2, then it is usually foo'.
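A small sketch of this template-based verbalization, using the operator glosses listed above; the exact template strings beyond the examples shown are assumptions.

```python
OP_TEXT = {
    "==": "equal to", "!=": "not equal to",
    ">": "greater than", ">=": "greater than or equal to",
    "<": "lesser than", "<=": "lesser than or equal to",
    "!>": "not greater than", "!<": "not lesser than",
}

def verbalize(attribute, op, value, label, quantifier=None):
    """Render 'IF attribute op value THEN label' as a language explanation."""
    clause = f"If {attribute} {OP_TEXT[op]} {value}"
    if quantifier:
        return f"{clause}, then it is {quantifier} {label}."
    return f"{clause}, then {label}."

print(verbalize("number of hands", "==", 2, "foo"))
print(verbalize("number of hands", "==", 2, "foo", quantifier="usually"))
# If number of hands equal to 2, then foo.
# If number of hands equal to 2, then it is usually foo.
```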
A.4 Label Assignment using Rules
In Algorithm 1, we detail the procedure for obtaining label assignments for our synthetic tasks. Given that our rules are in an "IF ... THEN ..." format, we split each rule into an antecedent and a consequent based on the position of THEN. Note that our voting-based approach to choosing the final label for an example helps to tackle (1) negation on a label for multiclass tasks and (2) choosing the most suited label in case antecedents from multiple rules are satisfied by an example.
A.5 Different synthetic task types
We create our synthetic tasks by varying along the following axes: the negation type, the conjunction/disjunction structure, the presence of quantifiers, and the number of labels (see §6).

Algorithm 1 Label Assignment
1: Given: Task T with rule set R and label set L
2: Votes ← Zeros(|L|)
3: for rule r ∈ R do
4:     r_a : Antecedent of r
5:     r_c : Consequent of r
6:     l_r ← Label mentioned in r_c
7:     t ← Truth value of r_a
8:     if any quantifier in r then
9:         p_quant : Prob. of quantifier from Table 5
10:        Alter l_r to any label in L \ l_r with probability 1 − p_quant
11:    end if
12:    if t = True then
13:        Votes[l_r] += 1
14:    else
15:        for label l ∈ L \ l_r do
16:            Votes[l] += 1
17:        end for
18:    end if
19:    l_assigned ← argmax(Votes)
20: end for
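For reference, a Python rendering of the voting procedure in Algorithm 1; rule antecedents are modelled as predicate functions and the quantifier probabilities follow Table 5 (the exact rule encoding is an assumption).

```python
import random

QUANT_PROB = {"always": 0.95, "usually": 0.70, "often": 0.50,
              "sometimes": 0.30, "occasionally": 0.20, "rarely": 0.10, "never": 0.05}

def assign_label(example, rules, labels):
    """rules: list of (antecedent_fn, label, quantifier-or-None) tuples."""
    votes = {l: 0 for l in labels}
    for antecedent, l_r, quant in rules:
        # With probability 1 - p_quant, a quantified rule points to some other label.
        if quant is not None and random.random() > QUANT_PROB[quant]:
            l_r = random.choice([l for l in labels if l != l_r])
        if antecedent(example):
            votes[l_r] += 1
        else:
            for l in labels:
                if l != l_r:
                    votes[l] += 1
    return max(votes, key=votes.get)

labels = ["wug", "dax"]
rules = [(lambda ex: ex["number of hands"] == 2, "wug", "usually"),
         (lambda ex: ex["venomous"] == "yes", "dax", None)]
print(assign_label({"number of hands": 2, "venomous": "no"}, rules, labels))
```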
A.6 Large synthetic task collections for ablation experiment
In section §6 we describe an ablation experiment, for which we create collections of 100 tasks corresponding to each synthetic task type. Here, the task type of a collection denotes the maximum complexity of explanations in that collection. For example, for the collection 'multiclass classification with nested clauses and negation only in clause', not all of the 100 tasks need have negations or nested clauses in their explanations. This collection might contain explanations with no negations or non-nested clauses. However, it will not contain explanations that have nested clauses and negations in both clause and label.
B Real-World Tasks from UCI, Kaggle and Wikipedia
For our benchmark, we made use of 18 datasets in UCI, 7 datasets in Kaggle, and 9 tables in Wikipedia. In Table 7, we list the keywords that we use to refer to these tasks along with the URLs to the datasets/tables.
B.1 Feature Selection for Real-World Datasets
During pilot studies for the collection of explanations for CLUES-Real, we identified that annotators found it difficult to provide explanations for classification tasks with more than 5 to 6 columns. Accordingly, we reduced the number of columns in most datasets of CLUES-Real (apart from some Wikipedia tables) to 5 by choosing the top features that had maximum mutual information with the labels in the training dataset. The mutual information between the features and the label was computed using the scikit-learn package with a random state of 624.
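A sketch of this selection step with scikit-learn is shown below. The toy DataFrame and the integer encoding of categorical columns are assumptions for illustration; the random state of 624 follows the text.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Toy stand-in for the training split of one CLUES-Real task;
# the real datasets come from the UCI/Kaggle/Wikipedia sources listed in Table 7.
df = pd.DataFrame({
    "odor":  ["pungent", "none", "foul", "none", "foul", "none"],
    "color": ["red", "white", "red", "brown", "white", "brown"],
    "rings": [1, 2, 1, 2, 1, 2],
    "label": ["poisonous", "edible", "poisonous", "edible", "poisonous", "edible"],
})
X = df.drop(columns=["label"]).apply(lambda col: pd.factorize(col)[0])  # integer-encode columns
y = df["label"]

mi = mutual_info_classif(X, y, discrete_features=True, random_state=624)
top = X.columns[mi.argsort()[::-1][:5]]     # keep the 5 most informative columns
print(list(top))
```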
C Additional Analysis on Teacher-Student Performance
For the crowdsourced datasets, we show the number of explanations collected per task in Figure 11(a). On average, we obtain around 11 explanations per task. Figure 11(b) shows the relation between explanation quality (quantified by Likert scores) and the rank of the explanation. Rank denotes the order in which a teacher provided that explanation during our crowdsourced explanation collection phase. We find a weak positive correlation between quality and rank, i.e., explanations that teachers submitted later tended to receive higher ratings. Finally, we do not observe any correlation between explanation length and ratings, as indicated by Figure 11(c).
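For reference, a correlation of this kind can be computed with SciPy as in the short sketch below; the per-rank average ratings shown are made up for illustration.

```python
from scipy.stats import pearsonr

# Hypothetical per-rank average Likert ratings (rank 1 = first explanation given).
ranks = [1, 2, 3, 4, 5]
avg_rating = [3.1, 3.2, 3.4, 3.3, 3.6]

rho, pval = pearsonr(ranks, avg_rating)
print(f"Pearson rho = {rho:.2f} (p = {pval:.3f})")
```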
We also illustrate the differences between teacher and student on our tasks in §4. Here we present two additional plots showing the performance of (1) the best teacher vs. their students for each task (Figure 9) and (2) the worst teacher vs. their students for each task (Figure 10). We find that even though the best teachers often attain near-perfect accuracies on the tasks, their students perform significantly worse than them on many tasks. The explanations from the worst teachers did not help students get significantly better than random performance for the majority of the tasks, even though the students did outperform the worst teachers.

Figure 9: Best teacher vs. average of their students for tasks in CLUES-Real. Red lines indicate cases where the student performance is higher than the teacher performance. Green lines indicate cases where teachers perform better than students.

Figure 10: Worst teacher vs. average of their students for tasks in CLUES-Real. Red lines indicate cases where the student performance is higher than the teacher performance. Green lines indicate cases where teachers perform better than students.
D Reward Structure for Crowd-sourcing Tasks
Our work involves multiple stages of crowdsourcing to collect high-quality explanations for the classification tasks. We pick turkers in the US for the explanation collection and verification tasks (US, UK, NZ, and GB for the Wikipedia mining task) with a 98% HIT approval rate and a minimum of 1000 HITs approved. In Table 6, we summarize the payment structure provided to the turkers on the AMT platform for each of the stages (described in detail in §3): (1) Wikipedia mining on tables scraped from Wikipedia, (2) explanation collection for tables obtained from UCI, Kaggle, and Wikipedia, and (3) explanation validation for the collected explanations. For all three crowdsourcing tasks, the turkers were compensated fairly and the payment per task is equivalent to an hourly compensation that is greater than minimum wage (based on the median time taken by turkers).

Figure 11: (a) On average we obtain over 10 explanations per task in CLUES-Real for tasks that are crowdsourced. (b) Weak positive correlation indicating later explanations were given higher Likert scores by students. Likert ratings were averaged for each rank. (c) Near-zero correlation indicating that Likert ratings given by students were almost independent of explanation length. Likert ratings were averaged for each length. (ρ denotes the Pearson correlation coefficient in each of the plots.)
(Table 6 columns: Stage, $/HIT, Bonus.)
E Training details
In this section we provide details about the implementation of the various models, the hyperparameter settings, and the hardware and software used, along with an estimate of the time taken to train the models. Code and dataset for our paper will be made public upon publication.
E.1 Details of seen and novel tasks for CLUES-Real and CLUES-Synthetic
The full lists of seen (training) and novel (held-out) tasks for CLUES-Real and CLUES-Synthetic, along with the train/test splits used, are given later in this appendix.
We used the AdamW (Loshchilov and Hutter, 2019) optimizer, commonly used to fine-tune pretrained masked language models (MLMs). For fine-tuning the pre-trained models on our benchmark tasks, we experimented with learning rates {1e−5, 2e−5} and chose 1e−5 based on performance on the validation set of seen tasks. The batch size was kept at 2 with a gradient accumulation factor of 8. The random seed for all experiments was 42. We train all the models for 20 epochs. Each epoch comprises 100 batches, and in each batch the models look at one of the tasks in the seen split.
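A minimal sketch of this fine-tuning configuration is shown below. The toy batch and the two-epoch loop are assumptions for illustration; the learning rate, batch size, gradient accumulation factor, and random seed follow the text.

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tok = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)   # chosen from {1e-5, 2e-5}

torch.manual_seed(42)
accum_steps = 8            # gradient accumulation factor

# Toy stand-in for one seen task: (explanation + example, label) pairs, batch size 2.
texts = ["If legs equal to 2, then it is usually wug. legs: 2",
         "If legs equal to 2, then it is usually wug. legs: 4"]
labels = torch.tensor([1, 0])

for epoch in range(2):                      # the paper trains for 20 epochs of 100 batches
    for step in range(accum_steps):         # one optimizer update = 8 accumulated batches
        enc = tok(texts, return_tensors="pt", padding=True)
        loss = model(**enc, labels=labels).loss / accum_steps
        loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```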
E.5 Training times
• Training on CLUES-Real:
The baseline RoBERTa w/o Exp. model typically takes 3 seconds on average to train on 1 batch of examples. In 1 batch, the model goes through 16 examples from the tasks in the seen split. RoBERTa w/ Exp. takes around 5 seconds to train on 1 batch. ExEnt takes longer than the baselines owing to its multiple forward passes; for training on 1 batch of CLUES-Real, ExEnt took 12 seconds on average. • Training on CLUES-Synthetic: All the models take considerably less time to train on our synthetic tasks owing to the smaller number of explanations per task on average. All models took 1 second or less to train on 1 batch of examples from CLUES-Synthetic.
F Annotation interfaces
We present the different annotation templates and interfaces used for our explanation collection and verification stages in Figures 12, 13, 14, 15, and 16, respectively.

Schema for Table 8 (Species of Birds):
{ "description": "This dataset is used to predict the type of birds based on the given attributes. Each row provides the relevant attributes of a bird.", "column_names": { "size": ["categorical", ["large", "medium", "small"]], "size (number)": ["number", [10, 100]], "color": ["categorical", ["red", "blue", "green", "brown", "pink", "orange", "black", "white"]], "head": ["categorical", ["yes", "no"]], "length": ["categorical", ["tall", "medium", "short"]], "length (number)": ["number", [10, 100]], "tail": ["categorical", ["yes", "no"]], "number of faces": ["number", [1, 3]], "arms": ["categorical", ["yes", "no"]], "legs": ["categorical", [2, 4, 6, 8]], "hair": ["categorical", ["yes", "no"]], "wings": ["categorical", ["yes", "no"]], "feathers": ["categorical", ["yes", "no"]], "airborne": ["categorical", ["yes", "no"]], "toothed": ["categorical", ["yes", "no"]], "backbone": ["categorical", ["yes", "no"]], "venomous": ["categorical", ["yes", "no"]], "domestic": ["categorical", ["yes", "no"]], "region": ["categorical", ["asia", "europe", "americas", "africas", "antartica", "oceania"]] }, "targets": { "bird species": ["wug", "blicket", "dax", "toma", "pimwit", "zav", "speff", "tulver", "gazzer", "fem", "fendle", "tupa"] } }
{ "description": "This dataset is used to predict the type of an aquatic animal based on the given attributes. Each row provides the relevant attributes of an animal.", "column_names":{ "size" : ["categorical", ["large", "medium", "small"]], "size (number)" : ["number", [10, 100]], "color" : ["categorical", ["red", "blue", "green", "brown", "pink", " orange", "black", "white"]], "head" : ["categorical", ["yes", "no"]], "length" : ["categorical", ["tall", "medium", "short"]], "length (number)" : ["number", [10,100]], "tail" : ["categorical", ["yes", "no"]], "number of faces" : ["number", [1,3]], "arms" : ["categorical", ["yes", "no"]], "legs" : ["categorical", ["yes", "no"]], "hair" : ["categorical", ["yes", "no"]], "fins" : ["categorical", ["yes", "no"]], "toothed" : ["categorical", ["yes", "no"]], "venomous" : ["categorical", ["yes", "no"]], "domestic" : ["categorical", ["yes", "no"]], "region": ["categorical", ["atlantic", "pacific", "indian", "arctic"]] }, "targets": { "animal species": ["wug", "blicket", "dax", "toma", "pimwit", "zav", " speff", "tulver", "gazzer", "fem", "fendle", "tupa"] } } "description": "This dataset is used to predict if it will rain tomorrow or not based on the given attributes. Each row provides the relevant attributes of a day.", "column_names":{ "location" : ["categorical", ["sphinx", "doshtown", "kookaberra", " shtick union", "dysyen"]], "mintemp": ["number", [1,15]], "maxtemp": ["number", [17,35]], "rainfall today": ["categorical", [0, 0.2, 0.4, 0.6, 0.8, 1]], "hours of sunshine": ["categorical", [0, 4, 8, 12]], "humidity": ["number", [0,100]], "wind direction": ["categorical", ["N", "S", "E", "W", "NW", "NE", "SE", "SW"]], "wind speed": ["number", [10,85]], "atmospheric pressure": ["number", [950,1050]] }, "targets": { "rain tomorrow": ["yes", "no"] } } "description": "This dataset is used to predict the relevance (higher the better) of a bond to a user based on the given attributes. Each row provides the relevant attributes of a user.", "column_names":{ "user age": ["number", [15,65]], "user knowledge": ["categorical", [1,2,3,4,5]], "user gender": ["categorical", ["male", "female"]], "user loyalty": ["categorical", [1,2,3,4,5]], "user income": ["number", [1000,10000]], "user marital status": ["categorical", ["yes", "no"]], "user dependents": ["number", [0,3]] }, "targets": { "relevance score": ["1", "2", "3", "4", "5"] } }
Figure 2: (a) Histogram of the count of explanations corresponding to different usefulness Likert ratings. (b) Students typically perform well when taught tasks by good teachers. (c) Positive correlation in the average performance between a teacher and student for a task. (ρ denotes the Pearson correlation coefficient in each of the plots.)

Figure 4: Benchmark setup: The model is trained on a set of classification tasks using explanations. At inference, the model is evaluated zero-shot on novel tasks using only explanations for the novel tasks.

Figure 6: Zero-shot generalization performance of models on novel tasks of CLUES. (Legend: ExEnt pre-trained on CLUES-Syn, CLUES-Syn 3x, and CLUES-Syn 5x.)

Figure 7: Ablation analysis on the effect of structural and linguistic variations of explanations on the generalization ability of models. All bars indicate the relative performance gain over the RoBERTa w/o Exp. baseline.

Figure 8: Example of tasks from CLUES. The left and right tables are sample tables and explanations drawn from CLUES-Real and CLUES-Synthetic respectively.

E.4 Hardware and software specifications

All the models are coded using PyTorch 1.4.0 (Paszke et al., 2019) and related libraries such as numpy (Harris et al., 2020) and scipy (Jones et al., 2001-). We run all experiments on one of the following two systems: (1) a GeForce RTX 2080 GPU with 12 GB memory, 256 GB RAM, and 40 CPU cores; (2) a Tesla V100-SXM2 GPU with 16 GB memory, 250 GB RAM, and 40 CPU cores.
": "This dataset is used to predict the final league position of a team based on the given attributes. Each row provides the relevant attributes of a team.", "column_names":{ "win percentage":["number", [0,100]], "adjusted offensive efficiency": ["number", [0,100]], "adjusted defensive efficiency": ["number", [0,100]], "power rating": ["categorical",
Figure 12: Explanation Collection: Annotation Task Examples page.
Figure 13: Explanation Collection: Qualification Task page.
Figure 14: Explanation Collection: Main Task page.
Figure 16: Explanation Verification page.
Table 1: Statistics of tasks in CLUES

Table 2: Explanations statistics for CLUES

For brevity, we defer additional details on the use of quantifiers, label assignment using rules, and the creation of synthetic explanations to Appendix A. Overall we have 48 different task types (based on the number of classes and rule variants), using which we synthetically create 144 classification tasks (each containing 1000 labeled examples).

CATEGORY      EXAMPLE                                           REAL    SYN
Conditional   If color code ..., then ...                       15 %    100 %
Generic       Being over 50 increases the risk of a stroke.     48 %    50 %
Quantifier    ... usually means you won't have heart disease.   52 %    50 %
Negations     ... is not low.                                   16 %    50 %

Table 4: Teacher/student performance on CLUES-Real
track l_exp (the label mentioned in the explanation). For CLUES-Synthetic, we parse the templated explanations to obtain the

(Model architecture diagram residue: explanation (Exp.) and fact-table (FaT) inputs, each framed with [CLS]/[SEP] tokens, form the input representations to an MLM; the resulting entailment logits are aggregated (Agg.) into class logits and final class logits.)
Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.

Utkarsh Sharma and Naman Manchanda. 2020. Predicting and improving entrepreneurial competency in university students using machine learning algorithms. In 2020 10th International Conference on Cloud Computing, Data Science Engineering (Confluence), pages 305-309.

Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527-1536, Copenhagen, Denmark. Association for Computational Linguistics.

Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natural language quantification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 306-316, Melbourne, Australia. Association for Computational Linguistics.

Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4980-4991, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Michael Tomasello. 1999. The Cultural Origins of Human Cognition. Harvard University Press.

Ziqi Wang*, Yujia Qin*, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2020. Learning from explanations with neural execution tree. In International Conference on Learning Representations.

Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.

Sarah Wiegreffe and Ana Marasović. 2021. Teach me to explain: A review of datasets for explainable NLP. In Proceedings of NeurIPS.

Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Qinyuan Ye, Xiao Huang, Elizabeth Boschee, and Xiang Ren. 2020. Teaching machine comprehension with compositional explanations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1599-1615, Online. Association for Computational Linguistics.

Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021. CrossFit: A few-shot learning challenge for cross-task generalization in NLP. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7163-7189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413-8426, Online. Association for Computational Linguistics.

Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, and Jian Tang. 2020. Towards interpretable natural language understanding with explanations as latent variables. In Advances in Neural Information Processing Systems, volume 33, pages 6803-6814. Curran Associates, Inc.
Table 5: Probability values used for quantifiers in CLUES-Synthetic. We choose these values based on Srivastava et al. (2018).

Table 6: Payment structure for AMT Tasks (recoverable bonus entry: ¢50 per table submitted).
For CLUES-Real, we chose the tasks from Wikipedia that have very few examples to be part of the novel task set. Among the tasks from Kaggle and UCI, we kept tasks with a higher number of samples as part of the seen tasks (training tasks). We train on 70% of the labeled examples of the seen tasks and perform a zero-shot generalization test over 20% of the examples of each task in CLUES-Real. For the extremely small Wikipedia tasks (for which we do not crowdsource explanations), we use all examples for zero-shot testing.

Seen tasks (20) for CLUES-Real are:
• website-phishing
• internet-firewall
• mushroom
• dry-bean
• wine
• caesarian-section
• occupancy-detection
• vertebral-column
• student-performance
• shill-bidding
• mammographic-mass
• teaching-assistant-evaluation
• somerville-happiness
• stroke-prediction
• job-change
• campus-placement
• engineering-placement
• water-potability
• color-luminance
• proteinogenic-acid

Novel tasks (16) for CLUES-Real are:
• banknote-authentication
• tic-tac-toe-endgame
• car-evaluation
• contraceptive-method-choice
• indian-liver-patient
• travel-insurance
• entrepreneur-competency
• award-nomination-result
• coin-face-value
• coin-metal
• driving-championship-points
• election-outcome
• hotel-rating
• manifold-orientability
• soccer-club-region
• soccer-league-type

For CLUES-Synthetic, we have 96 tasks as seen (training) tasks and 48 as novel tasks. Tasks in CLUES-Synthetic that belong to the following schemas are part of the seen tasks:
• Species of Animal
• Species of Bird
• Rainfall prediction
Tasks belonging to 'Bond relevance classification' and 'League Rank Classification' were part of the novel tasks for CLUES-Synthetic. We train on 700 labeled examples of each seen task and perform a zero-shot generalization test over 200 examples of each novel task in CLUES-Synthetic.

E.2 Model parameters

• RoBERTa w/o Exp.: The number of parameters is the same as the pretrained RoBERTa-base model available in the HuggingFace library.
• RoBERTa w/ Exp.: The number of parameters is the same as the pretrained RoBERTa-base model available in the HuggingFace library.
• ExEnt: The number of parameters is the same as the pre-trained RoBERTa model fine-tuned on the MNLI (Williams et al., 2017) corpus. We obtain the pretrained checkpoint from HuggingFace.

E.3 Hyper-parameter settings

For all the transformer-based models we use the implementation of the HuggingFace library (Wolf et al., 2020). All model-based hyper-parameters are thus kept at the default settings of the HuggingFace library. We use the publicly available checkpoints to initialise the pre-trained models. For RoBERTa-based baselines we use the 'roberta-base' checkpoint available on HuggingFace. For our intermediate entailment model in ExEnt, we fine-tune a pretrained checkpoint of RoBERTa trained on the MNLI corpus ('textattack/roberta-base-MNLI'). When training on CLUES-Synthetic, we use a maximum of 64 tokens for our baseline RoBERTa w/o Exp. and ExEnt. For the RoBERTa w/ Exp. model we increase this limit to 128 tokens as it takes the concatenation of all explanations for a task. When training on CLUES-Real, we use 256 tokens as the limit for RoBERTa w/ Exp., as the real-world tasks have roughly twice as many explanations on average as the synthetic tasks.
Table 7: List of datasets and URLs that make up CLUES-Real.
Table 8: Synthetic table schema 1: Species of Birds.
Table 9: Synthetic table schema 2: Species of Animal.
Table 10: Synthetic table schema 3: Rainfall Prediction.
Table 11: Synthetic table schema 4: League Ranking Classification.
Table 12: Synthetic table schema 5: Bond Relevance Classification.
https://archive.ics.uci.edu/ml/
https://www.kaggle.com/datasets
For reference, we show snapshots of our annotation interface in Appendix §F.
59 turkers participated in this stage.
Statistics in Table 2a were obtained using the spaCy tokenizer.
https://en.wikipedia.org/wiki/Flesch_Kincaid_readability_tests
Note that teacher scores in the tables and figures do not include the 9 Wikipedia tasks, for which the authors formed the teacher pool. Students perform lower than teachers on average, as expected, since a teacher has more expertise in the task. Moreover, it is challenging to teach a task perfectly using explanations in a non-interactive setting where a student cannot seek clarifications. Additional data analysis and details of HIT compensation can be found in Appendices C and D.
Weights link: https://huggingface.co/textattack/roberta-base-MNLI
Many tasks (such as tasks created from Wikipedia tables) have fewer than 50 examples in their test split.
https://pytorch.org/
Controlled Cue Generation for Play Scripts

Alara Dirik, Hilal Donmez (hilal.donmez@boun.edu.tr), and Pinar Yanardag (yanardag.pinar@gmail.com)
Bogaziçi University, Istanbul, Turkey
arXiv:2112.06953

Abstract: In this paper, we use a large-scale play scripts dataset to propose the novel task of theatrical cue generation from dialogues. Using over one million lines of dialogue and cues, we approach the problem of cue generation as a controlled text generation task, and show how cues can be used to enhance the impact of dialogue using a language model conditioned on a dialogue/cue discriminator. In addition, we explore the use of topic keywords and emotions for controlled text generation. Extensive quantitative and qualitative experiments show that language models can be successfully used to generate plausible and attribute-controlled texts in highly specialised domains such as play scripts. Supporting materials can be found at: https://catlab-team.github.io/cuegen.
Introduction
Script generation for theater plays involves the automatic generation of a sequence of lines of dialogue and cues that are coherent as a whole. While story and plot generation are relatively popular tasks, play and movie script generation remains a largely unexplored problem. In this paper, we focus on the generation of theatrical cues from character dialogue lines. A theatrical cue can be described as an informative text that is not spoken dialogue. It can be a trigger for an action, an informative description of the stage, thoughts of the characters or body language intended to amplify the effect of the play. Cues are highly variable in context and can range from sound effects, lighting changes, the movement of characters on stage, moods, thoughts, and reactions via silent gestures. The following example illustrates how a cue is used to direct a character's action on stage and add to their spoken lines.
JOHN: I don't know what to do anymore. (JOHN turns around and leaves.)
In addition to describing the actions of individual characters, cues also describe the interactions between characters, and can be used to great impact. Another common use case for theatrical cues is to modernise and reinterpret old plays without changing the dialogue. In our work, we thoroughly investigate this use case by generating plausible cues based on the original dialogue lines. To this end, we have collected over 1500 play scripts on various topics, containing a total of 775,000 lines of dialogue and over 277,000 cues. To the best of our knowledge, we are the first to propose the novel task of generating cues from dialogues in plays.
In this work, we introduce a new task and use large-scale transformer-based language models trained on large text corpora for controlled text generation. Controlling the attributes of the generated text, such as specific topics or sentiments, remains difficult without fine-tuning the models for each attribute separately. To address this issue, we explore cue generation using the preceding dialogue and propose a cue/dialogue discriminator within the PPLM framework proposed by Dathathri et al. [2020]. We also explore other extensions such as emotion-based and topic-based text generation.
2 Related Work
Text generation
Text generation is a very popular NLP task where deep neural networks are widely used; among the most popular are sequence-to-sequence (seq2seq) models (see Sutskever et al. [2014a]) with attention (see Luong et al. [2015]). Generative adversarial networks (GANs) and autoencoders (see Wang and Wan [2018], Hu et al. [2017b]) have also been used to generate text conditioned on specific attributes. These works focus on training generative models and variational autoencoders for style transfer, which rely on learning disentangled latent representations for style and content.
Most of the work on text generation in recent years has been based on the transformer architecture (see Vaswani et al. [2017], Çelikyilmaz et al. [2020], Hu et al. [2017a], Keskar et al. [2019]), which has enabled training large-scale language models (LMs) on very large datasets and significantly improved the state-of-the-art in natural language processing, as Radford [2018] shows. BERT by Devlin et al. [2018] and GPT-2 by are among the most successful transformer-based language models. Recent studies have used BERT for conditional text generation, employing a large pre-trained language model to generate text conditioned on intent labels (see Xia et al. [2020]). Similarly, Sheng et al. [2020], Prabhumoye et al. [2020], Ziegler et al. [2019] have conducted studies on using GPT-2 to generate text with controlled attributes and biases. However, these approaches are often not useful in practice as they require the model to be fine-tuned for each specific attribute separately. In our work, we focus on plug-and-play approaches and generate text by steering pre-trained language models towards acquiring the target attributes.
Story Generation
Previous research in story generation such as Clark et al. [2018] mostly focuses on using recurrent neural networks (RNNs) and long short-term memory (LSTM) units for text generation. However, RNNs have difficulties in generating longer, coherent texts (see Sutskever et al. [2014b]), hence other works such as Martin et al. [2018a] aim to provide different semantic representations for story generation. Martin et al. [2018a] proposed dividing the automated story generation task into two subtasks: successive generation of events (event2event) and generation of human-readable sentences from events (event2sentence). The event2event model generates successive events by extracting semantic information from each sentence, and the event2sentence model translates the generated events into human-readable sentences. Controllable story generation (see Peng et al. [2018]) is another text generation method that uses an analyzer consisting of supervised classifiers and rule-based keyword extractors to extract control factors from a story corpus, and a generator that generates stories with an RNN conditioned on the control factors. While this approach can be used to generate stories that reflect the user's intent, a separate model needs to be trained for each new intent or control factor.
Interactive story generation is another research area where various machine learning methods have been proposed (see Riedl and Bulitko [2013]). Interactive story generation enables users to influence or direct stories with their inputs. Brahman et al. [2020] focused on the task of interactive story generation, where the user provides mid-level sentence abstractions in the form of cue phrases to the model during the generation process. Akoury et al. [2020] proposed another story generation system called STORIUM, where human authors query a model for suggested story continuations and edit them.
Dialogue Systems
The rise of deep learning based Natural Language Understanding (NLU) and Natural Language Generation (NLG) methods has significantly improved the performance of dialogue systems. Dialogue systems typically consist of two modules: an NLU module to extract information from user queries and an NLG module to produce relevant responses and start new dialogues. Since dialogue generation directly depends on the performance of the NLU approach used, it is critical to understand the user intent correctly. Vanzo et al. [2019]) tried to solve this problem by proposing a hierarchical multitask NLU architecture that creates a domain-independent and rich semantic representation of the user input. This approach aims to encode the structure of the user input along with the actions and arguments it contains via a self-attention mechanism, seq2seq BiLSTM encoders, and CRF tagging layers. Once the user intent is extracted, a conditional text generation method such as a conditional variational autoencoder (see d' Ascoli et al. [2020]) can be used to generate user-intent dependent responses.
Play Script Generation
The vast majority of previous work on creative text generation focuses on song lyrics generation, story generation (see Luo et al. [2019], Jain et al. [2017]), and movie plot and script generation (see Zhu et al. [2020], Martin et al. [2018b], Mangal et al. [2019]), while theater play script generation is explored to a much lesser extent. HTGAA [2017] trained a character-level RNN model on theater play scripts to generate entire plays and stage directions. However, previous work on creative text generation mainly investigates how to generate coherent, reasonable, and diverse stories and scripts. Since creating labeled datasets with the desired attributes is time-consuming and labor-intensive, this work limits the controllability of the generated texts to coarse-grained sentiments (e.g. positive, negative intent). Hence, fine-grained controllable play script generation remains an unexplored topic to the best of our knowledge.
More recently, Rosa et al. [2020] proposed THEaiTRE, a mixed framework that consists of generative language models and hierarchical generation approaches that use text summarization and machine translation methods. THEaiTRE finetunes a pre-trained GPT-2 model Radford et al. [2019] on a small dataset of formatted theater and movie scripts in English and Czech. Moreover, this work proposes to generate a new training dataset by cross-translating between Czech and English to overcome the limited amount of training data. However, it is not possible to evaluate the performance of this approach as the dataset, experimental results and generated play scripts have not been released.
Dataset
We have collected 1511 English-language play scripts with over 775,000 lines of dialogue and over 277,000 cues on a variety of themes including Comedy, Romance, Satire, and Greek. The collected play scripts are scraped from the Playscripts website 3 and usually include the title of the play, production notes, background information on the characters, and the play itself.
A play script is a highly structured text consisting of one or more acts defined by elements such as rising action, climax, and resolution. Each act consists of six or more scenes, with each scene containing conversations between 2-4 characters. While acts represent a broader storyline of interrelated events, a scene usually represents actions that take place in one place and time, and are delineated from the next scene by a curtain, a blackout, or a brief emptying of the stage. Therefore, conversations within a scene are often separate from the preceding scenes and take place between different characters.
Each scene in a play script consists of lines of dialogue and cues. In all play scripts, dialogue lines start with capitalized character names and cue lines are placed in parentheses. In our work, we pre-process raw scripts by eliminating pages that do not contain at least one line of dialogue and one line of cues. Cues are not meant to be spoken aloud by characters, and their lengths are highly variable. They contain stage directions, stage descriptions, and character descriptions that are essential to understanding the scenes, as well as the mood, feelings, and thoughts of the characters conveyed through silent expressions. Cues are valuable tools for actors to communicate with the audience and convey spoken lines/dialogue in a myriad of different ways. In addition, cues are often used to modernise and/or reinterpret plays without changing the dialogue. For example, a cold greeting as opposed to a friendly greeting can say a lot about the relationship between two characters. In addition to indicating the feelings of the characters, cues can also be stage directions such as:
(Silence as ROLAND exits stage left.) (LOWELL looks toward the stage right door.) (GRAHAM runs into the bathroom, stage right. He begins to vomit loudly. The knocking becomes even more persistent.)
A manual review of the dataset revealed that stage directions and scene changes make up a small portion of the dataset. To distinguish stage directions and scene changes from the rest of the cues, we counted the number of cues containing the word stage and found that 11K out of 227K cues contain the keyword stage. The number of character names in the cues varies widely. As shown on the left in Figure 1, some cues contain no character names, while some cues contain up to 10 characters. Cues can also describe actions that characters are supposed to perform (e.g. "Suddenly jumps up from the chair"). To analyze the categories of these actions, we examined the number of verbs that appear in the cues. The right side of Figure 1 shows that some cues contain no action, while some of them can have up to 20 actions.
Methodology
Plug and Play Language Models (PPLM) aim to leverage large pre-trained language models (LM) to generate attribute controlled text without fine-tuning or re-training the models. In the context of our work, controllable generation refers to modeling the conditional likelihood of generated text p(x|a), where a denotes desired controllable attribute(s) such as emotion, topic, sentence type/intent and x is the generated sample. PPLM plugs an attribute model p(a|x) together with a base generative model p(x) (GPT-2) and sample from the resulting conditional likelihood p(x|a) ∝ p(a|x)p(x). Therefore, it effectively creates a conditional generative model on the fly from any given attribute model, where the attribute models are either in the form of a bag-of-words (BoW) or a discriminator with a single learned layer, without any further training of the underlying base LM.
The PPLM method uses GPT-2 medium as the base LM, which is a left-to-right autoregressive model that generates one token at a time, using the preceding text as input. Given a sequence of tokens or preceding text {x 0 , · · · , x n−1 }, transformer based LMs compute the unconditional probability of the resulting sequence p(X) for all succeeding token candidates:
p(X) = ∏_{i=1}^{n} p(x_i | x_0, · · · , x_{i−1})    (1)
Moreover, the GPT-2 architecture uses the hidden representation H t to generate x t+1 , given x t . In order to steer the output of the LM, PPLM shifts the hidden representations H t towards the sum of two gradients at each generation step t: towards the higher log-likelihood of attribute a under the conditional attribute model p(a|x), and towards the higher log-likelihood of the base LM p(x). Thus, the shifted hidden representation (H t + ∆H t ) leads to a distribution of generated text that is more likely to contain the selected attribute(s). As in the original PPLM experiments, we initialize ∆H t to zero and update it with gradients from the attribute model that measures the closeness between the generated text and the desired attribute such as a topic, emotion, intent.
Furthermore, ∆H t is updated to minimize the KL divergence between the output distribution of the modified and unmodified language models to ensure fluency. In addition to minimizing KL divergence, post-norm fusion is performed similarly to Stahlberg et al. [2018] to bind the generated text to the unconditional p(x) LM distribution.
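The following is a minimal, self-contained PyTorch sketch of this gradient-based steering step. Here `lm_head` and `attr_head` are toy stand-ins for GPT-2's output layer and the single-layer attribute model (assumptions for illustration, not the released PPLM code); the step size and KL coefficient values mirror those reported in the experimental setup.

```python
import torch
import torch.nn.functional as F

hidden_dim, vocab_size, num_attrs = 64, 100, 2
lm_head = torch.nn.Linear(hidden_dim, vocab_size)    # hidden state -> next-token logits
attr_head = torch.nn.Linear(hidden_dim, num_attrs)   # hidden state -> attribute logits (cue/dialogue)

def steer_hidden(h_t, target_attr, step_size=0.04, kl_coef=0.01, num_steps=1):
    """Shift a hidden state toward an attribute while keeping the LM distribution close."""
    unmodified_logp = F.log_softmax(lm_head(h_t), dim=-1).detach()
    delta = torch.zeros_like(h_t, requires_grad=True)
    for _ in range(num_steps):
        shifted = h_t + delta
        # Attribute loss: negative log-likelihood of the target attribute under p(a|x).
        attr_logp = F.log_softmax(attr_head(shifted), dim=-1)
        attr_loss = -attr_logp[..., target_attr].mean()
        # KL term: keep the shifted next-token distribution near the unmodified one.
        shifted_logp = F.log_softmax(lm_head(shifted), dim=-1)
        kl = F.kl_div(unmodified_logp, shifted_logp, log_target=True, reduction="batchmean")
        loss = attr_loss + kl_coef * kl
        loss.backward()
        with torch.no_grad():
            grad = delta.grad
            delta -= step_size * grad / (grad.norm() + 1e-8)   # normalized gradient step
            delta.grad.zero_()
    return (h_t + delta).detach()

h_t = torch.randn(1, hidden_dim)            # stand-in for GPT-2's hidden state at step t
steered = steer_hidden(h_t, target_attr=0)  # steer toward, e.g., the "cue" class
next_token_logits = lm_head(steered)
```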
We note that the baseline PPLM framework uses only seven manually generated lists of topic words and a sentiment discriminator trained on the IMDB movie reviews dataset (see Maas et al. [2011]), which is insufficient for our task. Therefore, we use PPLM as our base framework and train a cue/dialogue sentence type discriminator to condition the generation towards cues (see Figure 2). In addition to the cue/dialogue sentence type discriminator, we introduce and experiment with two other attribute models: an automated topic modeling module and an external multi-label emotion classifier, DeepMoji (see Felbo et al. [2017]), for controlled text generation. While the dialogue/cue discriminator and the topic-based approach aim to generate appropriate cues, the emotion classifier is used to steer the generated text towards the emotion label of the input text. We describe the details of the three attribute models we use in Sections 4.1, 4.2 and 4.3.
Controlled Generation using Cue/Dialogue Discriminator
We train a binary cue/dialogue discriminator using 10% of our dataset, where the input sentences x are tagged with their corresponding labels y. The discriminator consists of a single-layer classifier that predicts the target label. Based on the sentence type given as input by the user and the classifier prediction, PPLM shifts the activations towards the higher log-likelihood of either the dialogue lines or cues as specified by the user.
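A hedged sketch of how such a single-layer discriminator could be trained on top of frozen GPT-2 representations is given below; the mean-pooling choice and the toy two-sentence batch are assumptions, while the paper's discriminator is a single learned layer trained on 10% of the dataset.

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2-medium")
tok.pad_token = tok.eos_token                      # GPT-2 has no pad token by default
lm = GPT2Model.from_pretrained("gpt2-medium").eval()
clf = torch.nn.Linear(lm.config.hidden_size, 2)    # 0 = dialogue, 1 = cue
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)

def embed(sentences):
    batch = tok(sentences, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = lm(**batch).last_hidden_state        # (B, T, H), base LM kept frozen
    mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding when pooling
    return (out * mask).sum(1) / mask.sum(1)       # mean-pooled sentence representation

# Hypothetical toy batch of one dialogue line and one cue.
sentences = ["JOHN. I don't know what to do anymore.", "(JOHN turns around and leaves.)"]
labels = torch.tensor([0, 1])

logits = clf(embed(sentences))
loss = torch.nn.functional.cross_entropy(logits, labels)
loss.backward()
opt.step()
opt.zero_grad()
```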
Controlled Generation using LDA
Unlike PPLM, where the lists of topic keywords are created manually, we create word lists by automatically extracting topics using Latent Dirichlet Allocation (LDA) (see Blei et al. [2003]). To this end, we create a cue corpus and model it as a distribution of 10 topics. We use the trained LDA model to extract topic keywords and automatically generate topic keyword lists. A target topic selected by the user is then used to steer the language generation process to maximize the log-likelihood of the extracted target topic keywords and generate cues with the target topic.
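A minimal sketch of this keyword extraction with scikit-learn follows; the toy cues and the CountVectorizer preprocessing are assumptions, while the 10-topic decomposition follows the text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical cue corpus; the real one contains over 277,000 cues.
cues = [
    "(JOHN turns around and leaves.)",
    "(Lights fade. Sound of thunder offstage.)",
    "(She smiles nervously and looks away.)",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(cues)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
topic_keywords = [
    [vocab[i] for i in topic.argsort()[::-1][:20]]   # top-20 words per topic
    for topic in lda.components_
]
print(topic_keywords[0])
```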
Controlled Generation with Emotions
Since plays contain a wide range of emotions, and not just positive or negative sentiments, we train an emotion classifier using DeepMoji, a sentiment model that predicts the emoji label of input sentences (see Felbo et al. [2017]). We use DeepMoji to predict the emojis corresponding to the given lines, and then map the predicted emojis to a subset of emotions from Plutchik's Wheel of Emotions (see Plutchik [1980]). We then use the input sentences (dialogue lines) and their corresponding emotion labels to train an emotion classifier. The trained classifier is used to steer the generation towards the target emotion label and does not necessarily generate cues.
Experimental Setup
We compare the PPLM-based extensions with fine-tuned GPT-2 and Infilling by Language Modeling (ILM) Donahue et al. [2020] baselines. For the PPLM experiments, we use the GPT-2 345M model fine-tuned on 80% of our dataset as our base generative model p(x). While the GPT-2 model is not fine-tuned in the original work, the structure and rigid syntax of play scripts require finetuning the model to generate plausible dialogues and cues. We use 10% of the dataset to perform steered inference and test the PPLM approaches, and the remaining 10% to train the conditional attribute models p(a|x): a binary cue/dialogue classifier, a multi-label emotion classifier, and for topic modeling via LDA to be used in the PPLM experiments. We also perform a simple preprocessing step to insert a white space between the punctuation and the alphanumeric characters. For the ILM experiments, we first divide our dataset into training, validation and test sets in a ratio of 80-10-10 and create infilling examples. To do this, we divide the dataset into successive triples of lines, where the lines can be either dialogues or cues in any order. We also randomly mask paragraphs, sentences, n-grams, and words with a masking probability of 3% each, resulting in a marginal token masking rate of 15%. For fine-tuning, we insert a bos token < BOS > at the beginning of each scene and an eos token < EOS > at the end of each scene to mark the beginning and end of different conversations. We filter the training and test datasets to only include consecutive dialogue-cue-dialogue triplets and use the start and end dialogue lines as input during inference.
• GPT-2+ FT : Given a line of dialogue as input, we use a GPT-2 model fine-tuned on our dataset to generate text.
• ILM: ILM enables LMs to infill variable-length spans using both preceding and subsequent text. We follow the same approach proposed in the ILM paper and fine-tune the GPT-2 small model on successive line triples following the order dialogue-cue-dialogue. The second line of the triplet is masked during the training and sampling processes since our goal is to generate cues. Once trained, infilling is performed by using the preceding and succeeding dialogue lines as inputs to the model.
• PPLM+LDA: We extract keywords using LDA and control the generation process based on the topic of the dialogue.
• PPLM+CueDisc: We train a cue/dialogue sentence type discriminator and control the generation process using this classifier.
• PPLM+Emotion: We train a multi-label emotion classifier and steer the generation process to generate text that reflects the target emotion specified by the user.
Parameter setup For the PPLM experiments, we use the official PyTorch implementation published by the authors with modifications and extensions. We use the same parameters as PPLM to fine-tune the GPT-2 model. For all PPLM experiments, we set the step size α to 0.04, the scaling coefficient for the normalization term γ to 1.0. Additionally, we keep the default values for the KL coefficient λ KL and the gamma scale γ gm , which are 0.01 and 0.95 respectively. The number of update steps m is 1 for all experiments, as we found that a larger number of update steps leads to more deterministic results. For ILM experiments, we train an ILM model with the default fine-tuning parameters specified in the Transformers library (see Wolf et al. [2019]), except that we use a batch size of 24 and a sequence length of 256. For all model experiments, we use a seed value of 0 and perform inference on a single GPU.
Quantitative Results
We use n-gram similarity and distance metrics (see Kondrak [2005]) to measure the similarity of the generated text to our reference cue corpus, which consists of 50,000 cue samples from our training set. We generate 600 samples with each model and determine the top 10 reference cues for each sample that yield the smallest Levenshtein distance to the generated text. The Levenshtein distance is defined as the minimum number of elementary edit operations required to transform one string to another. We then compute the unigram and bigram similarity (LCSR and BI-SIM) for each generated sample and closest reference cue pairs, and report the average similarity over all generated samples.
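As an illustration, a token-level Levenshtein distance and a longest-common-subsequence ratio (one common way of defining a unigram similarity such as LCSR) can be computed as below; this is a sketch of the metric family rather than the exact evaluation script.

```python
def levenshtein(a, b):
    """Minimum number of edits (insert/delete/substitute) to turn sequence a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lcsr(a, b):
    """Longest-common-subsequence length divided by the length of the longer sequence."""
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            m[i][j] = m[i - 1][j - 1] + 1 if ca == cb else max(m[i - 1][j], m[i][j - 1])
    return m[-1][-1] / max(len(a), len(b))

gen = "( JOHN turns and leaves . )".split()
ref = "( JOHN turns around and leaves . )".split()
print(levenshtein(gen, ref), round(lcsr(gen, ref), 2))
```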
PPLM+Emotion and PPLM+LDA samples are generated using randomly selected target emotions and topics respectively. As shown in Table 2, PPLM with the dialogue/cue discriminator (denoted PPLM+CueDisc) achieves the highest LCSR and BI-SIM scores, indicating that PPLM+CueDisc can successfully generate cues. The PPLM+Emotion and PPLM+LDA approaches achieve the second and third best LCSR and BI-SIM scores respectively. Since the PPLM+Emotion approach aims to generate text solely based on target emotion rather than sentence type (dialogue/cue), the results suggest that the dataset relies heavily on cues to convey target emotions, and that PPLM+Emotion therefore generates cues rather than dialogue lines. The fine-tuned GPT-2 medium model (denoted GPT-2+FT) and the Infilling by Language Modeling model (denoted ILM) have low LCSR and BI-SIM scores, suggesting that they are unable to generate relevant and complex cue structures. In addition, we measure the diversity of the text generated by each model by the number of distinct n-grams (normalized by the length of the text) as in Li et al. [2016]. We report the Dist-1, Dist-2, and Dist-3 scores for the distinct 1-, 2-, and 3-grams in Table 3. As can be seen in Table 3, PPLM+CueDisc and PPLM+Emotion are comparable or better than GPT-2+FT in generating diverse text, while ILM and PPLM+LDA perform worst. On closer inspection, we find that some of the extracted cue keywords do not refer to characters, but to stage directions such as scene changes, lighting and sound instructions. Therefore, using the keywords extracted with LDA sometimes leads to the generation of repetitive, non-character related text. Similarly, we note that ILM also tends to generate repetitive text that resembles stage directions. Some examples of scripts generated with GPT-2+FT and PPLM+CueDisc can be found in Table 1. As can be seen from the examples, the GPT-2+FT method is capable of generating plausible text, but not necessarily cues. In contrast, our method is able to generate cues with the characters that appear in the input text.
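Dist-n can be computed as the ratio of distinct to total n-grams over the generated samples; a small sketch with made-up samples follows.

```python
from collections import Counter

def distinct_n(texts, n):
    """Number of distinct n-grams divided by the total number of n-grams."""
    ngrams = Counter()
    for t in texts:
        toks = t.split()
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

samples = ["( She smiles . )", "( He smiles . )"]
print([round(distinct_n(samples, n), 2) for n in (1, 2, 3)])   # Dist-1 / Dist-2 / Dist-3
```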
Qualitative Results
We asked 20 human annotators to evaluate the performance of the models based on the coherence of the generated text and the accuracy of the cue generation. To create an evaluation dataset, we selected the best-performing PPLM-based approach (PPLM+CueDisc) along with the top competing approach (GPT-2+FT) and generated 50 random examples with each model. We then asked the evaluators to rate the generated examples based on coherence and cue accuracy in a binary manner. In the context of our work, we define coherence as both the independent plausibility of the generated text and the contextual coherence of the generated text with respect to the input sentence. Furthermore, we define cue accuracy as whether or not the text generated by the model contains a cue. As can be seen from Table 4, while GPT-2+FT generates diverse text, it fails to generate cues given an input sentence. Since the majority of the dataset consists of dialogues, it is expected that the GPT-2+FT approach is biased towards generating dialogues. On the other hand, our method achieves a high cue accuracy score while preserving the overall coherence of the conversation. However, we strongly believe that the coherence of the generated texts can be improved by better preprocessing steps and persona-based discriminators. We leave these ideas for future work.
Conclusion
In this paper, we use a large-scale play script dataset and propose the novel task of generating theatrical cues from dialogues. We approach the cue generation problem as a controlled text generation task and use a plug-and-play language model with a cue/dialogue discriminator, LDA-based topic keyword lists, and a multi-label emotion classifier to steer the language model towards the desired attributes without re-training the model. Our experiments show that language models can be successfully used to generate plausible and attribute-controlled text in highly specialized domains such as plays. In the future, we plan to explore character- and persona-based cue and dialogue generation tasks with plug-and-play models.
Figure 1: Histogram of the number of unique character names (left). Histogram of the number of verbs in cues (right).

Figure 2: An illustration of the PPLM approach with the cue/dialogue discriminator. Figure is modified from Dathathri et al. [2020].
Table 1: Model output samples for the GPT-2+FT and PPLM+CueDisc methods. Each row pairs an input dialogue snippet with a continuation from each model.

Input: OWEN. Yeah. He's awesome. WADSWORTH. You think he likes me, too?
GPT-2+FT: MEG. Yeah. He is. He has the biggest smile on earth. . . OWEN. Oh.
PPLM+CueDisc: OWEN. (Laughs, but then gets serious: )

Input: VAL. He's been looking at me. STEWART. Who is he? VAL. I don't know, never seen him.
GPT-2+FT: VAL. I said, "What's his problem?"
PPLM+CueDisc: (VAL stares back at JOE, who looks confused.)

Input: JACKIE. Okay, I can do this. Thanks.
GPT-2+FT: RENEE. You can do this? You can do this!
PPLM+CueDisc: (JACKIE starts to walk away. The other girls, all laughing, stop her.)

Additional recoverable cells (row and column assignment unclear in extraction):
CAL. My mother is dead. MADELINE. She died of a drug overdose. CAL. That's a lie.
(She pulls back the sheet.) I don't want to know how.
STEWART. Yes! Yes, it is. This is it! The moment. You know what I mean. We have to get out of here right now!
MISS BLAINE. It's not. STEWART. (Suspiciously:) What do you mean?
Table 2: LCSR and BI-SIM scores of the models. PPLM+CueDisc shows the best performance in terms of the LCSR and BI-SIM metrics.

Method          LCSR ↑   BI-SIM ↑
GPT-2+FT        0.42     0.29
ILM             0.47     0.24
PPLM+CueDisc    0.72     0.60
PPLM+LDA        0.68     0.55
PPLM+Emotion    0.69     0.57
Table 3: Dist-1, Dist-2, Dist-3 scores of the models.

Method          Dist-1 ↑   Dist-2 ↑   Dist-3 ↑
GPT-2+FT        0.32       0.71       0.82
ILM             0.18       0.62       0.72
PPLM+CueDisc    0.25       0.69       0.80
PPLM+LDA        0.20       0.58       0.72
PPLM+Emotion    0.34       0.74       0.87
Table 4: Qualitative analysis with 20 human evaluators. The evaluators are asked whether the generated texts contain any cue (cue accuracy) and are coherent.

Method          Cue Acc ↑   Coherence ↑
GPT-2+FT        32.4        66.3
PPLM+CueDisc    92.5        69.0
https://www.playscripts.com
Acknowledgements
This publication has been produced benefiting from the 2232 International Fellowship for Outstanding Researchers Program of TUBITAK (Project No: 118c321). We also acknowledge the support of NVIDIA Corporation through the donation of the TITAN X GPU.
Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. Storium: A dataset and evaluation platform for machine-in-the-loop story generation. arXiv preprint arXiv:2010.01717, 2020.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, 2003.

Faeze Brahman, Alexandru Petrusca, and Snigdha Chaturvedi. Cue me in: Content-inducing approaches to interactive story generation. arXiv preprint arXiv:2010.09935, 2020.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Elizabeth Clark, Yangfeng Ji, and Noah A Smith. Neural text generation in stories using entity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2250-2260, 2018.

Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric C. Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. Plug and play language models: A simple approach to controlled text generation. ArXiv, abs/1912.02164, 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Chris Donahue, Mina Lee, and Percy Liang. Enabling language models to fill in the blanks. ArXiv, abs/2005.05339, 2020.

Stéphane d'Ascoli, Alice Coucke, Francesco Caltagirone, Alexandre Caulier, and Marc Lelarge. Conditioned text generation with transfer for closed-domain dialogue systems. In International Conference on Statistical Language and Speech Processing, pages 23-34. Springer, 2020.

Bjarke Felbo, A. Mislove, Anders Søgaard, I. Rahwan, and S. Lehmann. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. ArXiv, abs/1708.00524, 2017.

HTGAA. How to generate (almost) anything project. http://howtogeneratealmostanything.com, 2017.

Zhiting Hu, Zichao Yang, Xiaodan Liang, R. Salakhutdinov, and E. Xing. Controllable text generation. ArXiv, abs/1703.00955, 2017a.

Zhiting Hu, Zichao Yang, Xiaodan Liang, R. Salakhutdinov, and E. Xing. Toward controlled generation of text. In ICML, 2017b.

Parag Jain, Priyanka Agrawal, Abhijit Mishra, Mohak Sukhwani, Anirban Laha, and Karthik Sankaranarayanan. Story generation from sequence of independent short descriptions. ArXiv, abs/1707.05501, 2017.

N. Keskar, B. McCann, L. R. Varshney, Caiming Xiong, and R. Socher. Ctrl: A conditional transformer language model for controllable generation. ArXiv, abs/1909.05858, 2019.

Grzegorz Kondrak. N-gram similarity and distance. In SPIRE, 2005.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and William B. Dolan. A diversity-promoting objective function for neural conversation models. In NAACL, 2016.

Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Z. Sui, and Xu Sun. Learning to control the fine-grained sentiment for story ending generation. In ACL, 2019.

Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL, 2011.

Sanidhya Mangal, Poorva Joshi, and Rahul Modak. Lstm vs. gru vs. bidirectional rnn for script generation. ArXiv, abs/1908.04332, 2019.

Lara Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark Riedl. Event representations for automated story generation with deep neural nets. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018a.
Event representations for automated story generation with deep neural nets. Lara J Martin, Prithviraj Ammanabrolu, W Hancock, S Singh, B Harrison, Mark O Riedl, AAAI. Lara J. Martin, Prithviraj Ammanabrolu, W. Hancock, S. Singh, B. Harrison, and Mark O. Riedl. Event representations for automated story generation with deep neural nets. In AAAI, 2018b.
Towards controllable story generation. Nanyun Peng, Marjan Ghazvininejad, Jonathan May, Kevin Knight, Proceedings of the First Workshop on Storytelling. the First Workshop on StorytellingNanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pages 43-49, 2018.
Emotion, a psychoevolutionary synthesis. R Plutchik, R. Plutchik. Emotion, a psychoevolutionary synthesis. 1980.
Exploring controllable text generation techniques. ArXiv, abs. A Shrimai Prabhumoye, R Black, Salakhutdinov, Shrimai Prabhumoye, A. Black, and R. Salakhutdinov. Exploring controllable text generation techniques. ArXiv, abs/2005.01822, 2020.
Improving language understanding by generative pre-training. Alec Radford, Alec Radford. Improving language understanding by generative pre-training. 2018.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Interactive narrative: An intelligent systems approach. Mark Owen Riedl, Vadim Bulitko, 34Ai MagazineMark Owen Riedl and Vadim Bulitko. Interactive narrative: An intelligent systems approach. Ai Magazine, 34(1):67-67, 2013.
Rudolf Rosa, Ondrej Dusek, Tom Kocmi, David Marecek, Tomas Musil, Patricia Schmidtova, Dominik Jurko, Ondrej Bojar, Daniel Hrbek, David Kostak, arXiv:2006.14668Artificial intelligence to write a theatre play. arXiv preprintRudolf Rosa, Ondrej Dusek, Tom Kocmi, David Marecek, Tomas Musil, Patricia Schmidtova, Dominik Jurko, Ondrej Bojar, Daniel Hrbek, David Kostak, et al. Theaitre: Artificial intelligence to write a theatre play. arXiv preprint arXiv:2006.14668, 2020.
Towards controllable biases in language generation. ArXiv, abs. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng, Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. Towards controllable biases in language generation. ArXiv, abs/2005.00268, 2020.
Simple fusion: Return of the language model. Felix Stahlberg, James Cross, Veselin Stoyanov, abs/1809.00125ArXiv. Felix Stahlberg, James Cross, and Veselin Stoyanov. Simple fusion: Return of the language model. ArXiv, abs/1809.00125, 2018.
Sequence to sequence learning with neural networks. Ilya Sutskever, Oriol Vinyals, Quoc V Le, abs/1409.3215ArXiv. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. ArXiv, abs/1409.3215, 2014a.
Ilya Sutskever, Oriol Vinyals, Quoc V Le, arXiv:1409.3215Sequence to sequence learning with neural networks. arXiv preprintIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215, 2014b.
Hierarchical multi-task natural language understanding for cross-domain conversational ai. Andrea Vanzo, Emanuele Bastianelli, Oliver Lemon, arXiv:1910.00912Hermit nlu. arXiv preprintAndrea Vanzo, Emanuele Bastianelli, and Oliver Lemon. Hierarchical multi-task natural language understanding for cross-domain conversational ai: Hermit nlu. arXiv preprint arXiv:1910.00912, 2019.
. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Attention is all you need. ArXiv, abs/1706.03762Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. ArXiv, abs/1706.03762, 2017.
Sentigan: Generating sentimental texts via mixture adversarial networks. Ke Wang, Xiaojun Wan, IJCAI. Ke Wang and Xiaojun Wan. Sentigan: Generating sentimental texts via mixture adversarial networks. In IJCAI, 2018.
Huggingface's transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, Jamie Brew, abs/1910.03771ArXiv. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771, 2019.
Cg-bert: Conditional text generation with bert for generalized few-shot intent detection. Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, Philip Yu, arXiv:2004.01881arXiv preprintCongying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, and Philip Yu. Cg-bert: Con- ditional text generation with bert for generalized few-shot intent detection. arXiv preprint arXiv:2004.01881, 2020.
Scriptwriter: Narrative-guided script generation. Yutao Zhu, R Song, J Dou, Jin Nie, Zhou, ACL. Yutao Zhu, R. Song, Zhicheng Dou, J. Nie, and Jin Zhou. Scriptwriter: Narrative-guided script generation. In ACL, 2020.
Fine-tuning language models from human preferences. D Ziegler, Nisan Stiennon, Jeffrey Wu, T Brown, A Radford, Dario Amodei, Paul Christiano, Geoffrey Irving, abs/1909.08593ArXiv. D. Ziegler, Nisan Stiennon, Jeffrey Wu, T. Brown, A. Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. ArXiv, abs/1909.08593, 2019.
Evaluation of text generation: A survey. ArXiv, abs. A Çelikyilmaz, Elizabeth Clark, Jianfeng Gao, A. Çelikyilmaz, Elizabeth Clark, and Jianfeng Gao. Evaluation of text generation: A survey. ArXiv, abs/2006.14799, 2020.
| [] |
[
"Improving Logical-Level Natural Language Generation with Topic-Conditioned Data Augmentation and Logical Form Generation",
"Improving Logical-Level Natural Language Generation with Topic-Conditioned Data Augmentation and Logical Form Generation"
] | [
"Ao Liu liu.ao@nlp.c.titech.ac.jp \nTokyo Institute of Technology\n\n",
"Congjian Luo im.congjian@gmail.com \nUniversity of Electronic Science\nTechnology of China\n",
"Naoaki Okazaki okazaki@c.titech.ac.jp \nTokyo Institute of Technology\n\n"
] | [
"Tokyo Institute of Technology\n",
"University of Electronic Science\nTechnology of China",
"Tokyo Institute of Technology\n"
] | [] | Logical Natural Language Generation, i.e., generating textual descriptions that can be logically entailed by a structured table, has been a challenge due to the low fidelity of the generation. Chen et al. (2020b) have addressed this problem by annotating interim logical programs to control the generation contents and semantics, and presented the task of table-aware logical form to text (Logic2text) generation. However, although table instances are abundant in the real world, logical forms paired with textual descriptions require costly human annotation work, which limits the performance of neural models. To mitigate this, we propose topic-conditioned data augmentation (TopicDA), which utilizes GPT-2 to generate unpaired logical forms and textual descriptions directly from tables. We further introduce logical form generation (LG), a dual task of Logic2text that requires generating a valid logical form based on a text description of a table. We also propose a semi-supervised learning approach to jointly train a Logic2text and an LG model with both labeled and augmented data. The two models benefit from each other by providing extra supervision signals through back-translation. Experimental results on the Logic2text dataset and the LG task demonstrate that our approach can effectively utilize the augmented data and outperform supervised baselines by a substantial margin. | 10.2197/ipsjjip.31.332 | [
"https://arxiv.org/pdf/2112.06240v1.pdf"
] | 245,123,731 | 2112.06240 | 15ecaa651aea142905c95b7a7027e9cdc74dbaf5 |
Improving Logical-Level Natural Language Generation with Topic-Conditioned Data Augmentation and Logical Form Generation
Ao Liu liu.ao@nlp.c.titech.ac.jp
Tokyo Institute of Technology
Congjian Luo im.congjian@gmail.com
University of Electronic Science
Technology of China
Naoaki Okazaki okazaki@c.titech.ac.jp
Tokyo Institute of Technology
Improving Logical-Level Natural Language Generation with Topic-Conditioned Data Augmentation and Logical Form Generation
Logical Natural Language Generation, i.e., generating textual descriptions that can be logically entailed by a structured table, has been a challenge due to the low fidelity of the generation. Chen et al. (2020b) have addressed this problem by annotating interim logical programs to control the generation contents and semantics, and presented the task of table-aware logical form to text (Logic2text) generation. However, although table instances are abundant in the real world, logical forms paired with textual descriptions require costly human annotation work, which limits the performance of neural models. To mitigate this, we propose topic-conditioned data augmentation (TopicDA), which utilizes GPT-2 to generate unpaired logical forms and textual descriptions directly from tables. We further introduce logical form generation (LG), a dual task of Logic2text that requires generating a valid logical form based on a text description of a table. We also propose a semi-supervised learning approach to jointly train a Logic2text and an LG model with both labeled and augmented data. The two models benefit from each other by providing extra supervision signals through back-translation. Experimental results on the Logic2text dataset and the LG task demonstrate that our approach can effectively utilize the augmented data and outperform supervised baselines by a substantial margin.
Introduction
Natural language generation (NLG) from structured data has been a long-standing research problem. Traditional NLG datasets (Novikova, Dušek, and Rieser 2017;Lebret, Grangier, and Auli 2016) focused on surface-level realization of superficial facts in the structured data. Recently, Chen et al. (2020a) released the LogicNLG dataset, which requires the generation of textual descriptions that can be logically entailed by a table. However, deep models built on this dataset exhibited the problems of low fidelity and uncontrollable content selection. Wenqing et al. (2021) proposed variational models to enhance the logical fidelity of the generated sentences, but still presented low fidelity scores on human evaluation.
An effective remedy for this is to annotate high-quality mediators to guide the generation. Chen et al. (2020b) proposed the logical-level NLG task and released another dataset referred to as Logic2text. This task requires generating a sentence based on both a table and a logical form, which promotes faithful and controllable generation compared with LogicNLG (Chen et al. 2020a). They annotated logical forms paired with corresponding textual descriptions, which resulted in an exciting boost in terms of the human-evaluated fidelity score from 20.2% to 80.4%. An example of logical-level NLG (Logic2text) is depicted in Figure 1, along with a comparison to surface-level NLG.
Nevertheless, the labor-intensive human work of pairing logical forms and textual descriptions limited the scale of the Logic2text dataset (ca. 10.8k instances), which is much smaller than those of the common benchmarks on surface-level NLG (Novikova, Dušek, and Rieser 2017; Lebret, Grangier, and Auli 2016). Pre-trained models such as GPT-2 (Radford et al. 2019) could work on the small amount of supervision data, using the rich contextual knowledge learnt from large-scale corpora; however, there is a lot of room for improvement in terms of the generation quality (Chen et al. 2020b). Moreover, we observe that each table in the Logic2text dataset is only associated with at most 3 examples. However, a table can contain abundant logical-level facts derived by various logical operations, while the dataset only covers a limited part of them. Inspired by this, we propose topic-conditioned data augmentation (TopicDA) to bootstrap synthetic examples (i.e., unpaired logical forms (LFs) and texts) from existing supervised data. Specifically, we train auxiliary topic-conditioned table-to-logic and table-to-text models by fine-tuning GPT-2 on the supervised data to generate additional logical forms and texts directly from tables. By providing the models with pre-defined logic types as topics, we generate LFs and texts with diverse logic types even from tables appearing in the original training data. As depicted in Figure 2, when we assign different topics such as superlative and comparative to an input table, the DA models can generate new logical forms or texts consistent with the given topics. Finally, we are able to mine more logical-level facts from existing tables for data augmentation without resorting to any additional resource.
Additionally, we introduce logical form generation (LG), a dual task of Logic2text that requires generating a valid logical form based on a text description and a corresponding table. Inspired by previous works on the joint learning of dual NLP tasks (Chang, Demberg, and Marin 2021; Qader, Portet, and Labbé 2019; Guo et al. 2020; Schmitt et al. 2020), we propose to solve the Logic2text and LG tasks simultaneously by iteratively generating pseudo-parallel data from the augmented data.
A subsequent challenge is that some of the augmented data can be noisy and impair the performance of semi-supervised learning. We thus incorporate a round-trip data weighting strategy to balance the weights of the different unpaired samples. We employ the round-trip BERTScore to evaluate the quality of the augmented data and weight them during the joint training. We also adopt curriculum learning (Bengio et al. 2009) to further improve the joint training.
We evaluate the proposed methods on the Logic2text dataset and its dual task LG, conducting experiments under two different settings: (1) Full data: we exploit all the supervised data for both training DA models and joint training; (2) Few-shot: we randomly sample only 1000 instances for DA and joint training. Experimental results on both automatic and human evaluation demonstrate the effectiveness of the proposed framework in leveraging augmented data. Furthermore, analysis experiments on data augmentation demonstrate that the proposed TopicDA method can generate topic-diversified data with reasonable validity. Additionally, we find that the LG model can produce silver logical form annotations for the LogicNLG benchmark (Chen et al. 2020a), suggesting that the proposed LG task can promote future work on the development of new logical-level NLG benchmarks.
Task Formalization
In Logic2text, an input consists of a table $d$ and a logical form $l$ that can be executed on $d$, and an output is a sentence description $t = [w_1, w_2, \ldots, w_n]$. We aim to train a model $P_\theta(t \mid l, d)$ to generate $t^*$ that can be supported by both the table $d$ and the logical form $l$. For Logical Form Generation (LG), we train a model $P_\phi(l \mid t, d)$ to estimate $l^*$ from the input description $t$, which is also supported by the table $d$. LG is the inverse task of Logic2text. Each $d$ can have multiple associated $(l, t)$ pairs, and each pair has a pre-defined logic type $c \in C$ indicating its logical operation, where $C$ = {count, comparative, superlative, unique, ordinal, aggregation, majority}. These logic types have different preferences on the patterns of logical forms (LFs) and texts. As shown in Figure 1, count-type LFs tend to contain eq and count functions; superlative-type texts usually have superlative words like most, least, highest, etc. For both tasks, we have the same supervision data $S = \{(c_i, d_i, l_i, t_i)\}_{i=1}^{k}$, where $k$ is the number of instances.
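For concreteness, a single supervised instance can be thought of as a tuple of these four elements; the following minimal Python sketch (with field names of our own choosing, not the released dataset schema) makes the structure explicit.

from dataclasses import dataclass

LOGIC_TYPES = ["count", "comparative", "superlative", "unique",
               "ordinal", "aggregation", "majority"]

@dataclass
class Instance:
    logic_type: str    # c: one of the seven pre-defined logic types
    table: str         # d: serialized table (caption, headers, content)
    logical_form: str  # l: linearized logical form executable on the table
    text: str          # t: reference textual description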
Proposed Approach
Our framework is composed of three main stages: (1) topic-conditioned data augmentation, for augmenting logical forms and texts; (2) round-trip data weighting, for weighting the augmented data via round-trip reconstruction scores; and (3) joint training of Logic2text and LG, to utilize the augmented data to jointly train these models. Figure 2 depicts an overview of the framework.
Base Model
Following Chen et al. (2020b), we use a pre-trained GPT-2 model as the supervised base model. We employ the same data serialization as in Chen et al. (2020b) to represent the tables and logical forms as text sequences, as shown in Figure 3. Thus, each instance has a serialized table $d$ and logical form $l$. We additionally consider the logic type $c$ as explicit prior knowledge to prompt the generation. Therefore, the objective of a Logic2text model is to generate a sequence $t^* = \{u_1, u_2, \ldots, u_N\}$:

$$t^* = \arg\max_t \prod_{i=1}^{N} P(u_i \mid u_{<i}, [c; d; l]; \theta), \tag{1}$$

where $[\,;\,]$ denotes the concatenation of multiple sequences. Similarly, for the LG task, the goal is to generate a logical form string $l^* = \{v_1, v_2, \ldots, v_M\}$:

$$l^* = \arg\max_l \prod_{j=1}^{M} P(v_j \mid v_{<j}, [c; d; t]; \phi). \tag{2}$$
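A minimal sketch of this base model with the Hugging Face Transformers library is given below; the separator tokens and truncation lengths are illustrative assumptions rather than the exact serialization used in the paper, and in practice the loss would typically be masked so that only the target-side tokens contribute.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

def serialize(logic_type, table, logical_form):
    # Illustrative prompt: logic type, serialized table, and logical form.
    return f"<type> {logic_type} <table> {table} <logic> {logical_form} <text>"

def lm_loss(logic_type, table, logical_form, text):
    # Causal LM loss over the concatenation of the prompt and the target text.
    ids = tokenizer(serialize(logic_type, table, logical_form) + " " + text,
                    return_tensors="pt", truncation=True, max_length=800).input_ids
    return model(ids, labels=ids).loss

@torch.no_grad()
def generate_description(logic_type, table, logical_form, num_beams=3):
    prompt = tokenizer(serialize(logic_type, table, logical_form),
                       return_tensors="pt").input_ids
    out = model.generate(prompt, max_length=prompt.shape[1] + 50,
                         num_beams=num_beams,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0, prompt.shape[1]:], skip_special_tokens=True)

Swapping the positions of the logical form and the text in the prompt yields the corresponding LG model.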
Figure 2: An overview of the proposed framework, including (1) topic-conditioned data augmentation (TopicDA), (2) round-trip data weighting, and (3) joint training of Logic2text and LG models. The logical forms and texts augmented in (1) are weighted in (2) and fed to (3) as unpaired data for pseudo-parallel data construction. In (3), the solid arrows indicate that the L2T/LG models generate pseudo-parallel data with the augmented monolingual logical forms/texts. The dashed arrows indicate that the generated pseudo instances are used to train the models via back-translation and self-training. Note that the table in stage (1) is the generation context for L2T/LG generation in stage (3), but we omit it here for simplicity of illustration. The illustration of (3) is inspired by Guo et al. (2021).
Topic-Conditioned Data Augmentation
Researchers have used pre-trained language models such as GPT-2 to augment text samples for text classification (Papanikolaou and Pierleoni 2020; Kumar, Choudhary, and Cho 2020) and generation tasks (Chang et al. 2021) by fine-tuning them on in-domain texts. The prior knowledge integrated in such pre-trained models leads to a considerable quality of the generated text. Different from previous work (Chang, Demberg, and Marin 2021) that bootstraps new text instances from the original texts, we seek to generate logical forms and texts from tables. To this end, we construct two conditional models, table-to-logic $P_{d2l}(l \mid d, c)$ and table-to-text $P_{d2t}(t \mid d, c)$, which require the generation of a logical form $l$ or a text $t$ directly from a table $d$ following a certain logic type $c$. A logic type serves as a topic to control the generation pattern. The objectives for these tasks are

$$\mathcal{L}_{d2l} = \mathbb{E}_{(c,d,l)\sim S}\left[-\log P_{d2l}(l \mid d, c)\right], \qquad \mathcal{L}_{d2t} = \mathbb{E}_{(c,d,t)\sim S}\left[-\log P_{d2t}(t \mid d, c)\right]. \tag{3}$$
The GPT-2 models fine-tuned on these tasks can be utilized for data augmentation through inference. Specifically, we perform inference with the trained table-to-logic and table-to-text models to generate extra LFs and texts based on a table $d_u$, where $d_u$ can be a table in the original training data or from other resources. We assign each of the seven logic types to $d_u$ to generate outputs with diverse logical patterns. As shown in Figure 2, the augmented data can be consistent with the assigned logic type.
A generated text or LF is filtered out if: (1) its length exceeds 200 tokens in Byte-Pair Encoding (BPE) (Sennrich, Haddow, and Birch 2016); (2) it is identical to an existing instance in the training data. In this work, we only consider the seven pre-defined logic types as topics. However, our method can be easily adapted to other domains and topics if it is provided with topic-dependent supervision data. For instance, we can use more fine-grained logic patterns like specific logical functions and words.
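The augmentation loop can therefore be sketched as below; generate_lf and generate_text stand for the fine-tuned table-to-logic and table-to-text generators (hypothetical wrappers around models like the one above), and topics is the list of the seven logic types.

def topic_conditioned_augment(tables, topics, generate_lf, generate_text,
                              tokenizer, train_lfs, train_texts):
    # Illustrative TopicDA: one candidate LF and one candidate text per (table, topic).
    aug_lfs, aug_texts = [], []
    seen_lfs, seen_texts = set(train_lfs), set(train_texts)
    for table in tables:
        for topic in topics:
            lf = generate_lf(topic, table)
            text = generate_text(topic, table)
            # Filter (1): drop outputs longer than 200 BPE tokens.
            # Filter (2): drop outputs identical to an existing training instance.
            if len(tokenizer(lf).input_ids) <= 200 and lf not in seen_lfs:
                aug_lfs.append({"topic": topic, "table": table, "lf": lf})
                seen_lfs.add(lf)
            if len(tokenizer(text).input_ids) <= 200 and text not in seen_texts:
                aug_texts.append({"topic": topic, "table": table, "text": text})
                seen_texts.add(text)
    return aug_lfs, aug_texts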
Round-trip Data Weighting
The augmented logical forms and texts can be noisy because they are generated from imperfect neural models. Therefore, we propose a round-trip data weighting (RT-Weight) strategy to assign per-example weights to the augmented data, which can balance their effects during unsupervised training. In particular, we use a round-trip BERTScore metric. In round-trip translation, we first translate a logical form $l$ into a text using a Logic2text model and then back-translate it to a reconstructed $\hat{l}$ with an LG model. We then compute the similarity between $l$ and $\hat{l}$ with BERTScore as the weight for this instance. The same method also applies to unlabeled text instances. Finally, we obtain a weight vector $W_l = [w_l^{(1)}, w_l^{(2)}, \ldots, w_l^{(k_l)}]$ for $U_l$ and $W_t = [w_t^{(1)}, w_t^{(2)}, \ldots, w_t^{(k_t)}]$ for $U_t$, where $k_l$ and $k_t$ are the sizes of $U_l$ and $U_t$, respectively. Our method is inspired by a line of works (Imankulova, Sato, and Komachi 2017; Khatri and Bhattacharyya 2020) on unsupervised machine translation, which use the round-trip BLEU (Papineni et al. 2002) score to filter out low-quality pseudo instances. However, a BLEU score relies on the overlapping n-grams between texts, without considering semantic-level similarity. Instead, we use BERTScore, a popular evaluation metric for text generation, which leverages the pre-trained contextual embeddings from BERT (Devlin et al. 2018) and computes cosine similarity between words in candidate and reference sentences. We adopt the F1-measure of BERTScore in practice.
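A sketch of this weighting step with the bert_score package is shown below; l2t and lg denote the trained forward and backward generators (hypothetical callables, not a released API), and the analogous routine applies to augmented texts.

from bert_score import score as bert_score

def round_trip_weights(aug_lfs, l2t, lg):
    # Weight each augmented LF by the BERTScore F1 between the original LF and
    # its reconstruction after the round trip LF -> text -> LF.
    originals, reconstructions = [], []
    for ex in aug_lfs:
        text = l2t(ex["topic"], ex["table"], ex["lf"])    # forward translation
        lf_back = lg(ex["topic"], ex["table"], text)      # back-translation
        originals.append(ex["lf"])
        reconstructions.append(lf_back)
    _, _, f1 = bert_score(reconstructions, originals, lang="en")
    return f1.tolist()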
Joint training of Logic2text and LG
After data augmentation, we obtain $U_l = \{(c_u, d_u, l_u)\}$ and $U_t = \{(c_u, d_u, t_u)\}$, where $l_u$ and $t_u$ are unpaired logical forms and text descriptions, $d_u$ is a context table, and $c_u$ indicates the assigned logic type. They are not directly usable as supervised data for Logic2text and LG because $l_u$ and $t_u$ are generated independently and are unaligned. Therefore, we aim to leverage these data to train models on both tasks.
Let $P_\theta(t \mid l, d)$ denote a Logic2text (L2T) model and $P_\phi(l \mid t, d)$ an LG model. Both models are first pre-trained on the supervised data $S$ for several epochs. We then employ unsupervised training as follows.
Back-translation Back-translation (BT) is commonly used in machine translation (MT) (Edunov et al. 2018) for augmenting pseudo-parallel data. Its core idea is to translate a target-side monolingual sentence $y$ into a pseudo source-language sentence $\hat{x}$ with a target-to-source translation model $M_{yx}$, which forms a pseudo-parallel sentence pair $(\hat{x}, y)$ that can be used to train the source-to-target model $M_{xy}$. In our case, $P_\theta(t \mid l, d)$ and $P_\phi(l \mid t, d)$ are conditionally inverse to each other, because the table $d$ acts as a condition of the conversion between the logical form $l$ and the text $t$. Therefore, we optimize the following back-translation objectives,
$$\begin{aligned}
\mathcal{L}_{BT}(\theta) &= \mathbb{E}_{(d,t)\sim U_t,\, w_t\sim W_t}\left[-w_t \log P_\theta(t \mid \hat{l}, d)\right], & \hat{l} &\sim \arg\max_l P_\phi(l \mid t, d),\\
\mathcal{L}_{BT}(\phi) &= \mathbb{E}_{(d,l)\sim U_l,\, w_l\sim W_l}\left[-w_l \log P_\phi(l \mid \hat{t}, d)\right], & \hat{t} &\sim \arg\max_t P_\theta(t \mid l, d),
\end{aligned} \tag{4}$$

where $w_l$ and $w_t$ are the corresponding weights for the augmented data.
Self-training A drawback of back-translation is that it can only improve a model with its target-side unpaired data. In our case, the L2T model is trained only on the pseudo instances constructed from $U_t$, and the LG model only leverages pseudo data from $U_l$. To fully utilize the augmented data, we incorporate a self-training scheme, where each model predicts pseudo targets for its source-side unpaired data, constructing pseudo-parallel instances to re-train itself. The self-training objectives are

$$\begin{aligned}
\mathcal{L}_{ST}(\theta) &= \mathbb{E}_{(d,l)\sim U_l,\, w_l\sim W_l}\left[-w_l \log P_\theta(\hat{t} \mid l, d)\right], & \hat{t} &\sim \arg\max_t P_\theta(t \mid l, d),\\
\mathcal{L}_{ST}(\phi) &= \mathbb{E}_{(d,t)\sim U_t,\, w_t\sim W_t}\left[-w_t \log P_\phi(\hat{l} \mid t, d)\right], & \hat{l} &\sim \arg\max_l P_\phi(l \mid t, d).
\end{aligned} \tag{5}$$
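A minimal sketch of how one weighted unsupervised pass could combine both objectives for the Logic2text student (the LG side is symmetric); teacher_lg, teacher_l2t, and l2t_loss are hypothetical wrappers around the generation and LM-loss routines sketched earlier.

def l2t_unsupervised_loss(aug_texts, aug_lfs, text_weights, lf_weights,
                          teacher_lg, teacher_l2t, l2t_loss):
    # Illustrative pass over the augmented data for the Logic2text student.
    total = 0.0
    # Back-translation: the frozen LG teacher predicts a pseudo logical form,
    # and the student is trained to reproduce the real augmented text.
    for ex, w in zip(aug_texts, text_weights):
        pseudo_lf = teacher_lg(ex["topic"], ex["table"], ex["text"])
        total = total + w * l2t_loss(ex["topic"], ex["table"], pseudo_lf, ex["text"])
    # Self-training: the frozen L2T teacher predicts a pseudo text for a real
    # augmented logical form, and the student is trained on that prediction.
    for ex, w in zip(aug_lfs, lf_weights):
        pseudo_text = teacher_l2t(ex["topic"], ex["table"], ex["lf"])
        total = total + w * l2t_loss(ex["topic"], ex["table"], ex["lf"], pseudo_text)
    return total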
Curriculum learning Furthermore, we adopt the curriculum learning (Bengio et al. 2009) strategy by sorting the augmented data in descending order of their corresponding weights. The motivation is that the models can learn from easy augmented data (in which the generated pseudo examples are cleaner) to harder ones. This setting encourages the models to learn from pseudo-parallel data of higher quality in the beginning, and gradually transit to more error-prone ones that have lower weights. We expect that this strategy can better balance the effects of the noisy augmented data.
Training scheme We adopt a teacher-student training scheme to optimize the two models. At each epoch, we have both a teacher copy and a student copy of each model. During the unsupervised training, the teacher models are frozen to generate pseudo-parallel data for the student models. At the end of each epoch, the teacher models are updated from the corresponding student models. This is similar to the epoch-level Iterative Back-Translation (IBT) scheme presented in (Xu, Niu, and Carpuat 2020; Zhang et al. 2018). We also perform teacher forcing, i.e., we fine-tune the models on the clean supervised data $S$ at the end of each epoch. A formal description of the entire framework is given in Appendix C.
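The overall scheme can be sketched as an epoch-level loop (hypothetical helper names; see also Algorithm 1 in Appendix C): teachers are frozen snapshots used only for pseudo-data generation and are refreshed from the students once per epoch.

import copy

def joint_training(l2t, lg, unsupervised_epoch, supervised_finetune, num_epochs):
    # Illustrative epoch-level teacher-student loop in the style of iterative
    # back-translation.
    for _ in range(num_epochs):
        # Freeze snapshots of the current models as teachers.
        teacher_l2t, teacher_lg = copy.deepcopy(l2t), copy.deepcopy(lg)
        teacher_l2t.eval()
        teacher_lg.eval()
        # Back-translation and self-training on the weight-sorted augmented data.
        unsupervised_epoch(l2t, lg, teacher_l2t, teacher_lg)
        # Fine-tune both students on the clean supervised data at the end of the epoch.
        supervised_finetune(l2t, lg)
    return l2t, lg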
Experiments
Datasets
We conduct the main experiments on the Logic2text (L2T) dataset (Chen et al. 2020b), a crowd-sourced logical-level NLG dataset containing 10.8k instances split into 8566/1095/1092 for train/val/test. The input of each instance is a database-like table and a logical form that describes a logical-level fact in the table, and the output is a text description. Each table is associated with 1 to 3 instances with different logic types. We reuse the L2T dataset to construct a logical form generation (LG) semantic parsing dataset, in which a logical form is generated from a table and a text description.
LG follows the same train/val/test split as L2T. We experiment on (1) the full data setting, where all the supervised data are used for data augmentation and joint training, and (2) the few-shot setting, in which we randomly choose 1k training samples (according to the ratio of logic types in the original dataset) from the original supervision data for training DA models and joint training. However, we use all the tables in the original supervision data for data augmentation, which simulates the scenario where additional tables are incorporated. Table 1 lists the statistics of the augmented data.
Evaluation Metrics
We use BLEU-4 (computed with multi-bleu.pl) and ROUGE-1, 2, 4, L (computed with rouge-1.5.5) to evaluate the models on the Logic2text task, following Chen et al. (2020b). For the LG task, we adopt the Logical Form Accuracy (LF Acc.) and Execution Accuracy (Exec. Acc.) metrics in a similar setting to the semantic parsing dataset WikiSQL (Zhong, Xiong, and Socher 2017). LF Acc. is the accuracy of the generated logical forms that have an exact string match with the gold references. Exec. Acc. relaxes the criterion of LF Acc.: if the generated logical form can be successfully executed on the table, it counts as a correct prediction. Exec. Acc. has the downside that the generated logical form may not be consistent with the input text, but happens to be supported by the table.
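The two LG metrics can be sketched as follows; execute_logical_form is a hypothetical interface to the logical-form executor, and we read "successfully executed" as executing without error and evaluating to True, which is an assumption of this sketch rather than a description of the official evaluation script.

def lf_accuracy(predictions, references):
    # Exact string match between generated and gold logical forms.
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

def execution_accuracy(predictions, tables, execute_logical_form):
    # Relaxed metric: a prediction counts as correct if it executes on its table,
    # even though it may not be consistent with the input text.
    correct = 0
    for lf, table in zip(predictions, tables):
        try:
            if execute_logical_form(lf, table):  # hypothetical executor
                correct += 1
        except Exception:
            pass  # malformed logical forms simply count as incorrect
    return correct / len(predictions)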
Model Configuration
We implement our model on the Huggingface Transformers library (Wolf et al. 2020) and PyTorch (Paszke et al. 2019). All experiments are conducted on NVIDIA 2080ti GPUs. We use GPT-2-small as our base model and the Adam (Kingma and Ba 2014) optimizer. For base models without semi-supervised learning, we set the batch size to 4 and the learning rate to $3 \times 10^{-5}$. For semi-supervised experiments, we set the batch size to 2 and the learning rate to $2 \times 10^{-5}$ because of the limit of the GPU memory. We employ beam search with a beam size of 3 for decoding in both data augmentation and semi-supervised learning. The hyperparameters and best checkpoints of the Logic2text models are chosen based on the BLEU score on the validation set, and for LG, they are chosen based on the Logical Form Accuracy on the validation set. Refer to Appendix A for more detail.
Models for Comparison
We compare the proposed method with the previous supervised baselines (Chen et al. 2020b) on Logic2text.
Template: Manually crafted generation templates for different logic types, based on logical forms.
Seq2seq+att: An adapted seq2seq model with an attention mechanism.
Pointer generator: A copy mechanism is added to Seq2seq+att to allow the model to copy from the inputs, which is crucial for fidelity-preserving NLG tasks with abundant entity names and values.
Graph2seq+copy: Graph2seq (Xu et al. 2018) builds a graph-based encoder to encode the logical forms, with the copy mechanism.
Transformer+copy: This approach is based on the vanilla Transformer (Vaswani et al. 2017) network with an extra copy mechanism.
GPT-2: A pre-trained GPT-2 model is fine-tuned on Logic2text, where the input tables and logical forms are represented as text sequences. This is the state of the art on Logic2text.
Moreover, we re-implement the GPT-2 model with the Huggingface Transformers library, showing higher training speed than theirs. We adapt the GPT-2 model to both Logic2text and LG. As described in Section 3.1, we add logic types as additional inputs in our GPT-2 implementation.

Main Results
Table 2 summarizes the main results on the test splits of Logic2text and LG. We can observe that the models based on pre-trained GPT-2 completely outperform previous neural models without pre-training. The GPT-2 model we implemented obtains generally better results than the GPT-2 model implemented by Chen et al. (2020b) on all the metrics of Logic2text. This demonstrates the additional merit of using logic type information. Our full model with augmented data outperforms the base model by almost 1 BLEU score and more than 0.5 points on all ROUGE scores. However, our full model shows relatively trivial improvements over the base model on Logical Form Accuracy (LF Acc.) and Execution Accuracy (Exec. Acc.). This is probably because LG is not as difficult as Logic2text and the full supervised data are already enough for training a powerful LG model. Moreover, we evaluated the GPT-2 and Ours models on the few-shot subset of 1k training instances. The results of Ours (1k) are remarkably better than GPT-2 (1k) on all metrics of Logic2text and LG, indicating the particular benefit of our method in low-resource scenarios. It is worth noting that GPT-2 is already a powerful model pre-trained on large-scale corpora and that our approach can offer additional improvements with augmented in-domain data.
Ablation study
To validate the effectiveness of our joint training method, we conduct ablation studies on four ablated variants of the full model with full supervised data: (i) -ST: remove the self-training part in Equation (5); (ii) -BT: remove the back-translation part in Equation (4); (iii) -order: remove the curriculum learning setting; (iv) -weight: remove data weighting and curriculum learning, which means treating all augmented data equally.
Logic2text results As shown in Table 3, removing any of the components causes a drop in performance on Logic2text. In particular, removing BT drastically hurts the performance, making the model unable to compete with the base model. This is reasonable because removing BT loses the interaction between the Logic2text and LG models, which are then only trained separately via self-training. This observation implies the importance of the joint training of Logic2text and LG. We also observe that self-training does not contribute to the final result as much as back-translation does. This is reasonable as self-training is based on the prediction of each model itself, making the model apt to learn its own mistakes, while BT enables each model to learn from supervision signals produced by the opposite model. Moreover, the curriculum learning setting also consistently contributes to the performance. The data weighting is also essential for achieving the best performance.
LG results All the components contribute to the LF Acc., whereas BT is the most important component. However, removing ST or weighting enhances Exec. Acc., possibly because Exec. Acc. is an approximated metric as defined in Section 4.2. The observations suggest the need for better evaluation metrics for LG.
Human Evaluation
We conduct human evaluation to further test the quality of Logic2text generation. We randomly sampled 100 instances from the generations of four models as described in Section 4.5: (1) GPT-2 (full), (2) Ours (full), (3) GPT-2 (1k) and (4) Ours (1k), along with (5) the gold references. We follow Chen et al. (2020b) to evaluate on two metrics: (1) factual correctness, i.e., whether the generated description is factually supported by the table, which is also referred to as logical fidelity; (2) semantic correctness, i.e., whether the generated description is consistent with the meaning of the logical form. We ask 3 human experts (computer science graduate students who are familiar with the logical form schema) to evaluate each example and take votes of the results as the final decisions, i.e., a sample is judged as correct only if at least two people agree with it. We present the accuracies of both metrics in Table 4. As can be observed, our methods outperform the base model GPT-2 under both full-data and few-shot settings, which is generally consistent with the automatic evaluation results. In particular, Ours (1k) outperforms GPT-2 (1k) on both metrics by over 10%.
Quality of Data Augmentation
Here, we analyze whether TopicDA can generate high-quality data, based on two metrics evaluated on the augmented data in the full-data setting.
Topic Consistency The main motivation of our TopicDA method is to use logic types as topics to encourage the DA models to generate topic-diversified data. Therefore, we analyze whether the augmented data are consistent with the pre-assigned topic. To realize this evaluation, we train two auxiliary topic classifiers, LF-CLR and Text-CLR, for classifying the logic type of a logical form and of a textual description, respectively. The two classifiers are pre-trained BERT-base (Devlin et al. 2018) models fine-tuned on the training set of Logic2text and then validated on the test set. The validation accuracy of LF-CLR reaches 100% and Text-CLR achieves 97.7%. These classifier models are then used to evaluate whether the augmented data are consistent with their assigned logic types during augmentation. The evaluation results listed in Table 5 suggest that the augmented data are generally topically consistent, with accuracies of 98.52% and 94.30% for LFs and texts, respectively. This demonstrates the effectiveness of our TopicDA model.
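Such a check can be sketched with a generic Transformers text classifier; the checkpoint below is the base BERT model, and the fine-tuning on logic-type labels (as well as the label order) is assumed rather than shown.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["count", "comparative", "superlative", "unique",
          "ordinal", "aggregation", "majority"]

clf_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))  # fine-tuned weights assumed

@torch.no_grad()
def topic_consistency(augmented_samples, field="text"):
    # Fraction of samples whose predicted logic type matches the assigned topic.
    correct = 0
    for sample in augmented_samples:
        inputs = clf_tok(sample[field], return_tensors="pt", truncation=True)
        pred = clf(**inputs).logits.argmax(dim=-1).item()
        correct += int(LABELS[pred] == sample["topic"])
    return correct / len(augmented_samples)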
Factual Correctness
The augmented logical forms and texts can be noisy as we described in Section 3.3. Similarly to the factual correctness accuracy defined in Section 4.7, we validate whether the augmented data can be exactly supported by their assigned tables. For logical forms, we directly execute them on the table and compute the execution accuracy. For texts, we back-translate them to logical forms with an LG model and then execute the logical forms on the tables. The accuracy of factual correctness is 11.75% and 21.03% for logical forms and texts, respectively. This result is reasonable because the factual correctness is evaluated with the exact match of the logical form execution, which allows no ambiguity in the natural language arguments.
Validity of Logical Forms
We also test the validity of the generated logical forms. Out of the 29,232 augmented logical forms (string-type) in the full-data setting, 27,648 (94.58%) can be parsed into valid logical forms without explicit structural errors (e.g., incorrect functions, misplaced punctuation, and mismatches between the number of arguments and functions); 15,951 (54.57%) can be successfully executed on the corresponding tables regardless of the correctness of the outputs, i.e., we can obtain a Boolean result for a logical form, but the result may be correct or wrong. This demonstrates the effectiveness of our seq2seq model for logical form generation. Although the logical forms may not be factually supported by the context tables, we still find them beneficial to the logic-text conversion because of their general validity. Some qualitative examples of TopicDA are provided in Appendix D.
Effects of LG for Annotation
In this experiment, we analyze whether the LG task is beneficial to the annotation work of new logical-level NLG datasets. We choose the LogicNLG (Chen et al. 2020a) dataset, which aims to generate logical descriptions from tables without any intermediate logical forms. Neural models perform poorly on this dataset owing to the low fidelity and uncontrollable content selection, as described in Section 1. We use the LG model that we built to generate logical forms (LFs) based on the tables and textual statements in LogicNLG. 3 We adopt the same GPT-TabGen (sm) model in (Chen et al. 2020a), with the generated LFs as additional inputs similarly to the case of the Logic2text. The model with additional LFs shows substantial improvements on LogicNLG, as shown in Table 6. The results are not exactly comparable, because we introduce extra LFs for both training and test sets. However, they still reveal that such intermediate logical forms play an important role for a better logical NLG system. We expect future work on utilizing LG for annotating benchmark data.
Related Work
Back-translation is a popular method for semi-supervised natural language generation, which has been proven effective in machine translation (Edunov et al. 2018). Iterative back-translation (IBT) (Hoang et al. 2018; Guo et al. 2021) is an extension of back-translation, in which forward and backward models are trained together to generate pseudo-parallel instances for each other. A similar line of studies (Su, Huang, and Chen 2020; Tang et al. 2017) adopts dual learning, which incorporates the inference process of both models into training via reinforcement learning. IBT and dual learning are based on the same idea, namely to jointly solve tasks with duality as used in machine translation. This idea is currently popular in data-to-text generation (Chang, Demberg, and Marin 2021; Qader, Portet, and Labbé 2019; Guo et al. 2020; Schmitt et al. 2020). Our work adopts a similar idea to model the conditional duality between Logic2text (Chen et al. 2020b) and Logical Form Generation. We adopt an IBT-style joint training scheme without back-propagation through the inference models. We also incorporate a self-training strategy to combine the advantages of both sides of unpaired data. Different from previous works (Qader, Portet, and Labbé 2019; Su, Huang, and Chen 2020; Guo et al. 2020) that directly exploited off-the-shelf unpaired data, Chang et al. (2021) posed the problem that it is unrealistic to have so many unpaired texts for data-to-text tasks, and adopted language models (LMs), i.e., GPT-2 (Radford et al. 2019), to augment additional texts. The Logic2text task further toughens the problem because we must augment logical forms and textual statements supported by the tables, which suggests the difficulty of applying dual learning.
In this work, we realize the data augmentation from existing tables without the availability of additional resources. Our work is also related to (Dou, Anastasopoulos, and Neubig 2020) that provides useful insights for data selection in IBT.
Our work adopts a round-trip BERTScore metric that can measure the quality of both texts and logical forms.
Conclusion
We studied the logical-level NLG task with limited supervision data. We proposed a topic-conditioned data augmentation method to generate logical forms and textual descriptions with GPT-2. We also introduced logical form generation as a dual task of logical-level NLG, and proposed a joint semi-supervised learning approach to improve these two tasks with augmented data. The experimental results show the effectiveness of the proposed method, especially in low-resource settings. For future work, we seek to apply our method to annotate new logical-level NLG benchmarks.
A Configuration Details of Models
Here we provide detailed configurations of our models: our implementation is based on Huggingface Transformers v3.1.0's GPT-2-small model. The word embeddings and positional embeddings are frozen during training. The input sequence, i.e., the concatenation of a logic type, caption, headers, table content, and logical form, has a maximum of 800 BPE tokens, in which the maximum length for table content is 400, 200 for the logical form, and 50 for the textual description. If the maximum lengths of a batch of data are smaller than the pre-defined ones, they are set to the smaller ones. We experimented on a set of combinations of batch size (bs) and learning rate (lr) by manually tuning on the validation set results. With trial and error on bs = {1, 2, 4, 8, 20} and lr in [1e-5, 3e-4] with step size 1e-5, we found (bs=2, lr=2e-5), (bs=4, lr=3e-5), and (bs=8, lr=5e-5) consistently good. We adopt (bs=2, lr=2e-5) for the full model due to the extra memory cost of pseudo-data generation. We use gradient clipping with the L2-norm and a threshold of 5.0. We fix the random seed to 42 for all experiments. Because of the large experimental cost of semi-supervised learning, we did not perform multiple runs for all models. In Table 7, we show the results based on 5 runs with random seeds (42, 10, 11, 101, 111) to compare Ours (full) and Ours (1k).
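These settings can be summarized in a small configuration sketch (values collected from this appendix and Section 4.3; the dictionary itself is only an illustration):

CONFIG = {
    "base_model": "gpt2",            # GPT-2-small, Transformers v3.1.0
    "max_input_tokens": 800,         # logic type + caption + headers + table + LF
    "max_table_tokens": 400,
    "max_logical_form_tokens": 200,
    "max_text_tokens": 50,
    "batch_size": 2,                 # (4, 3e-5) and (8, 5e-5) also worked well
    "learning_rate": 2e-5,
    "optimizer": "Adam",
    "grad_clip_norm": 5.0,           # L2-norm gradient clipping
    "random_seed": 42,
    "beam_size": 3,
}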
For the full-data setting, we pre-train our joint models on supervised data for 1 epoch and fine-tune for 1 epoch at the end of each unsupervised training epoch. For the few-shot setting, since supervised data are sparse, we pre-train for 5 epochs and fine-tune for 3 epochs. On an NVIDIA 2080ti GPU, it takes about 8 hours to finish one unsupervised epoch and around 30 min for a pre-training/fine-tuning epoch, including the time of validation.
B Effectiveness of Round-trip Data Weighting
In Section 4.8, we measured the factual accuracy of the augmented data, which showed that 11.75% of the logical forms and 21.03% of the texts are logically supported by the tables. However, we adopted a soft data weighting approach instead of hard data filtering based on factual correctness. This is because factual correctness is evaluated based on exact-match execution of the augmented logical forms and of the logical forms back-translated from the augmented texts. Therefore, it may strictly filter out some logically correct examples. Additionally, even if the logical forms and texts are not supported by the tables, they may still help the models learn the conversion between LFs and texts. In comparison, our round-trip data weighting method tends to assign lower scores to low-quality augmented data while still considering their potential effects. To analyze whether round-trip data weighting can measure the quality of augmented data, we split the augmented dataset ("All") into "Correct" and "Incorrect" subsets based on exact factual correctness. In Table 8, we show that "Correct" data tend to have higher round-trip weights.
Empirically, we also find it beneficial to use more soft-filtered data instead of less hard-filtered data. We show some qualitative examples in Appendix D to present the potential effects of imperfect augmented data.
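The comparison reported in Table 8 can be reproduced with a few lines, assuming every augmented example carries its round-trip weight and a Boolean factual-correctness flag (field names are illustrative):

from statistics import mean, pstdev

def weight_stats(examples):
    # Mean and standard deviation of round-trip weights for the whole set and
    # for its factually correct / incorrect subsets.
    subsets = {
        "All": [ex["weight"] for ex in examples],
        "Correct": [ex["weight"] for ex in examples if ex["factually_correct"]],
        "Incorrect": [ex["weight"] for ex in examples if not ex["factually_correct"]],
    }
    return {name: (mean(ws), pstdev(ws)) for name, ws in subsets.items() if ws}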
C A Formal Algorithm of Proposed Method
Algorithm 1: Training procedure
1: Input: Augmented logical form dataset $U_l$; augmented text dataset $U_t$; supervised dataset $S$.
2: Output: A Logic2text model $P_\theta$; an LG model $P_\phi$.
Pre-training
3: Initialize $P_\theta$ and $P_\phi$ with pre-trained GPT-2.
4: Train $P_\theta$ and $P_\phi$ on $S$.
Data weighting
5: Compute data weights $W_l$ for $U_l$ and $W_t$ for $U_t$ via round-trip BERTScore.
6: Sort $U_l$ and $U_t$ in decreasing order of $W_l$ and $W_t$, respectively.
Joint training
7: repeat
8:   Optimize $P_\theta$ and $P_\phi$ via back-translation on $U_l$ and $U_t$, according to Equation (4).
9:   Optimize $P_\theta$ and $P_\phi$ via self-training on $U_l$ and $U_t$, according to Equation (5).
10:  Fine-tune $P_\theta$ and $P_\phi$ on $S$.
11: until convergence
D Qualitative Examples of TopicDA
We list two examples of our TopicDA method which show perfect quality in terms of topic consistency and logical fidelity, i.e., they are factually supported by the tables. In Figure 4, we show an augmented logical form generated by assigning the logic type superlative to the given table, which describes correct information in the table and matches the given topic. We also list its forward and round-trip translations, which were used to compute its round-trip data weight. These translations are of considerable quality, indicating a high data weight during semi-supervised training. Similarly, Figure 5 demonstrates a good example of text augmentation and its translations. However, sometimes the DA models can generate factually incorrect outputs. As shown in Figure 6, although the augmented text is consistent with the assigned topic "unique", the correct date should be "august 12" instead of "august 19". Nonetheless, its forward-translation logical form is syntactically correct and semantically consistent with the text, which suggests that this augmented text may still benefit the models on the translation between logical forms and texts. Our data weighting method will also assign a high score to this example because the round-trip translation perfectly matches the original text.
E Qualitative Examples of Logic2text
Here, we demonstrate some qualitative examples generated from random samples in the test set of Logic2text. We list the generated descriptions of the five models we compared in human evaluation: Gold, GPT-2 (full), Ours (full), GPT-2 (1k), and Ours (1k). In Figure 7, we observe exciting fluency and fidelity in most generations, except the generation of GPT-2 (1k), which mistakenly replaces "1995" with "1991", possibly because GPT-2 (1k) was confused by the multiple entities named "1991" appearing in the complicated logical form. In Figure 8, the logical form belongs to the ordinal logic type, whose logical semantics are more difficult for a model to identify. As a result, GPT-2 (full) failed to generate a factually correct sentence because it mistakenly treated "stephin merritt discography" as an album. In contrast, the generation of Ours (full) is both factually and semantically correct. Ours (1k) failed to detect the key column name "year" and generated the hallucinated words "charting single". However, it still interpreted the meaning of the function "nth argmax" in the logical form as "second highest", while GPT-2 (1k) totally misunderstood the meaning of the logical form and generated a hallucinated sentence. We can observe that although these GPT-2-based models can basically generate highly fluent sentences, they are not robust enough to generate descriptions faithful to both the table and the logical form. However, these examples demonstrate the effectiveness of our data augmentation framework and of using more training examples. We can also find that the logical forms make it easier to interpret the errors made by the models than direct table-to-text generation. It is also interesting how the models translate the compositional logical forms into natural language, and it could be a promising direction to study the compositional generalization problem of Logic2text.
F Details of Human Evaluation
In Table 9, we report the Inter-Annotator Agreement (IAA) scores (Fleiss' Kappa) among our three annotators w.r.t. their ratings of each entry in Table 4. The averaged IAA scores are 0.52 w.r.t. the rating of factual correctness and 0.53 w.r.t. semantical correctness, showing a reasonable agreement. The IAA scores are much lower for the rating of Gold References, probably because the errors in gold references are rare and subtler to judge.
Figure 1: Comparison between surface-level NLG and logical-level NLG (Logic2text). Logic2text generates logical descriptions from a table based on an annotated logical form with diverse logic types. The two examples here are with the logic types count and superlative. Function nodes are shown in blue. This example was modified from Chen et al. (2020a).
Figure 3: Base model for Logic2text and LG. The input of Logic2text is the concatenation of the logic type, table caption, table column headers, table content, and linearized logical form. The output is the textual description. For LG, the positions of the logical form and text are switched.
Figure 4: An example of an augmented logical form generated via TopicDA.
Figure 5: An example of an augmented text generated via TopicDA.
Figure 6: An incorrect example of an augmented text generated via TopicDA.
Figure 7: Example one of Logic2text. The incorrect contents are marked in red.
Figure 8: Example two of Logic2text. The incorrect contents are marked in red.
Table 1: Statistics of data augmentation (DA). Note that we used all 4554 tables in the original training set for DA inference even for the few-shot setting.
Table 2: Main results on the test splits of Logic2text and LG.

Models     | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-4 | ROUGE-L | LF Acc. | Exec. Acc.
Full model | 32.68  | 65.74   | 41.54   | 19.16   | 55.50   | 67.95   | 89.01
-ST        | 32.53  | 65.35   | 41.23   | 18.82   | 55.16   | 67.40   | 89.38
-BT        | 31.74  | 64.67   | 40.60   | 18.45   | 54.78   | 64.56   | 87.09
-order     | 32.48  | 65.21   | 41.45   | 19.11   | 55.14   | 66.67   | 87.73
-weight    | 31.93  | 65.02   | 40.60   | 18.49   | 54.45   | 66.85   | 89.19
(BLEU-4 and the ROUGE scores are Logic2text metrics; LF Acc. and Exec. Acc. are LG metrics.)

Table 3: Ablation results on the test sets of Logic2text and LG.
Table 4: Human evaluation on 100 sampled instances from the test set of Logic2text.
              | Topic Acc. | Factual Acc.
Logical forms | 98.52%     | 11.75%
Texts         | 94.30%     | 21.03%

Table 5: Evaluation on the quality of the augmented data.
Table 6: Results on the test split of LogicNLG.
Table 7: Results on the test sets of Logic2text and LG, where the mean and standard deviation are computed over 5 runs.

Zhang, Z.; Liu, S.; Li, M.; Zhou, M.; and Chen, E. 2018. Joint training for neural machine translation models with monolingual data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Zhong, V.; Xiong, C.; and Socher, R. 2017. Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning. CoRR, abs/1709.00103.
              | All       | Correct   | Incorrect
Texts         | 0.65±0.20 | 0.70±0.19 | 0.64±0.20
Logical forms | 0.86±0.16 | 0.92±0.12 | 0.87±0.13

Table 8: Mean and standard deviations of the data weights for different subsets of the augmented data.
Models       | IAA w.r.t. Factual ratings | IAA w.r.t. Semantical ratings
Gold         | 0.15 | 0.28
GPT-2 (full) | 0.52 | 0.47
Ours (full)  | 0.63 | 0.58
GPT-2 (1k)   | 0.64 | 0.66
Ours (1k)    | 0.66 | 0.68
Avg.         | 0.52 | 0.53

Table 9: IAA w.r.t. the ratings of each entry in Table 4.
Note that although Logic2text and LogicNLG have many common tables, the texts in Logic2text are separately annotated and do not overlap with those of LogicNLG.
Acknowledgements
This paper is based on results obtained from a project, JPNP18002, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
| [] |
[
"Few-shot Conformal Prediction with Auxiliary Tasks",
"Few-shot Conformal Prediction with Auxiliary Tasks"
] | [
"Adam Fisch ",
"Tal Schuster ",
"Tommi Jaakkola ",
"Regina Barzilay "
] | [] | [] | We develop a novel approach to conformal prediction when the target task has limited data available for training. Conformal prediction identifies a small set of promising output candidates in place of a single prediction, with guarantees that the set contains the correct answer with high probability. When training data is limited, however, the predicted set can easily become unusably large. In this work, we obtain substantially tighter prediction sets while maintaining desirable marginal guarantees by casting conformal prediction as a meta-learning paradigm over exchangeable collections of auxiliary tasks. Our conformalization algorithm is simple, fast, and agnostic to the choice of underlying model, learning algorithm, or dataset. We demonstrate the effectiveness of this approach across a number of few-shot classification and regression tasks in natural language processing, computer vision, and computational chemistry for drug discovery. | null | [
"https://arxiv.org/pdf/2102.08898v2.pdf"
] | 231,942,682 | 2102.08898 | 36a2c27ffa72c05c2a17dc90b7c54e492b88ba01 |
Few-shot Conformal Prediction with Auxiliary Tasks
Adam Fisch
Tal Schuster
Tommi Jaakkola
Regina Barzilay
Few-shot Conformal Prediction with Auxiliary Tasks
We develop a novel approach to conformal prediction when the target task has limited data available for training. Conformal prediction identifies a small set of promising output candidates in place of a single prediction, with guarantees that the set contains the correct answer with high probability. When training data is limited, however, the predicted set can easily become unusably large. In this work, we obtain substantially tighter prediction sets while maintaining desirable marginal guarantees by casting conformal prediction as a meta-learning paradigm over exchangeable collections of auxiliary tasks. Our conformalization algorithm is simple, fast, and agnostic to the choice of underlying model, learning algorithm, or dataset. We demonstrate the effectiveness of this approach across a number of few-shot classification and regression tasks in natural language processing, computer vision, and computational chemistry for drug discovery.
Introduction
Accurate estimates of uncertainty are important for difficult or sensitive prediction problems that have variable accuracy (Amodei et al., 2016;Jiang et al., 2012;Angelopoulos et al., 2021). Few-shot learning problems, in which training data for the target task is severely limited, pose a discouragingly compounded challenge: in general, not only is (1) making accurate predictions with little data hard, but also (2) rigorously quantifying the uncertainty in these few-shot predictions is even harder.
In this paper, we are interested in creating confident prediction sets that provably contain the correct answer with high probability (e.g., 95%), while only relying on a few in-task examples. Specifically, we focus on conformal prediction (CP)-a model-agnostic and distribution-free methodology for creating confidence-based set predictions (Vovk et al., 2005). Concretely, suppose we have been given n examples, (X j , Y j ) ∈ X × Y, j = 1, . . . , n, as training data, that have been drawn exchangeably from some underlying distribution P . Let X n+1 ∈ X be a new exchangeable test example for which we would like to predict Y n+1 ∈ Y. The aim of conformal prediction is to construct a set-valued output, C (X n+1 ), that contains Y n+1 with distribution-free marginal coverage at a significance level ∈ (0, 1), i.e., P (Y n+1 ∈ C (X n+1 )) ≥ 1 − .
(1)
A conformal model is considered to be valid if the frequency of error, Y_{n+1} ∉ C_ε(X_{n+1}), does not exceed ε. The challenge for few-shot learning, however, is that as n → 0, standard CP methods quickly result in outputs C_ε(X_{n+1}) so large that they lose all utility (e.g., a trivially valid classifier that returns all of Y). A conformal model is only considered to be efficient if E[|C_ε(X_{n+1})|] is relatively small.
In this work, we approach this frustrating data sparsity issue by casting conformal prediction as a meta-learning paradigm over exchangeable collections of tasks. By being exposed to a set of similar, auxiliary tasks, our model can learn to learn quickly on the target task at hand. As a result, we can increase the data efficiency of our procedure, and are able to produce more precise-and confident-outputs.
Specifically, we use the auxiliary tasks to meta-learn both a few-shot model and a quantile predictor. The few-shot model provides relevance scores (i.e., nonconformity scores, see §3.1) for each possible label candidate y ∈ Y, and the quantile predictor provides a threshold rule for including the candidate y in the prediction set, C (X n+1 ), or not. A good few-shot model should provide scores that clearly separate correct labels from incorrect labels-much like a maximum-margin model. Meanwhile, a good quantile predictor-which is intrinsically linked to the specific fewshot model used-should quantify what few-shot scores correspond to relatively "high" or relatively "low" values for that task (i.e., as the name suggests, they infer the target quantile of the expected distribution of few-shot scores).
Both of these models must be able to operate effectively given only a few examples from the target task, hence how they are meta-learned over auxiliary tasks becomes crucial.
Figure 1. A demonstration of our conformalized few-shot learning procedure. Given a base model (e.g., a prototypical network for classification tasks (Snell et al., 2017)) and a few demonstrations of a new task, our method produces a prediction set that carries desirable guarantees that it contains the correct answer with high probability. Like other meta-learning algorithms, our approach leverages information gained from t other, similar tasks, here to make more precise and confident predictions on the new task, T_{t+1}.

Consider the example of image classification for novel cate-
gories (see Figure 1 for an illustration). The goal is to predict the class of a new test image out of several never-beforeseen categories-while only given a handful of training examples per category. In terms of auxiliary tasks, we are given access to similarly-framed image classification tasks (e.g., cat classes instead of dog classes as in Figure 1). In this case, we can compute relevance by using a prototypical network (Snell et al., 2017) to measure the Euclidean distance between the test image's representation and the average representation of the considered candidate class's support images (i.e., prototype). Our quantile predictor then computes a "distance cut-off" that represents the largest distance between a label prototype and the test example that just covers the desired percentage of correct labels. Informally, on the auxiliary tasks, the prototypical network will learn efficient features, while the quantile predictor will learn what typically constitutes expected prototypical distances for correct labels when using the trained network.
We demonstrate that these two meta-learned components combine to make an efficient and simple-yet-effective approach to few-shot conformal prediction, all while retaining desirable theoretical performance guarantees. We empirically validate our approach on image classification, relation classification for textual entities, and chemical property prediction for drug discovery. Our code is publicly available. 1
In summary, our main contributions are as follows:
• A novel theoretical extension of conformal prediction to include few-shot prediction with auxiliary tasks
• A principled meta-learning framework for constructing confident set-valued classifiers for new target tasks
• A demonstration of the practical utility of our framework across a range of classification and regression tasks.

1 https://github.com/ajfisch/few-shot-cp
Related Work
Uncertainty estimation. In recent years, there has been a growing research interest in estimating uncertainty in model predictions. A large amount of work has been dedicated towards calibrating the model posterior, p θ (ŷ n+1 |x n+1 ), such that the true accuracy, y n+1 =ŷ n+1 , is indeed equal to the estimated probability (Niculescu-Mizil & Caruana, 2005;Lakshminarayanan et al., 2017;Lee et al., 2018). In theory, these estimates could be used to create confident prediction sets C (X n+1 ). Unlike CP, however, these methods are not guaranteed to be accurate, and often suffer from miscalibration in practice-and this is especially true for modern neural networks (Guo et al., 2017;Ashukha et al., 2020;Hirschfeld et al., 2020). In a similar vein, Bayesian formalisms underlie several popular approaches to quantifying predictive uncertainty via computing the posterior distribution over model parameters (Neal, 1996;Graves, 2011;Hernández-Lobato & Adams, 2015;Gal & Ghahramani, 2016). The quality of these methods, however, largely hinges on both (1) the degree of approximation required in computing the posterior, and (2) the suitability, or "correctness", of the presumed prior distribution.
Conformal prediction. As introduced in §1, conformal prediction (Vovk et al., 2005) provides a model-agnostic and finite-sample, distribution-free method for obtaining prediction sets with marginal coverage guarantees. Most pertinent to our work, Linusson et al. (2014) carefully analyze the effects of calibration set size on CP performance. For precise prediction sets, they recommend using at least a few hundred examples for calibration-much larger than the few-shot settings considered here. When the amount of available data is severely restricted, the predicted sets typically become unusably large. Johansson et al. (2015) and Carlsson et al. (2015) introduce similarly motivated approximations to CP with small calibration sets via interpolating calibration instances or using modified p-value definitions, respectively. Both methods are heuristics, however, and fail to provide finite-sample guarantees. Our work also complements several recent directions that explore conformal prediction in the context of various validity conditions, such as conditional, risk-controlling, admissible, or equalized coverage (Chernozhukov et al., 2019;Cauchois et al., 2020;Kivaranovic et al., 2020;Romano et al., 2019;Bates et al., 2020;Fisch et al., 2021, inter alia).
Few-shot learning. Despite the many successes of machine learning models, learning from limited data is still a significant challenge (Bottou & Bousquet, 2008;Lake et al., 2015;Wang et al., 2020). Our work builds upon the extensive few-shot learning literature by introducing a principled way of obtaining confidence intervals via metalearning. Meta-learning has become a popular approach to transferring knowledge gained from auxiliary tasks-e.g., via featurizations or statistics (Edwards & Storkey, 2017)to a target task that is otherwise resource-limited (Vinyals et al., 2016;Finn et al., 2017;Snell et al., 2017;Bertinetto et al., 2019;Bao et al., 2020). We leverage the developments in this area for our models (see Appendix B.1).
Background
We begin with a review of conformal prediction (see Shafer & Vovk, 2008). Here, and in the rest of the paper, uppercase letters (X) denote random variables; lower-case letters (x) denote scalars, and script letters (X ) denote sets, unless otherwise specified. A list of notation definitions is given in Table A.1. All proofs are deferred to Appendix A.
Nonconformity measures
Given a new example x, for every candidate label y ∈ Y, conformal prediction applies a simple test to either accept or reject the null hypothesis that the pairing (x, y) is correct. The test statistic for this hypothesis test is a nonconformity measure, S((x, y), D), where D is a dataset of exchangeable, correctly labeled examples. Informally, a lower value of S reflects that point (x, y) "conforms" to D, whereas a higher value of S reflects that (x, y) is atypical relative to D. A practical choice for S is model-based likelihood, e.g., − log p_θ(y|x), where θ is a model fit to D using some learning algorithm A (such as gradient descent). It is also important that S preserves exchangeability of its inputs. Let Z_j := (X_j, Y_j), j = 1, . . . , n be the training data. Then, for test point x ∈ X and candidate label y ∈ Y, we calculate the nonconformity scores for (x, y) as

V^(x,y)_j := S(Z_j, Z_{1:n} ∪ {(x, y)}), j = 1, . . . , n,
V^(x,y)_{n+1} := S((x, y), Z_{1:n} ∪ {(x, y)}). (2)
Note that this formulation, referred to as full conformal prediction, requires running the learning algorithm A that underlies S potentially many times for every new test point (i.e., |Y| times). "Split" conformal prediction (Papadopoulos, 2008)-which uses a held-out training set to learn S, and therefore also preserves exchangeability-is a more computationally attractive alternative, but comes at the expense of predictive efficiency when data is limited. 2
Conformal prediction
To construct the final prediction for the new test point x, the classifier tests the nonconformity score for each label y, V (x,y) n+1 , against a desired significance level , and includes all y for which the null hypothesis-that the candidate pair (x, y) is conformal-is not rejected. This is achieved by comparing the nonconformity score of the test candidate to the scores computed over the first n labeled examples. This comparison leverages the quantile function, where for a random variable V sampled from distribution F we define
Quantile(β; F ) := inf{v : F (v) ≥ β}.(3)
In our case, F is the distribution over the n + 1 nonconformity scores, denoted V 1:n+1 . However, as we do not know V (x,y) n+1 for the true y * , we use an "inflated" quantile: Lemma 3.1 (Inflated quantile). Assume that V j , j = 1, . . . , n+1 are exchangeable random variables. Then for any β ∈ (0, 1), P (V n+1 ≤ Quantile(β, V 1:n ∪ {∞})) ≥ β.
Conformal prediction then guarantees marginal coverage by including all labels y for which V (x,y) n+1 is below the inflated quantile of the n training points, as summarized:
Theorem 3.2 (CP, Vovk et al. (2005)). Assume that examples (X j , Y j ), j = 1, . . . , n + 1 are exchangeable. For any nonconformity measure S and ∈ (0, 1), define the conformal set (based on the first n examples) at x ∈ X as
C_ε(x) := { y ∈ Y : V^(x,y)_{n+1} ≤ Quantile(1 − ε; V^(x,y)_{1:n} ∪ {∞}) }.
Then C (X n+1 ) satisfies Eq. (1).
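To make the construction concrete, the following minimal NumPy sketch builds a conformal set under the split CP simplification of §3.1 (fixed, label-independent calibration scores); the nonconformity function passed in is a hypothetical stand-in for S, and the code is an illustration rather than the paper's released implementation. Full CP would instead recompute the calibration scores for every candidate pairing (x, y).

import numpy as np

def inflated_quantile(scores, beta):
    # Quantile(beta; scores ∪ {∞}) from Lemma 3.1: with n calibration scores,
    # take the ceil(beta * (n + 1))-th smallest score, or ∞ if that rank exceeds n.
    n = len(scores)
    rank = int(np.ceil(beta * (n + 1)))
    return np.inf if rank > n else np.sort(scores)[rank - 1]

def conformal_set(x, label_space, calibration_scores, nonconformity, epsilon):
    # Keep every candidate label whose nonconformity score falls below the
    # inflated (1 - epsilon)-quantile of the calibration scores.
    threshold = inflated_quantile(np.asarray(calibration_scores), 1.0 - epsilon)
    return [y for y in label_space if nonconformity(x, y) <= threshold]

With, e.g., 100 calibration scores and epsilon = 0.1, the threshold is the 91st smallest calibration score, matching the ⌈β(n + 1)⌉ rank discussed in the proof of Lemma 3.1.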
Though Theorem 3.2 provides guarantees for any training set size n, in practice n must be fairly large (e.g., 1000) to achieve reasonable, stable performance-in the sense that C will not be too large on average (Lei et al., 2018;Bates et al., 2020). This is a key hurdle for few-shot conformal prediction, where n = k is assumed to be small (e.g., 16).
Few-shot Meta Conformal Prediction
We now propose a general meta-learning paradigm for training efficient conformal predictors, while relying only on a very limited number of in-task examples.
At a high level, like other meta-learning algorithms, our approach leverages information gained from t other, similar tasks in order to perform better on task t + 1. In our setting we achieve this by learning a more statistically powerful nonconformity measure and quantile estimator than would otherwise be possible using only the limited data available for the target task. Our method uses the following recipe:
1. We meta-learn (and calibrate) a nonconformity measure and quantile predictor over a set of auxiliary tasks;
2. We adapt our meta nonconformity measure and quantile predictor using the examples we have for our target task;
3. We compute a conformal prediction set for a new input x ∈ X by including all labels y ∈ Y whose meta-learned nonconformity score is below the predicted 1− quantile.
Pseudo-code for our meta CP procedure is given in Algorithm 1. This skeleton focuses on classification; regression follows similarly. Our framework is model agnostic, in that it allows for practically any meta-learning implementation for both nonconformity and quantile prediction models.
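As a rough Python sketch of the prediction step only (function names and signatures here are placeholders for the components described below, not the interface of the released code):

def meta_conformal_predict(x, support, label_space, score_fn, quantile_fn,
                           correction, epsilon):
    # support:     the k labeled examples z_1:k for the target task T_{t+1}.
    # score_fn:    score_fn(x, y, support) -> meta nonconformity score of (x, y).
    # quantile_fn: quantile_fn(support, 1 - epsilon) -> predicted quantile Q_{t+1}.
    # correction:  Lambda(1 - epsilon; I_cal), computed once on the calibration tasks.
    threshold = quantile_fn(support, 1.0 - epsilon) + correction
    return [y for y in label_space if score_fn(x, y, support) <= threshold]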
In the following sections, we break down our approach in detail. In §4.1 we precisely formulate our few-shot learning setup with auxiliary tasks. In §4.2 and §4.3 we describe our meta-learning and meta-calibration setups, respectively. Finally, in §4.4 we discuss further theoretical extensions. For a complete technical description of our modeling choices and training strategy for our experiments, see Appendix B.
Task formulation
In this work, we assume access to t auxiliary tasks, T i , i = 1, . . . , t, that we wish to leverage to produce tighter uncertainty sets for predictions on a new task, T t+1 . Furthermore, we assume that these t + 1 tasks are exchangeable with respect to some task distribution, P T . Here, we treat P T as a distribution over random distributions, where each task T i ∈ T defines a task-specific distribution,
P XY ∼ P T , over examples (X, Y ) ∈ X × Y.
The randomness is in both the task's relation between X and Y, and the task's data.
For each of the t auxiliary tasks, we do not make any assumptions on the amount of data we have (though, in general, we expect them to be relatively unrestricted). On the new task T t+1 , however, we only assume a total of k exchangeable training examples. Our goal is then to develop a task-agnostic uncertainty estimation strategy that generalizes well to new examples from the task's unseen test
Algorithm 1 Meta conformal prediction with auxiliary tasks.
Definitions: T_1:t+1 are exchangeable tasks. I_train ∪ I_cal are the t tasks used for meta-training and meta-calibration. z_1:k ∈ (X × Y)^k are the k support examples for target task T_t+1. x ∈ X is the target task input. Y is the label space. ε is the significance.

1: function PREDICT(x, z_1:k, T_1:t, ε)
2:   # Learn S and P on meta-training tasks (§4.2).
3:   # S and P are meta nonconformity/quantile models.
4:   S, P_1−ε ← TRAIN(T_i, i ∈ I_train)
5:   # Predict the 1 − ε quantile.
6:   Q_t+1 ← P_1−ε(z_1:k; φ_meta)
7:   # Initialize empty output set.
8:   M_ε ← {}
9:   # (Note that for regression tasks, where |Y| = ∞, for
10:  # certain S the following simplifies to a closed-form
11:  # interval, making it tractable; see §3.1, footnote 2.)
12:  for y ∈ Y do
13:    # Compute the nonconformity score for label y.
14:    V^(x,y)_t+1,k+1 ← S((x, y), z_1:k; θ_meta)
15:    # Compare to the calibrated quantile (§4.3).
16:    if V^(x,y)_t+1,k+1 ≤ Q_t+1 + Λ(1 − ε, I_cal) then
17:      M_ε ← M_ε ∪ {y}
18:  return M_ε
set, (X^test_{t+1}, Y^test_{t+1}).³ Specifically, we desire finite-sample marginal task coverage, as follows:
Definition 4.1 (Task validity). Let M_ε be a set-valued predictor. M_ε is considered to be valid across tasks if for any task distribution P_T and ε ∈ (0, 1), we have

P(Y^test_{t+1} ∈ M_ε(X^test_{t+1})) ≥ 1 − ε. (4)
Note that we require the marginal coverage guarantee above to hold on average across tasks and their examples.
Meta-learning conformal prediction models
Given our collection of auxiliary tasks, we would like to meta-learn both (1) an effective nonconformity measure that is able to adapt quickly to a new task using only k examples; and (2) a quantile predictor that is able to robustly identify the 1 − quantile of that same meta nonconformity measure, while only using the same k examples.
Prior to running our meta-learning algorithm of choice, we split our set of t auxiliary tasks into disjoint sets of training tasks, I_train, and calibration tasks, I_cal, where |I_train| + |I_cal| = t. See Table 1 for an overview of the different splits. We use I_train to learn our meta nonconformity measures and quantile predictors, which we discuss now. Additional technical details are contained in Appendix B.1.

Table 1. An overview of the data assumptions for a single test task "episode". We use |I_train| + |I_cal| = t total auxiliary tasks to create more precise uncertainty estimates for the (t + 1)th test task. This is repeated for each test task (§5). m_i ≫ k is the number of extra examples per calibration task that are used to compute an empirical CDF when finding Λ(β; I_cal); it may vary per task.

Task split                     # Tasks      # Examples / Task
Auxiliary: Meta-training       |I_train|    k
Auxiliary: Meta-calibration    |I_cal|      k + m_i
Test                           1            k
Meta nonconformity measure. Let S ((x, y), D; θ meta ) be a meta nonconformity measure, where θ meta are meta parameters learned over the auxiliary tasks in I train . Since θ meta is fixed after the meta training period, S preserves exchangeability over new collections of exchangeable tasks (i.e., I cal ) and task examples. Let Z i,j := (X i,j , Y i,j ), j = 1, . . . , k be the few-shot training data for a task T i (here i is the task index, while j is the example index). Given a new test point x ∈ X and candidate pairing (x, y), the meta nonconformity scores for (x, y) are
V (x,y) i,j := S(Z i,j , Z i,1:k ∪ {(x, y)}; θ meta ), V (x,y) i,k+1 := S((x, y), Z i,1:k ∪ {(x, y)}; θ meta ).(5)
As an example, Figure 2 demonstrates how we compute V (x,y) i,k+1 using the distances from a meta-learned prototypical network following the setting in Figure 1.
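In code, such a prototype-distance nonconformity score might look like the PyTorch sketch below; the encoder, its output dimensionality, and the choice of squared Euclidean distance are assumptions for illustration rather than the exact released model.

import torch

def prototype_scores(encoder, support_x, support_y, query_x, num_classes):
    # support_x: [k, ...] tensor of the task's k labeled inputs.
    # support_y: [k] tensor of class indices in {0, ..., num_classes - 1}.
    # query_x:   a single test input (same trailing shape as one support example).
    # Returns a [num_classes] tensor of nonconformity scores, one per candidate label.
    with torch.no_grad():
        support_emb = encoder(support_x)                 # [k, d]
        query_emb = encoder(query_x.unsqueeze(0))        # [1, d]
        prototypes = torch.stack([
            support_emb[support_y == c].mean(dim=0)      # class c prototype
            for c in range(num_classes)
        ])                                               # [num_classes, d]
        # Squared Euclidean distance to each class prototype:
        # a larger distance means (x, y) is less conformal with candidate label y.
        return ((query_emb - prototypes) ** 2).sum(dim=-1)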
Computing all k + 1 scores |Y| times is typically tractable due to the few number of examples (e.g., k ≈ 16) and the underlying properties of the meta-learning algorithm driving S. For example, prototypical networks only require a forward pass. A naive approach to few-shot conformal prediction is to exploit this efficiency, and simply run full CP using all k +1 data points. Nevertheless, though a strong baseline, using only k + 1 points to compute an empirical quantile is still suboptimal. As we discuss next, instead we choose to regress the desired quantile directly from Z i,1:k , and disregard the empirical quantile completely.
Since we predict the quantile instead of relying on the empirical quantile, we do not have to retain exchangeability for Z i,1:k . As a result, we switch to "split" CP ( §3.1), and do not include (x, y) when calculating V (x,y) i,j , as this is faster.
Meta quantile predictor. Let P β (D; φ meta ) be a meta β-quantile predictor, where φ meta are the meta parameters learned over the auxiliary tasks in I train . P β is trained to predict the β-quantile of F -where F is the underlying task-specific distribution of nonconformity scores-given D, a dataset of Z = (X, Y ) pairs sampled from that task.
Figure 2. Using a meta-learned prototypical network (Snell et al., 2017) to compute meta nonconformity scores. If S is well-trained, the distance between the test point and the correct class prototype should be small, and the distance to incorrect prototypes large, even when the number of in-task training examples is limited.

As some intuition for this approach, recall that in calculating Quantile(β; F) given exchangeable samples v_{1:n} ∼ F, we implicitly need to estimate P(V_{n+1} ≤ v | v_{1:n}).
For an appropriate parametrization ψ of F , de Finetti's theorem for exchangeable sequences allows us to write
P(V_{n+1} ≤ v | v_{1:n}) ∝ ∫_{−∞}^{v} ∫_{Ψ} p(v' | ψ) ∏_{i=1}^{n} p(v_i | ψ) p(ψ) dψ dv'.
In this sense, meta-learning over auxiliary task distributions may help us learn a better prior over latent parametrizations ψ-which in turn may help us better model the β-quantile than we could have, given only k samples and nothing else.
We develop a simple approach to modeling and learning P β . Given the training examples Z i,1:k , we use a deep sets model (Zaheer et al., 2017) parameterized by φ meta to predict the β-quantile of V test i,k+1 , the random variable representing the nonconformity score of the test point,
Z_{i,k+1} := (X_{i,k+1}, Y_{i,k+1}). We optimize φ_meta as

min_φ Σ_{i ∈ I_train} ( P_β(Z_{i,1:k}; φ) − Quantile(β; V^test_{i,k+1}) )², (6)

where we estimate the target, Quantile(β; V^test_{i,k+1}), using m_i ≫ k extra examples sampled from the training task.
In practice, we found that choosing to first transform Z i,1:k to leave-one-out meta nonconformity scores,
L_{i,j} := S(Z_{i,j}, Z_{i,1:k} \ Z_{i,j}; θ_meta), (7)
and providing P β with these scalar leave-one-out scores as inputs, performs reasonably well and is lightweight to implement. Inference using P β is illustrated in Figure 3.
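A minimal sketch of such a predictor is given below, using PyTorch and the deep sets pattern over the leave-one-out scores of Eq. (7): each scalar score is embedded, the embeddings are mean-pooled, and a small head regresses the quantile. The layer sizes, the choice to pass β as an input, and the exact architecture are illustrative assumptions, not the released model (which could equally be trained for a single fixed β).

import torch
import torch.nn as nn

class DeepSetsQuantilePredictor(nn.Module):
    def __init__(self, hidden_dim=64):
        super().__init__()
        # phi embeds each scalar leave-one-out score; rho maps the pooled set
        # representation (concatenated with the target level beta) to a quantile.
        self.phi = nn.Sequential(nn.Linear(1, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))
        self.rho = nn.Sequential(nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, loo_scores, beta):
        # loo_scores: [batch, k] leave-one-out scores L_{i,1:k} from Eq. (7).
        # beta:       [batch, 1] target quantile level (e.g., 1 - epsilon).
        pooled = self.phi(loo_scores.unsqueeze(-1)).mean(dim=1)   # [batch, hidden_dim]
        return self.rho(torch.cat([pooled, beta], dim=-1)).squeeze(-1)

# Training follows the regression objective of Eq. (6): an L2 loss against an
# empirical quantile target computed from extra examples of each training task.
def quantile_regression_loss(predicted_q, target_q):
    return ((predicted_q - target_q) ** 2).mean()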
Figure 3. An illustration of using our meta-learned quantile predictor P_β to infer the β-quantile of the distribution of V^test_{i,k+1}, given the few examples from T_i's training set. The numbers above each image reflect the leave-one-out scores we use as inputs, see Eq. (7).

Training strategy. The meta nonconformity measure S and meta quantile predictor P_β are tightly coupled, as given a fixed S, P_β learns to model its behavior on new data. A straightforward, but data inefficient, approach to training S and P_β is to split the collection of auxiliary tasks in I_train in two, i.e., I_train = I^(1)_train ∪ I^(2)_train, training S on I^(1)_train and then training P_β on S's predictions over I^(2)_train. The downside of this strategy is that both S and P_β may be sub-optimal, as neither can take advantage of all of I_train.
We employ a slightly more involved, but more data efficient approach, where we split I train into k f folds, i.e.,
I_train = ∪_{f=1}^{k_f} I^(f)_train.
We then train k f separate meta nonconformity measures S f , where we leave out fold f from the training data. Using S f , we compute nonconformity scores on fold f 's data, aggregate these nonconformity scores across all k f folds, and train the meta quantile predictor on this union. Finally, we train another nonconformity measure on all of I train , which we use as our ultimate S. This way we are able to use all of I train for training both S and P β . This process is illustrated in Figure B.1. Note that it is not problematic for P β to be trained on the collection of S instances trained on k f − 1 folds, but then later used to model one S trained on all the data, since it will be calibrated (next, in §4.3).
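In rough pseudo-Python, the fold-based strategy could be organized as follows; the helper functions are placeholders for the actual training routines (see Appendix B.1), and this is only a sketch of the data flow.

def train_meta_models(train_tasks, num_folds, train_scorer,
                      train_quantile_predictor, make_quantile_examples):
    # Partition the meta-training tasks into num_folds disjoint folds.
    folds = [train_tasks[f::num_folds] for f in range(num_folds)]
    quantile_data = []
    for f, held_out_fold in enumerate(folds):
        # Train a nonconformity scorer S_f on every fold except f ...
        other_tasks = [t for g, fold in enumerate(folds) if g != f for t in fold]
        scorer_f = train_scorer(other_tasks)
        # ... then use S_f to generate (leave-one-out scores, target quantile)
        # training pairs on the held-out fold f.
        quantile_data.extend(make_quantile_examples(scorer_f, held_out_fold))
    # The quantile predictor is fit on data derived from all of I_train.
    quantile_predictor = train_quantile_predictor(quantile_data)
    # The final nonconformity scorer is retrained on all of I_train.
    final_scorer = train_scorer(train_tasks)
    return final_scorer, quantile_predictor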
Calibrating meta-learned conformal prediction
Though P_β may obtain low empirical error after training, it does not have any inherent rigorous guarantees out-of-the-box. Given our held-out set of auxiliary tasks I_cal, however, we can quantify the uncertainty in P_β (i.e., how far off it may be from the true quantile), and calibrate it accordingly. The following lemma formalizes our meta calibration procedure:

Lemma 4.2 (Meta calibration). Assume Q_i, i ∈ I_cal are the (exchangeable) meta β-quantile predictions produced by P_β for tasks T_i, i ∈ I_cal. Let V^test_{i,k+1} be the meta nonconformity score for a new sample from task T_i, where F_i is its distribution function. Define the correction Λ(β; I_cal) as

Λ(β; I_cal) := inf{ λ : 1/(|I_cal| + 1) Σ_{i ∈ I_cal} F_i(Q_i + λ) ≥ β }. (8)

We then have that P(V^test_{t+1,k+1} ≤ Q_{t+1} + Λ(β; I_cal)) ≥ β.
It is important to pause to clarify at this point that calculating Λ(β; I cal ) requires knowledge of the true meta nonconformity distribution functions, F i , for all calibration tasks. For simplicity, we write Lemma 4.2 and the following Theorem 4.3 as if these distribution functions are indeed known (again, only for calibration tasks). In practice, however, we typically only have access to an empirical distribution function over m i task samples. In this case, Lemma 4.2 holds in expectation over task samples Z i,k:k+mi , as for an empirical distribution function of m points, F m , we have E[ F m ] = F . Furthermore, for large enough m i , concentration results suggest that we can approximate F i with little error given a particular sample (this is the focus of §4.4).
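Concretely, with an empirical CDF (over m_i held-out samples) standing in for each F_i, the correction of Eq. (8) can be found by scanning candidate offsets; the NumPy sketch below is illustrative only and assumes the per-task score arrays and predicted quantiles have already been computed.

import numpy as np

def meta_correction(cal_scores, cal_quantiles, beta):
    # cal_scores[i]:    NumPy array of m_i held-out nonconformity scores for
    #                   calibration task i (a plug-in estimate of F_i).
    # cal_quantiles[i]: the quantile Q_i predicted by P_beta for task i.
    # Returns the smallest lambda with (1 / (l + 1)) * sum_i F_i(Q_i + lambda) >= beta;
    # beta must be at most l / (l + 1) for a finite answer.
    l = len(cal_scores)
    # Each empirical CDF only changes when Q_i + lambda crosses an observed score,
    # so it suffices to check lambda = score - Q_i for every observed score.
    candidates = np.sort(np.concatenate(
        [scores - q for scores, q in zip(cal_scores, cal_quantiles)]))
    for lam in candidates:
        total = sum(np.mean(scores <= q + lam)
                    for scores, q in zip(cal_scores, cal_quantiles))
        if total / (l + 1) >= beta:
            return float(lam)
    return float("inf")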
That said, in a nutshell, Lemma 4.2 allows us to probabilistically adjust for the error in P β , such that it is guaranteed to produce valid β-quantiles on average. We can then perform conformal inference on the target task by comparing each meta nonconformity score for a point x ∈ X and candidate label y ∈ Y to the calibrated meta quantile, and keep all candidates with nonconformity scores that fall below it. Theorem 4.3 (Meta CP). Assume that tasks T i , i ∈ I cal and T t+1 are exchangeable, and that their nonconformity distribution functions F i are known. For any meta quantile predictor P 1− , meta nonconformity measure S, and ∈ (0, 1), define the meta conformal set (based on the tasks in I cal and the k training examples of task T t+1 ) at x ∈ X as
M_ε(x) := { y ∈ Y : V^(x,y)_{t+1,k+1} ≤ Q_{t+1} + Λ(1 − ε; I_cal) },
where Q t+1 is the result of running P 1− on the k training examples of task T t+1 . Then M (X test t+1 ) satisfies Eq. (4).
It should be acknowledged that Theorem 4.3 guarantees coverage marginally over tasks, as specified in Eq. (4). Given appropriate assumptions on the quantile predictor P 1− , we can achieve task-conditional coverage asymptotically: Definition 4.4 (Consistency). We say P 1− is an asymptotically consistent estimator of the 1 − quantile if
P 1− (Z i,1:k ; φ meta ) − Quantile(1 − , F i ) = o P (1)
as k → ∞, where F i is the CDF of nonconformity scores for any task t i ∈ T . In other words, P 1− converges in probability to the true quantile given enough in-task data. Proposition 4.5 (Asymptotic meta CP). If P 1− is asymptotically consistent, then as k → ∞ the meta conformal set M achieves asymptotic conditional coverage, where
1{ P(Y^test_{t+1} ∈ M_ε(X^test_{t+1}) | T_{t+1} = t_{t+1}) ≥ 1 − ε } = 1 − o_P(1).
This result simply claims that as the number of in-task samples k increases, our meta CP will converge towards valid coverage for all tasks, not just on average. By itself, this is not particularly inspiring: after all, standard CP also becomes viable as k → ∞. Rather, the key takeaway is that this desirable behavior is nicely preserved in our meta setup as well. In Figure 5 we demonstrate that our P 1− indeed progresses towards task-conditional coverage as k grows.
Meta-learned approximate conformal prediction
Recall that a key assumption in the theoretical results established in the previous section is that the distribution functions of our calibrations tasks, F i where i ∈ I cal , are known. In this section we turn to analyze the (much more common) setting where these F i must instead be estimated empirically.
In this case, Theorem 4.3 holds in expectation over the samples chosen for the calibration tasks. Furthermore, standard concentration results suggest that we can approximate F i with little error, given enough empirical samples (which, in general, we assume we have for our calibration tasks). We now further adapt Theorem 4.3 to be conditionally valid with respect to the labeled examples that are used when replacing each task F i with its plug-in estimate, F mi .
First, we formalize a PAC-type 2-parameter validity definition (similar to training conditional CP in Vovk (2012)):
Definition 4.6 ((δ, ε) task validity). M_ε is (δ, ε) task valid if for any task distribution P_T, ε ∈ (0, 1), and δ ∈ (0, 1),

P( P(Y^test_{t+1} ∈ M_ε(X^test_{t+1})) ≥ 1 − ε ) ≥ 1 − δ. (9)
The outer probability is taken with respect to the data samples used for calibration. The basic idea here is to include a secondary confidence level δ that allows us to control how robust we are to sampling variance in our estimation of calibration task quantiles when computing Λ(β; I_cal), our conformal prediction correction factor. We define a sample-conditional approach that is (δ, ε) task valid, as follows:

Proposition 4.7 (Sample-conditional meta CP). Assume that all |I_cal| = l calibration tasks are i.i.d., where for each task we have a fixed dataset that is also i.i.d. That is, for task T_i, we have drawn m_i i.i.d. training examples, (x_{i,j}, y_{i,j}), j = 1, . . . , m_i. For any δ ∈ (0, 1), ε ∈ (0, 1), and α ∈ (0, 1 − (1 − δ)^{1/l}), define the adjusted level ε̃ as

ε̃ ≤ ε − √( (−2 / l²) Σ_{i ∈ I_cal} γ_i² · log(1 − (1 − δ) / (1 − α)^l) ), (10)

where γ_i = √( log(2/α) / (2 m_i) ). Then M_ε̃(X^test_{t+1}) satisfies Eq. (9).

Remark 4.8. We are free to choose α so as to optimize ε̃.
Increasing the number of auxiliary tasks or samples per task makes ε̃ closer to ε. In §6 we show that we can achieve tight prediction sets in practice, even with small tolerances.
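Under the stated i.i.d. assumptions, the adjusted level can be evaluated directly from δ, α, the number of calibration tasks l, and the per-task sample sizes. The small helper below simply plugs into the bound of Eq. (10) with the DKW-style γ_i given above; it is an illustrative sketch rather than the paper's exact implementation.

import math

def adjusted_epsilon(epsilon, delta, alpha, sample_sizes):
    # sample_sizes: [m_1, ..., m_l], one entry per calibration task.
    l = len(sample_sizes)
    assert 0.0 < alpha < 1.0 - (1.0 - delta) ** (1.0 / l), "alpha out of range"
    # gamma_i: DKW-style bound on the ECDF estimation error for task i.
    gammas = [math.sqrt(math.log(2.0 / alpha) / (2.0 * m)) for m in sample_sizes]
    inner = math.log(1.0 - (1.0 - delta) / (1.0 - alpha) ** l)   # negative
    slack = math.sqrt((-2.0 / l ** 2) * sum(g * g for g in gammas) * inner)
    return epsilon - slack

For example, under this form of the bound, l = 40 calibration tasks with m_i = 500 samples each, δ = 0.1, and α = 0.001 give a slack of roughly 0.03, consistent with the observation that more auxiliary data brings ε̃ closer to ε.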
Experimental Setup
Evaluation tasks
Image classification (CV). As introduced in §1, the goal of few-shot image classification is to train a computer vision model that generalizes to entirely new image classes at test time. We use the miniImageNet dataset (Vinyals et al., 2016), a downsampled version of a subset of classes from ImageNet (Deng et al., 2009). miniImageNet contains 100 classes that are divided into training, validation, and test class splits. Within each class partition, we construct Kshot N -way tasks, where K examples per class are used to discriminate between a sample of N distinct, novel classes. We use K = 16 and N = 10 in our experiments, for a total of k = 160 training examples. In order to avoid label imbalanced accuracy, however, we choose to focus on Mondrian CP (Vovk et al., 2005), where validity is guaranteed across class type. Our meta nonconformity measure consists of a prototypical network on top of a CNN encoder.
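For reference, the generic (non-meta) Mondrian construction calibrates a separate threshold per label, so that coverage holds within each class rather than only on average. The sketch below illustrates standard Mondrian split CP, not our exact meta variant:

import numpy as np
from collections import defaultdict

def mondrian_thresholds(cal_scores, cal_labels, epsilon):
    # Compute a separate inflated (1 - epsilon)-quantile per class, so that
    # the validity guarantee is calibrated within each label type.
    by_class = defaultdict(list)
    for v, y in zip(cal_scores, cal_labels):
        by_class[y].append(v)
    thresholds = {}
    for y, scores in by_class.items():
        n = len(scores)
        rank = int(np.ceil((1 - epsilon) * (n + 1)))
        thresholds[y] = np.inf if rank > n else np.sort(scores)[rank - 1]
    return thresholds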
Relation classification (NLP). Relation classification focuses on identifying the relationship between two entities mentioned in a given natural language sentence. In few-shot relation classification, the goal is to train an NLP model that generalizes to entirely new entity relationship types at test time. We use the FewRel 1.0 dataset (Han et al., 2018), which consists of 100 relations derived from 70k Wikipedia sentences. Like miniImageNet, the relation types are divided into training, validation, and test splits. 4 Within each partition, we sample K-shot N -way classification episodes (again with K = 16 and N = 10 and Mondrian CP, as in our CV task). Our meta nonconformity measure consists of a prototypical network on top of a CNN encoder with GloVe embeddings (Pennington et al., 2014).
Chemical property prediction (Chem). In-silico screening of chemical compounds is an important task for drug discovery. Given a new molecule, the goal is to predict its activity for a target chemical property. We use the ChEMBL dataset (Mayr et al., 2018), and regress the pChEMBL value (a normalized log-activity metric) for individual molecule-property pairs. We select a subset of 296 assays from ChEMBL, and divide them into training (208), validation (44), and test (44) assays.
Evaluation metrics
Figure 4. Few-shot CP results as a function of ε. The size of the prediction set of our meta CP approach is significantly better (i.e., smaller) than that of our full CP baseline. Furthermore, our meta CP approach's average accuracy level is close to the diagonal, allowing it to remain valid in the sense of Eq. (4), but also less conservative when making predictions. Note that we care more about the right-hand-side behavior of the above graphs (i.e., larger 1 − ε), as they correspond to higher coverage guarantees.

For each experiment, we use proper training, validation, and test meta-datasets of tasks. We use the meta-training tasks to learn all meta nonconformity measures S and meta quantile predictors P. We perform model selection for CP on the meta-validation tasks, and report final numbers on the meta-test tasks. For all methods, we report marginalized results over 5000 random trials, where in each trial we partition the data into l calibration tasks (T_1:l) and one target task (T_t+1). In all plots, shaded regions show +/- the standard deviation across trials. We use the following metrics:
Prediction accuracy. We measure accuracy as the rate at which the target label y ∈ Y is contained within the predicted label set. For classification problems, the prediction is a discrete set, whereas in regression the prediction is a continuous interval. To be valid, a conformal model should have an average accuracy rate ≥ 1 − .
Prediction size (|C_ε|). We measure the average size of the output (i.e., |C_ε|) as a proxy for how precise the model's predictions are. The goal is to make the prediction set as small as possible while still maintaining the desired accuracy.
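Both metrics reduce to simple averages over trials; the following small helper (illustrative only) computes them from a list of predicted sets and ground-truth labels.

import numpy as np

def coverage_and_size(prediction_sets, true_labels):
    # prediction_sets: one predicted label set per test point / trial.
    # true_labels:     the corresponding ground-truth labels.
    hits = [y in s for s, y in zip(prediction_sets, true_labels)]
    sizes = [len(s) for s in prediction_sets]
    # A valid conformal predictor should have mean(hits) >= 1 - epsilon.
    return float(np.mean(hits)), float(np.mean(sizes))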
Baselines
For all experiments, we compare our methods to full conformal prediction, in which we use meta-learned nonconformity scores, as defined in Eq. (5). Though still a straightforward application of standard conformal calibration, meta-learning S with auxiliary tasks already adds significant statistical power to the model over an approach that would attempt to learn S from scratch for each new task.
In addition to evaluating improvement over full CP, we compare our approach to other viable heuristics for making set valued predictions: Top-k and Naive. In Top-k we always take the k-highest ranked predictions. In Naive we select likely labels until the cumulative softmax probability exceeds 1 − . While seeming related to our CP approach, we emphasize that these are only heuristics, and do not give the same theoretical performance guarantees.
Experimental Results
In the following, we present our main conformal few-shot results. We evaluate both our sample-conditional and unconditional meta conformal prediction approaches.
Predictive efficiency. We start by testing how our meta CP approach affects the size of the prediction set. Smaller prediction set sizes correspond to more efficient conformal models. We plot prediction set size as a function of ε ∈ (0, 1) in Figure 4. Table 2 shows results for specific values of ε, and also shows results for our sample-conditional meta CP approach, where we fix 1 − δ at 0.9 for all trials (note that the other meta results in Figure 4 and Table 2 are unconditional). Across all tasks and values of ε, our meta CP performs the best in terms of efficiency. Moreover, the average size of the meta CP predictions increases smoothly as a function of ε, while full CP suffers from discrete jumps in performance. Finally, we see that our sample-conditional (δ = 0.1, ε) approach is only slightly more conservative than our unconditional meta CP method. This is especially true for domains with a higher number of auxiliary tasks and examples per auxiliary task (i.e., CV and NLP).

Table 2. Few-shot CP results at specific ε values. We report the empirical accuracy and raw prediction set size for our two meta CP methods, and compare to our baseline CP model (full CP with meta-learned S). For our sample-conditional meta CP approach, we fix δ = 0.1. Note that CP can produce empty sets if no labels are deemed conformal, hence the average classification size may fall below 1 for high ε.

Table 3. Non-conformal baseline heuristics (for classification tasks only). Top-k takes a target size (k), and yields statically sized outputs. Naive takes a target accuracy of 1 − ε, and yields dynamically sized outputs according to softmax probability mass.
Task validity. As per Theorem 4.3, we observe that our meta CP approach is valid, as the average accuracy always matches or exceeds the target performance level. Typically, meta CP is close to the target 1 − ε level for all ε, which indicates that it is not overly conservative at any point (which improves the predictive efficiency). On the other hand, our full CP baseline is only close to the target accuracy when 1 − ε is near a multiple of 1/(k + 1). This is visible from its "staircase"-like accuracy plot in Figure 4. We see that our sample-conditional approach is slightly conservative, as its accuracy typically exceeds 1 − ε. This is more pronounced for domains with smaller amounts of auxiliary data.
Conditional coverage. Figure 5 shows the accuracy of our meta quantile predictor P 1− as a function of k. As expected, as k grows, P 1− becomes more accurate. This lessens the need for large correction factors Λ(1 − , I cal ), and leads to task-conditional coverage, per Proposition 4.5.
Baseline comparisons. Table 3 gives the results for our non-conformal heuristics, Top-k and Naive. We see that both approaches under-perform our CP method in terms of efficiency. Comparing to Table 2, we see that we achieve similar accuracy to Top-k with smaller average sets (while also being able to set ε). Similarly, Naive is uncalibrated and gives conservative results: for a target ε we obtain tighter prediction sets with our meta CP approach.

Figure 5. We measure the error in our quantile predictor P_β (for β = 0.8) on the CV task as a function of k. As k increases, the predictor begins to converge on an accurate β-quantile.
Conclusion
The ability to provide precise performance guarantees and make confidence-aware predictions is a critical element for many machine learning applications in the real world. Conformal prediction can afford remarkable finite-sample theoretical guarantees, but will suffer in practice when data is limited. In this paper, we introduced a novel and theoretically grounded approach to meta-learning few-shot conformal predictors using exchangeable collections of auxiliary tasks. Our results show that our method consistently improves performance across multiple diverse domains, and allows us to obtain meaningful and confident conformal predictors when using only a few in-task examples.
A Proofs
A.1 Proof of Lemma 3.1
Proof. This is a well-known result; we prove it for completeness (see also Tibshirani et al. (2019) for an identical proof). Given support points v_1, . . . , v_n ∈ R for a discrete distribution F, let q = Quantile(β; F). Any points v_i > q do not affect this quantile, i.e., if we consider a new distribution F̃ where all points v_i > q are mapped to arbitrary values also larger than q, then Quantile(β; F) = Quantile(β; F̃). Accordingly, for the nonconformity scores V_i, we have that
V n+1 > Quantile(β; V 1:n ∪ {∞}) ⇐⇒ V n+1 > Quantile(β; V 1:(n+1) ).
Equivalently, we also have that
V n+1 ≤ Quantile(β; V 1:n ∪ {∞}) ⇐⇒ V n+1 ≤ Quantile(β; V 1:(n+1) ).
Given the discrete distribution over the n + 1 V_i, V_{n+1} ≤ Quantile(β; V_{1:(n+1)}) implies that V_{n+1} is among the ⌈β(n + 1)⌉ smallest of V_{1:(n+1)}. By exchangeability, this event occurs with probability at least ⌈β(n + 1)⌉/(n + 1) ≥ β.
A.2 Proof of Theorem 3.2
Proof. This is also a well-known result; we prove it here for completeness (and see Tibshirani et al. (2019) for an identical proof). For notational convenience, let
V i := V (Xn+1,Yn+1) i . Y n+1 is included in C (X n+1 ) iff V n+1 ≤ Quantile(1 − ; V 1:n ∪ {∞}).
As the nonconformity measure S preserves exchangeability by construction, if (X_i, Y_i) for i = 1, . . . , n + 1 are exchangeable, then so too are the nonconformity scores V_i, i = 1, . . . , n + 1. We can then apply Lemma 3.1 to complete the proof.
A.3 Proof of Lemma 4.2
Proof. Let the event {T_i = t_i} indicate that task i has a quantile prediction {Q_i = q_i} and distribution function {F_i = f_i} over meta nonconformity scores given S.
For notational convenience, assume tasks T i , i ∈ I cal and T t+1 are indexed contiguously as i = 1, . . . , n+1. Next, denote by E t the event that {T 1 , . . . , T n+1 } = {t 1 , . . . , t n+1 }, i.e., we observe an unordered set of task values. Exchangeability of tasks T i implies that
P(T_{n+1} = t_i | E_t) = 1/(n + 1),
and, accordingly, that the distribution of T n+1 | E t is uniform on the set {t 1 , . . . , t n+1 }.
Again for notational convenience, let
V i := V (X test i ,Y test i ) i,k+1
i.e., we use V i to denote the meta nonconformity score for task i's random test point.
For any scalar λ ∈ R, we can then write
P(V_{n+1} ≤ Q_{n+1} + λ | E_t)
  = Σ_{i=1}^{n+1} P(V_{n+1} ≤ Q_{n+1} + λ, T_{n+1} = t_i | E_t)
  = Σ_{i=1}^{n+1} P(V_{n+1} ≤ Q_{n+1} + λ | T_{n+1} = t_i) P(T_{n+1} = t_i | E_t)
  = 1/(n + 1) Σ_{i=1}^{n+1} P(V_{n+1} ≤ Q_{n+1} + λ | T_{n+1} = t_i).

Since the event {T_{n+1} = t_i} implies {Q_{n+1} = q_i, F_{n+1} = f_i}, we can reduce this to

P(V_{n+1} ≤ Q_{n+1} + λ | E_t) = 1/(n + 1) Σ_{i=1}^{n+1} f_i(q_i + λ).

Furthermore, on the event E_t, we have {T_1, . . . , T_{n+1}} = {t_1, . . . , t_{n+1}}, so (with slight abuse of notation)

P(V_{n+1} ≤ Q_{n+1} + λ | E_t) = 1/(n + 1) Σ_{i=1}^{n+1} F_i(Q_i + λ).

As F_i is a distribution function with range [0, 1], we can remove T_{n+1} from the summation to get a lower bound,

P(V_{n+1} ≤ Q_{n+1} + λ | E_t) ≥ 1/(n + 1) Σ_{i=1}^{n} F_i(Q_i + λ).

For a fixed β, substitute Λ(β; I_cal) for λ to derive

P(V_{n+1} ≤ Q_{n+1} + Λ(β; I_cal) | E_t) ≥ 1/(n + 1) Σ_{i=1}^{n} F_i(Q_i + Λ(β; I_cal)) ≥ β.

Because this is true for any E_t, we can marginalize to obtain

P(V_{n+1} ≤ Q_{n+1} + Λ(β; I_cal)) = ∫ P(V_{n+1} ≤ Q_{n+1} + Λ(β; I_cal) | E_t) dP(E_t) ≥ β ∫ dP(E_t) = β.
Table A.1. Definitions of selected common notations used in this paper.
Symbol | Meaning
k | The number of in-task examples used for few-shot learning.
ϵ | The stipulated performance tolerance.
δ | The stipulated secondary confidence tolerance for calibration conditional validity.
T | The space of potential tasks to be solved in a few-shot learning setting.
X × Y | The joint input (X's) and output (Y's) space.
I_train | The set of auxiliary tasks used for meta-learning nonconformity scores and quantile predictors.
I_cal | The set of auxiliary tasks used to calibrate the quantile predictor.
T_{t+1} | The target few-shot test task to be solved.
S | A meta-learned nonconformity measure.
P_{1−ϵ} | A meta-learned regressor of the 1−ϵ quantile of S's scores on T_{t+1}, given k in-task samples.
V^{(x,y)}_{i,j} | The meta nonconformity score for example j of task i, given the current candidate output (x, y).
F_i, F̂_{m_i} | The true vs. m_i-sample empirical distribution function over nonconformity scores of task i.
Q̂_i, Quantile(1−ϵ, F_i) | The predicted vs. true nonconformity score 1−ϵ quantiles for task i.
Λ(1−ϵ, I_cal) | The 1−ϵ meta quantile correction factor, computed using calibration tasks.
C_ϵ, M_ϵ | Output label sets for standard and meta conformal prediction, respectively, at level 1−ϵ.
A.4 Proof of Theorem 4.3
Proof. Again, for notational convenience, let $V_i := V_{i,k+1}^{(X_i^{\text{test}},\, Y_i^{\text{test}})}$. $Y_{t+1}^{\text{test}}$ is included in $\mathcal{M}_\epsilon(X_{t+1}^{\text{test}})$ iff $V_{t+1} \le \hat{Q}_{t+1} + \Lambda(1-\epsilon; \mathcal{I}_{\text{cal}})$. As S and P are trained on the disjoint proper training set $\mathcal{I}_{\text{train}}$, they preserve exchangeability and produce exchangeable $\hat{Q}_i$. We can then apply Lemma 4.2.
A.5 Proof of Proposition 4.5
Proof. Again, for notational convenience, let $V_i := V_{i,k+1}^{(X_i^{\text{test}},\, Y_i^{\text{test}})}$.
As stated in the claim, assume that as k → ∞,
$$\big|\mathcal{P}_{1-\epsilon}(Z_{i,1:k}; \phi_{\text{meta}}) - \text{Quantile}(1-\epsilon, F_i)\big| = o_P(1),$$
where F i is the distribution of V i . That is, the quantile converges in probability to the true quantile where ∀α, µ there exists K α,µ such that
$$P\Big(\big|\mathcal{P}_{1-\epsilon}(Z_{i,1:k}; \phi_{\text{meta}}) - \text{Quantile}(1-\epsilon, F_i)\big| \ge \mu\Big) \le \alpha,$$
∀k > K α,µ . This is a standard property of consistent estimators (e.g., see Lei et al. (2018) for similar assumptions).
As $\Lambda(1-\epsilon; \mathcal{I}_{\text{cal}}) \ge 0$, for any target task $t_{t+1} \in \mathcal{T}$, we have that the corrected quantile, $\tilde{Q}_{t+1} = \hat{Q}_{t+1} + \Lambda(1-\epsilon; \mathcal{I}_{\text{cal}})$, is always conservative for large enough $k$, i.e.,
$$P\big(\tilde{Q}_{t+1} \ge \text{Quantile}(1-\epsilon, F_{t+1}) \mid T_{t+1} = t_{t+1}\big) = 1 - o_P(1). \quad (11)$$
In other words, if $\mathcal{P}_{1-\epsilon}$ converges in probability to the true quantile, then $\mathcal{P}_{1-\epsilon}$ plus a nonnegative correction converges in probability to at least the true quantile.
Next, if $V_{t+1} \le \tilde{Q}_{t+1}$, then $Y_{t+1}^{\text{test}}$ is included in $\mathcal{M}_\epsilon(X_{t+1}^{\text{test}})$ (according to the definition of $\mathcal{M}_\epsilon$). Furthermore, by the definition of Quantile, the event that $V_{t+1} \le \text{Quantile}(1-\epsilon, F_{t+1})$ happens with probability at least $1-\epsilon$. Therefore, if $\tilde{Q}_{t+1} \ge \text{Quantile}(1-\epsilon, F_{t+1})$, then $Y_{t+1}^{\text{test}}$ is included in $\mathcal{M}_\epsilon(X_{t+1}^{\text{test}})$ with probability at least $1-\epsilon$. Combining with Eq. (11) completes our proof.
A.6 Proof of Proposition 4.7
Proof. Let $\hat{F}_{m_i}$ be the $m_i$-sample ECDF for $T_i$. Define the empirical correction, $\hat{\Lambda}(\beta, \mathcal{I}_{\text{cal}})$, obtained when plugging in $\hat{F}_{m_i}$, as
$$\inf\Big\{\lambda : \frac{1}{|\mathcal{I}_{\text{cal}}|+1} \sum_{i \in \mathcal{I}_{\text{cal}}} \hat{F}_{m_i}\big(V_{i,k+1}^{\text{test}} \le \hat{Q}_i + \lambda\big) \ge \beta\Big\} \quad (12)$$
where the ECDF is calculated as
$$\hat{F}_{m_i} := \frac{1}{m_i} \sum_{j=1}^{m_i} \mathbf{1}\big\{V_{i,k+1}^{(j)} \le \hat{Q}_i + \lambda\big\},$$
where the $V_{i,k+1}^{(j)}$ are i.i.d. and $V_{i,k+1}^{(j)} \stackrel{d}{=} V_{i,k+1}^{\text{test}}$. We proceed in two parts. First, we prove that if the approximation error incurred by using $\hat{\Lambda}(\beta, \mathcal{I}_{\text{cal}})$ is bounded by $\tau$ with probability $1-\delta$, then $\mathcal{M}_{\epsilon-\tau}$ is $(\delta, \epsilon)$-valid. Second, we prove that the error is bounded according to Eq. (10).
(1) Following the proof of Lemma 4.2, we have that
$$P(V_{t+1} \le \hat{Q}_{t+1} + \hat{\Lambda}(\beta; \mathcal{I}_{\text{cal}}) \mid E_t) \ge \frac{1}{|\mathcal{I}_{\text{cal}}|+1} \sum_{i \in \mathcal{I}_{\text{cal}}} F_i\big(\hat{Q}_i + \hat{\Lambda}(\beta; \mathcal{I}_{\text{cal}})\big). \quad (13)$$
For ease of notation, let
$$A := \frac{1}{|\mathcal{I}_{\text{cal}}|+1} \sum_{i \in \mathcal{I}_{\text{cal}}} \hat{F}_{m_i}\big(\hat{Q}_i + \hat{\Lambda}(\beta; \mathcal{I}_{\text{cal}})\big), \qquad B := \frac{1}{|\mathcal{I}_{\text{cal}}|+1} \sum_{i \in \mathcal{I}_{\text{cal}}} F_i\big(\hat{Q}_i + \hat{\Lambda}(\beta; \mathcal{I}_{\text{cal}})\big).$$
Next, assume that (to be proved) for some τ > 0
$$P(A - B < \tau) \ge 1 - \delta. \quad (14)$$
By construction (see Eq. (12)), we have $A \ge \beta$. Then, by Eq. (14), we have that with probability $1-\delta$,
B > A − τ ≥ β − τ.
Choose $\beta \ge 1 - \epsilon + \tau$. Then $B \ge 1 - \epsilon$. By convention, this corresponds to $\beta := 1 - \tilde{\epsilon} \ge 1 - (\epsilon - \tau)$, or $\tilde{\epsilon} \le \epsilon - \tau$, as in Eq. (10). Combining this with Eq. (13), we have
$$P(V_{t+1} \le \hat{Q}_{t+1} + \hat{\Lambda}(\beta; \mathcal{I}_{\text{cal}}) \mid E_t) \ge 1 - \epsilon.$$
This is true for all E t , so we can marginalize to obtain
$$P(V_{t+1} \le \hat{Q}_{t+1} + \hat{\Lambda}(\beta; \mathcal{I}_{\text{cal}})) \ge 1 - \epsilon.$$
(2) We now prove the assumption stated in Eq. (14). Given an $m$-sample ECDF, $\hat{F}_m(u)$, for some random variable $U$, the Dvoretzky-Kiefer-Wolfowitz inequality allows us to build a confidence interval for the value of the true distribution function, $F(u)$, where
$$P\Big(\sup_{u \in \mathbb{R}} |\hat{F}_m(u) - F(u)| > \gamma\Big) \le 2 e^{-2 m \gamma^2}.$$
Alternatively stated, with probability at least $1-\alpha$, $F(u) \in [\hat{F}_m(u) - \gamma, \hat{F}_m(u) + \gamma]$, where $\gamma = \sqrt{\frac{\log(2/\alpha)}{2m}}$. We combine this result with Hoeffding's inequality.
Let $Y_i := \hat{F}_{m_i}(V_i \le \hat{Q}_i + \lambda) - F_i(V_i \le \hat{Q}_i + \lambda)$.
Once again for notational convenience, assume tasks T i , i ∈ I cal and T t+1 are indexed contiguously as i = 1, . . . , n + 1.
The difference, $A - B$, is then equivalent to $\frac{1}{n+1}\sum_{i=1}^{n} Y_i$. According to our assumptions, the $Y_i$ are i.i.d., $\mathbb{E}[Y_i] = \mathbb{E}[\hat{F}_{m_i}] - \mathbb{E}[F_i] = 0$, and $Y_i \in [-\gamma_i, \gamma_i]$ w.p. $1-\alpha$. As above, we define $\gamma_i = \sqrt{\frac{\log(2/\alpha)}{2 m_i}}$.
Applying Hoeffding's inequality gives
$$P\Big(\frac{1}{n+1}\sum_{i=1}^{n} Y_i < \tau\Big) \ge P\Big(\sum_{i=1}^{n} Y_i < n\tau,\; \bigcap_{i=1}^{n}\{Y_i \in [-\gamma_i, \gamma_i]\}\Big) \ge P\Big(\sum_{i=1}^{n} Y_i < n\tau \,\Big|\, \bigcap_{i=1}^{n}\{Y_i \in [-\gamma_i, \gamma_i]\}\Big)\, P\Big(\bigcap_{i=1}^{n}\{Y_i \in [-\gamma_i, \gamma_i]\}\Big) \ge \Big(1 - e^{-\frac{2 n^2 \tau^2}{\sum_{i=1}^{n} (2\gamma_i)^2}}\Big)\,(1-\alpha)^n.$$
Solving for τ given the target 1 − δ error probability yields
$$1 - \delta = \Big(1 - e^{-\frac{2 n^2 \tau^2}{\sum_{i=1}^{n} (2\gamma_i)^2}}\Big)(1-\alpha)^n \;\;\Longrightarrow\;\; \tau = \sqrt{-\frac{2}{n^2}\sum_{i=1}^{n} \gamma_i^2 \,\log\Big(1 - \frac{1-\delta}{(1-\alpha)^n}\Big)}.$$
This is valid for any choice of α (as long as the log term is defined), so we are free to choose α that minimizes τ .
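To make the final step concrete, the following sketch (ours, purely illustrative) evaluates the closed form for τ given the calibration sample sizes m_i, the confidence level δ, and a choice of α, and then performs a simple grid search over α to minimize τ.

```python
import numpy as np

def tau_for_alpha(m, delta, alpha):
    """Error bound tau from the DKW + Hoeffding argument, for a fixed alpha."""
    m = np.asarray(m, dtype=float)                 # calibration-task sample sizes m_i
    n = len(m)
    gamma_sq = np.log(2.0 / alpha) / (2.0 * m)     # gamma_i^2 from the DKW interval
    inner = 1.0 - (1.0 - delta) / (1.0 - alpha) ** n
    if inner <= 0.0:                               # log term undefined: alpha infeasible
        return np.inf
    return np.sqrt(-(2.0 / n ** 2) * gamma_sq.sum() * np.log(inner))

def best_tau(m, delta, grid=np.linspace(1e-6, 0.2, 2000)):
    """We are free to pick any feasible alpha, so take the one minimizing tau."""
    taus = [tau_for_alpha(m, delta, a) for a in grid]
    i = int(np.argmin(taus))
    return grid[i], taus[i]

print(best_tau(m=[500] * 20, delta=0.1))
```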
B Meta Conformal Prediction Details
B.1 Meta-Learning algorithms
Prototypical networks (Snell et al., 2017). We use prototypical networks for our classification tasks. We assume that for each task we have N total classes with K examples per class (for a total of k = N × K training examples). In this model, an encoder, h = enc(x; θ), is trained to produce vector representations. Thereafter, a "prototype" for each class is computed by averaging the representations of all instances of that class. Let S_j denote the support set of training examples for class j. Then the prototype c_j is
$$c_j := \frac{1}{|S_j|} \sum_{(x_i, y_i) \in S_j} \text{enc}(x_i; \theta).$$
The likelihood of each class is then calculated using a softmax over the Euclidean distance to each prototype:
$$p_\theta(y = j \mid x) := \frac{\exp(-d(c_j, \text{enc}(x; \theta)))}{\sum_{j'} \exp(-d(c_{j'}, \text{enc}(x; \theta)))}, \quad (15)$$
where $d(\cdot, \cdot)$ denotes the Euclidean distance.
During training, random training "episodes" are created by sampling N classes from the training set. For each class, K examples are randomly sampled to construct the prototypes. An additional Q examples are then sampled to simulate queries. The optimization objective is then to minimize the cross-entropy loss across queries.
After training, we use −p θ (y = j | x) as defined in Eq. (15) as the nonconformity measure for label y = j.
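A minimal sketch of this nonconformity computation (ours, in PyTorch; the random embeddings stand in for the CNN and sentence encoders described in B.2):

```python
import torch

def prototypes(support_embs, support_labels, num_classes):
    """Class prototypes: the mean encoded support example of each class."""
    return torch.stack([support_embs[support_labels == j].mean(dim=0)
                        for j in range(num_classes)])

def class_probs(query_emb, protos):
    """Softmax over negative Euclidean distances to each prototype (Eq. 15)."""
    dists = torch.norm(protos - query_emb, dim=-1)
    return torch.softmax(-dists, dim=-1)

def nonconformity(query_emb, protos, label):
    """After training, the score for candidate label y = j is -p(y = j | x)."""
    return -class_probs(query_emb, protos)[label]

# Tiny usage example: random 8-dimensional "encodings" of a 3-way, 2-shot task.
embs = torch.randn(6, 8)
labels = torch.tensor([0, 0, 1, 1, 2, 2])
protos = prototypes(embs, labels, num_classes=3)
print(nonconformity(torch.randn(8), protos, label=1))
```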
Differentiable ridge regression (Bertinetto et al., 2019). We use differentiable ridge regression networks for our regression tasks. We assume that for each task we have k labeled (x_i, y_i) pairs, where y ∈ R. In this model, like the prototypical networks, an encoder, h = enc(x; θ), is trained to produce vector representations of dimension d. We then solve a least-squares regression to obtain our prediction, $\hat{y} = w \cdot \text{enc}(x; \theta)$, where $w = X^\top (X X^\top + \lambda I)^{-1} Y$ with $X \in \mathbb{R}^{k \times d}$, $Y \in \mathbb{R}^{k}$, and λ a meta regularization parameter that we optimize. We optimize the MSE by backpropagating through the least-squares operator to the encoder. We train using the same episode-based procedure that we described for the prototypical networks.
After training, we use the absolute error, |ŷ − y|, as the nonconformity score for candidate y ∈ R.
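A sketch of the closed-form ridge head (our illustration, assuming PyTorch; the encoder that produces the feature matrices is omitted, and names are placeholders):

```python
import torch

def ridge_predict(support_H, support_y, query_H, lam):
    """Closed-form ridge weights w = H^T (H H^T + lam I)^(-1) y, then y_hat = H_query w.

    support_H: (k, d) encoded support inputs; support_y: (k,); query_H: (q, d).
    The solve is differentiable, so gradients flow back into the encoder and lam.
    """
    k = support_H.shape[0]
    gram = support_H @ support_H.T + lam * torch.eye(k)
    w = support_H.T @ torch.linalg.solve(gram, support_y)
    return query_H @ w

def abs_error_nonconformity(y_hat, y):
    """Absolute error used as the nonconformity score for a candidate y."""
    return (y_hat - y).abs()
```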
Deep sets (Zaheer et al., 2017). We use a simple deep sets architecture for all of our quantile predictors. Deep sets are of the form $f(X) := \text{dec}\big(\sum_{x \in X} \text{enc}(x; \phi_1); \phi_2\big)$, where X is an input set of elements, enc is an elementwise encoder, and dec is a decoder that operates on the aggregated encoded set elements. Importantly, the deep sets model f is invariant to permutations of the elements in X.
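A compact deep sets module of this form might look as follows (our sketch in PyTorch; the hidden size of 256 follows the implementation details in B.2, everything else is illustrative):

```python
import torch
import torch.nn as nn

class DeepSetQuantilePredictor(nn.Module):
    """f(X) = dec(sum_x enc(x; phi1); phi2): a permutation-invariant regressor."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x_set):
        # x_set: (batch, set_size, in_dim); summing over the set dimension makes
        # the output invariant to the ordering of its elements.
        pooled = self.enc(x_set).sum(dim=1)
        return self.dec(pooled).squeeze(-1)   # predicted 1 - eps quantile
```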
B.2 Implementation details
Image classification. Each image is first resized to 84 × 84 pixels. We use a CNN encoder with 4 layers. Each layer contains a 3 × 3 convolution kernel with 64 channels and a padding of size 1, followed by a batch normalization layer, a ReLU activation, and a 2 × 2 max pooling filter. The final output is of size 1600, which we use to compute the prototypes and as the query representations. We train the model for 100 epochs with an Adam optimizer and a batch size of 256. In each epoch, we run 100 episodes in which we sample 10 support images and 15 query images per class.
Relation classification. We use GloVe (Pennington et al., 2014) word embeddings of size 50 to convert the sentence into vectors. To each word embedding, we also concatenate two learned position embeddings of size 5, where the positions are relative to the location of the two entities in the sentence. Thereafter, a 1D convolution is applied with 230 output channels, a kernel size of 3, and a padding size of 1, followed by a ReLU activation. Finally, a max pooling filter is applied. The resulting sentence representation of size 230 is used to compute the prototypes and query representations. We train the model for a total of 20k episodes with an SGD optimizer and a batch size of 32. In each episode, we sample 10 support sentences and 5 query sentences per class.
Chemical property prediction. Our ridge regression network uses directed message passing networks (Yang et al., 2019) to compute enc(x; θ). The message passing network uses graph convolutions to learn a deep molecular representation that is shared across property predictions. We also include additional RDKit features as inputs (www.rdkit.org). We map inputs with a FFNN with hidden size 200, and then apply 3 layers of graph convolutions with a hidden size of 256. Finally, we map the output representation to a hidden size of 16, and apply least-squares regression. We train the network using an Adam optimizer for 15 epochs with 8 meta episodes per batch, each with 32 queries (for a total batch size of 256).
Quantile prediction. For all of our quantile predictors, we use a 2-layer FFNN for both the element-wise encoder, enc(·; φ 1 ), and the aggregated set decoder, dec(·; φ 2 ). Each FFNN has a hidden size of 256 and uses ReLU activations. We train the network using an Adam optimizer for 15 epochs with batch size 64.
B.3 Training strategy
We adopt a cross-fold procedure for training our meta nonconformity measure S and meta quantile predictor P_{1−ϵ} in a data-efficient way, as outlined in §4.2. Figure B.1 illustrates this cross-fold process, in which we train a meta-nonconformity measure on each training fold and aggregate their predictions as input data for the quantile predictor.
Since we train in a cross-fold manner but ultimately use a meta-nonconformity measure S that is trained on all of the training data, there is a train-test mismatch in the data supplied to the quantile predictor. Nevertheless, any error induced by this discrepancy (and any other sources of error, for that matter) is handled during meta-calibration ( §4.3).
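The cross-fold construction of training data for P_{1−ϵ} can be sketched as follows (our illustration; the two callables are placeholders for the model-specific training and scoring code described above, not the released implementation):

```python
from sklearn.model_selection import KFold

def build_quantile_training_data(train_tasks, train_nonconformity, score_task, k_folds=5):
    """Train S on k_f - 1 folds, score the held-out fold, and aggregate for P_{1-eps}.

    train_nonconformity: callable mapping a list of tasks to a trained measure S.
    score_task: callable mapping (S, task) to that task's nonconformity scores paired
                with its k support examples (the inputs the quantile predictor sees).
    """
    examples = []
    task_list = list(train_tasks)
    splitter = KFold(n_splits=k_folds, shuffle=True, random_state=0)
    for train_idx, heldout_idx in splitter.split(task_list):
        S = train_nonconformity([task_list[i] for i in train_idx])
        examples.extend(score_task(S, task_list[i]) for i in heldout_idx)
    return examples
```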
All experiments took 1-5 hours to run on an Nvidia 2080 Ti GPU. As absolute performance is not the primary goal of this work, little hyperparameter tuning was done (most hyperparameters were taken from prior work). Datasets are available for miniImageNet 7 , FewRel 1.0 8 , and ChEMBL 9 .
$$V_j^{(x,y)} := S(Z_j, Z_{1:n} \cup \{(x, y)\}), \qquad V_{n+1}^{(x,y)} := S((x, y), Z_{1:n} \cup \{(x, y)\}).$$
Figure 2. An example of using a prototypical network.
splits. Within each partition, each assay's pChEMBL values are treated as a regression task. We use k = 16 training samples per task. Our meta nonconformity measure consists of a few-shot, closed-form ridge regressor (Bertinetto et al., 2019) on top of a directed Message Passing Network molecular encoder (Yang et al., 2019).
Figure 4. Few-shot CP results as a function of ϵ. The size of the prediction set of our meta CP approach is significantly better (i.e., smaller) than that of our full CP baseline. Furthermore, our meta CP approach's average accuracy level is close to the diagonal, allowing it to remain valid in the sense of Eq. (4), but also less conservative when making predictions. Note that we care more about the right-hand-side behavior of the above graphs (i.e., larger 1 − ϵ), as they correspond to higher coverage guarantees.
Figure B.1. An illustration of our strategy for learning meta nonconformity measures, S, and meta quantile predictors, P_{1−ϵ}. As P_{1−ϵ} is trained on the outputs of S, we adopt a cross-fold procedure where we first train S on a fraction of the data, and evaluate nonconformity scores on the held-out fold. We repeat this process for all k_f folds, and then aggregate them all for training P_{1−ϵ}.
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA. Correspondence to: Adam Fisch <fisch@csail.mit.edu>.
Split conformal prediction also allows for simple nonconformity score calculations for regression tasks. For example, assume that a training set has been used to train a fixed regression model, f θ (x). The absolute error nonconformity measure, |y − f θ (x)|, can then be easily evaluated for all y ∈ R. Furthermore, as the absolute error monotonically increases away from f θ (x), the conformal prediction C simplifies to a closed-form interval.
For ease of notation, we write X test t+1 to denote the (k + 1)th example of task Tt+1, i.e., the new test point after observing k training points. This is equivalent to test point Xn+1 from §3.
We only use training/validation splits (the test set is hidden). We apply RRCM (Nouretdinov et al., 2001) for full CP.
7 https://github.com/yaoyao-liu/mini-imagenet-tools
8 https://thunlp.github.io/1/fewrel1.html
9 https://github.com/chemprop/chemprop
Acknowledgements
We thank Kyle Swanson, the MIT NLP group, and anonymous reviewers for valuable feedback. AF is supported in part by the NSF GRFP. TS is supported in part by DSO grant DSOCL18002. This work is also supported in part by MLPDS and the DARPA AMD project.
References
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
Angelopoulos, A. N., Bates, S., Malik, J., and Jordan, M. I. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations (ICLR), 2021.
Ashukha, A., Lyzhov, A., Molchanov, D., and Vetrov, D. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. In International Conference on Learning Representations (ICLR), 2020.
Bao, Y., Wu, M., Chang, S., and Barzilay, R. Few-shot text classification with distributional signatures. In International Conference on Learning Representations (ICLR), 2020.
Bates, S., Angelopoulos, A. N., Lei, L., Malik, J., and Jordan, M. I. Distribution free, risk controlling prediction sets. arXiv preprint arXiv:2101.02703, 2020.
Bertinetto, L., Henriques, J. F., Torr, P., and Vedaldi, A. Meta-learning with differentiable closed-form solvers. In International Conference on Learning Representations (ICLR), 2019.
Bottou, L. and Bousquet, O. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems (NeurIPS), 2008.
Carlsson, L., Ahlberg, E., Boström, H., Johansson, U., and Linusson, H. Modifications to p-values of conformal predictors. In Statistical Learning and Data Sciences, 2015.
Cauchois, M., Gupta, S., and Duchi, J. Knowing what you know: valid confidence sets in multiclass and multilabel prediction. arXiv preprint arXiv:2004.10181, 2020.
Chernozhukov, V., Wuthrich, K., and Zhu, Y. Distributional conformal prediction. arXiv preprint arXiv:1909.07889, 2019.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
Edwards, H. and Storkey, A. Towards a neural statistician. In International Conference on Learning Representations (ICLR), 2017.
Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML), 2017.
Fisch, A., Schuster, T., Jaakkola, T., and Barzilay, R. Efficient conformal prediction via cascaded inference with expanded admission. In International Conference on Learning Representations (ICLR), 2021.
Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning (ICML), 2016.
Graves, A. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2011.
Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On calibration of modern neural networks. In International Conference on Machine Learning (ICML), 2017.
Han, X., Zhu, H., Yu, P., Wang, Z., Yao, Y., Liu, Z., and Sun, M. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
Hernández-Lobato, J. M. and Adams, R. P. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning (ICML), 2015.
Hirschfeld, L., Swanson, K., Yang, K., Barzilay, R., and Coley, C. W. Uncertainty quantification using neural networks for molecular property prediction. arXiv preprint arXiv:2005.10036, 2020.
Jiang, H., Kim, B., Guan, M., and Gupta, M. To trust or not to trust a classifier. In Advances in Neural Information Processing Systems (NeurIPS), pp. 5541-5552, 2018.
Jiang, X., Osl, M., Kim, J., and Ohno-Machado, L. Calibrating predictive model estimates to support personalized medicine. Journal of the American Medical Informatics Association, 19(2):263-274, 2012.
Johansson, U., Ahlberg, E., Boström, H., Carlsson, L., Linusson, H., and Sönströd, C. Handling small calibration sets in mondrian inductive conformal regressors. In Statistical Learning and Data Sciences, 2015.
Kivaranovic, D., Johnson, K. D., and Leeb, H. Adaptive, distribution-free prediction intervals for deep networks. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
Lee, K., Lee, H., Lee, K., and Shin, J. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In International Conference on Learning Representations (ICLR), 2018.
Lei, J., G'Sell, M., Rinaldo, A., Tibshirani, R. J., and Wasserman, L. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094-1111, 2018.
Linusson, H., Johansson, U., Boström, H., and Löfström, T. Efficiency comparison of unstable transductive and inductive conformal classifiers. In Artificial Intelligence Applications and Innovations, 2014.
Mayr, A., Klambauer, G., Unterthiner, T., Steijaert, M., Wegner, J., Ceulemans, H., Clevert, D.-A., and Hochreiter, S. Large-scale comparison of machine learning methods for drug target prediction on ChEMBL. Chemical Science, 9, 2018. doi: 10.1039/C8SC00148K.
Neal, R. M. Bayesian Learning for Neural Networks. Springer-Verlag, 1996. ISBN 0387947248.
Niculescu-Mizil, A. and Caruana, R. Predicting good probabilities with supervised learning. In International Conference on Machine Learning (ICML), 2005.
Nouretdinov, I., Melluish, T., and Vovk, V. Ridge regression confidence machine. In International Conference on Machine Learning (ICML), 2001.
Papadopoulos, H. Inductive conformal prediction: Theory and application to neural networks. In Tools in Artificial Intelligence, chapter 18. IntechOpen, Rijeka, 2008.
Pennington, J., Socher, R., and Manning, C. GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.
Romano, Y., Patterson, E., and Candes, E. Conformalized quantile regression. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Romano, Y., Barber, R. F., Sabatti, C., and Candès, E. With malice toward none: Assessing uncertainty via equalized coverage. Harvard Data Science Review, 2020.
Shafer, G. and Vovk, V. A tutorial on conformal prediction. Journal of Machine Learning Research (JMLR), 9:371-421, 2008.
Snell, J., Swersky, K., and Zemel, R. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
Tibshirani, R. J., Foygel Barber, R., Candes, E., and Ramdas, A. Conformal prediction under covariate shift. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., and Wierstra, D. Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NeurIPS), 2016.
Vovk, V. Conditional validity of inductive conformal predictors. In Proceedings of the Asian Conference on Machine Learning, 2012.
Vovk, V., Gammerman, A., and Shafer, G. Algorithmic Learning in a Random World. Springer-Verlag, Berlin, Heidelberg, 2005.
Wang, Y., Yao, Q., Kwok, J. T., and Ni, L. M. Generalizing from a few examples: A survey on few-shot learning. Volume 53, Association for Computing Machinery, 2020.
Yang, K., Swanson, K., Jin, W., Coley, C., Eiden, P., Gao, H., Guzman-Perez, A., Hopper, T., Kelley, B., Mathea, M., Palmer, A., Settels, V., Jaakkola, T., Jensen, K., and Barzilay, R. Analyzing learned molecular representations for property prediction. Journal of Chemical Information and Modeling, 59(8):3370-3388, 2019.
Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. Deep sets. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
| [
"https://github.com/ajfisch/few-shot-cp.",
"https://github.com/yaoyao-liu/",
"https://github.com/chemprop/chemprop"
] |
[
"SUBJQA: A Dataset for Subjectivity and Review Comprehension",
"SUBJQA: A Dataset for Subjectivity and Review Comprehension"
] | [
"Johannes Bjerva \nDepartment of Computer Science\nUniversity of Copenhagen\n\n",
"Nikita Bhutani \nMegagon Labs\nMountain View\n",
"Behzad Golshan \nMegagon Labs\nMountain View\n",
"Wang-Chiew Tan wangchiew@megagon.ai ",
"Isabelle Augenstein augenstein@di.ku.dk \nDepartment of Computer Science\nUniversity of Copenhagen\n\n\nMegagon Labs\nMountain View\n"
] | [
"Department of Computer Science\nUniversity of Copenhagen\n",
"Megagon Labs\nMountain View",
"Megagon Labs\nMountain View",
"Department of Computer Science\nUniversity of Copenhagen\n",
"Megagon Labs\nMountain View"
] | [] | Subjectivity is the expression of internal opinions or beliefs which cannot be objectively observed or verified, and has been shown to be important for sentiment analysis and wordsense disambiguation. Furthermore, subjectivity is an important aspect of user-generated data. In spite of this, subjectivity has not been investigated in contexts where such data is widespread, such as in question answering (QA). We therefore investigate the relationship between subjectivity and QA, while developing a new dataset. We compare and contrast with analyses from previous work, and verify that findings regarding subjectivity still hold when using recently developed NLP architectures. We find that subjectivity is also an important feature in the case of QA, albeit with more intricate interactions between subjectivity and QA performance. For instance, a subjective question may or may not be associated with a subjective answer. We release an English QA dataset (SUBJQA) based on customer reviews, containing subjectivity annotations for questions and answer spans across 6 distinct domains. | 10.18653/v1/2020.emnlp-main.442 | [
"https://arxiv.org/pdf/2004.14283v1.pdf"
] | 216,642,239 | 2004.14283 | bd7c269bd90658f7c48ace403adf04eeadae2e05 |
SUBJQA: A Dataset for Subjectivity and Review Comprehension
Johannes Bjerva
Department of Computer Science
University of Copenhagen
Nikita Bhutani
Megagon Labs
Mountain View
Behzad Golshan
Megagon Labs
Mountain View
Wang-Chiew Tan wangchiew@megagon.ai
Isabelle Augenstein augenstein@di.ku.dk
Department of Computer Science
University of Copenhagen
Megagon Labs
Mountain View
SUBJQA: A Dataset for Subjectivity and Review Comprehension
Subjectivity is the expression of internal opinions or beliefs which cannot be objectively observed or verified, and has been shown to be important for sentiment analysis and wordsense disambiguation. Furthermore, subjectivity is an important aspect of user-generated data. In spite of this, subjectivity has not been investigated in contexts where such data is widespread, such as in question answering (QA). We therefore investigate the relationship between subjectivity and QA, while developing a new dataset. We compare and contrast with analyses from previous work, and verify that findings regarding subjectivity still hold when using recently developed NLP architectures. We find that subjectivity is also an important feature in the case of QA, albeit with more intricate interactions between subjectivity and QA performance. For instance, a subjective question may or may not be associated with a subjective answer. We release an English QA dataset (SUBJQA) based on customer reviews, containing subjectivity annotations for questions and answer spans across 6 distinct domains.
Introduction
Subjectivity is ubiquitous in our use of language (Banfield, 1982; Quirk et al., 1985; Wiebe et al., 1999; Benamara et al., 2017), and is therefore an important aspect to consider in Natural Language Processing (NLP). For example, subjectivity can be associated with different senses of the same word. BOILING is objective in the context of hot water, but subjective in the context of a person boiling with anger (Wiebe and Mihalcea, 2006). The same applies to sentences in discourse contexts (Pang and Lee, 2004). While early work has shown subjectivity to be an important feature for low-level tasks such as word-sense disambiguation and sentiment analysis, subjectivity in NLP has not been explored in many contexts where it is prevalent.
* JB and NB contributed equally to this work.
In recent years, there is renewed interest in areas of NLP for which subjectivity is important, and a specific topic of interest is question answering (QA). This includes work on aspect extraction (Poria et al., 2016), opinion mining (Sun et al., 2017) and community question answering (Gupta et al., 2019). Many of these QA systems are based on representation learning architectures. However, it is unclear whether findings of previous work on subjectivity still apply to such architectures, including transformer-based language models (Devlin et al., 2018;Radford et al., 2019).
The interactions between QA and subjectivity are even more relevant today as users' natural search criteria have become more subjective. Their questions can often be answered by online customer reviews, which tend to be highly subjective as well. Although QA over customer reviews has gained traction recently with the availability of new datasets and architectures (Gupta et al., 2019; Grail and Perez, 2018; Fan et al., 2019; Xu et al., 2019b), these are agnostic with respect to how subjectivity is expressed in the questions and the reviews. Furthermore, the datasets are either too small (< 2000 questions) or have target-specific question types (e.g., yes-no). Consequently, most QA systems are only trained to find answers from factual data, such as Wikipedia articles and News (Rajpurkar et al., 2018; Reddy et al., 2019; Joshi et al., 2017; Trischler et al., 2017).
In this work, we investigate the relation between subjectivity and question answering (QA) in the context of customer reviews. As no such QA dataset exists, we construct a new dataset, SUBJQA. In order to capture subjectivity, our data collection method builds on recent developments in opinion extraction and matrix factorization, instead of relying on the linguistic similarity between the questions and the reviews (Gupta et al., 2019). SUBJQA includes over 10,000 English examples spanning 6 domains that cover both products and services. We find that a large percentage of the questions and, respectively, answers in SUBJQA are subjective: 73% of the questions and 74% of the answers in our dataset are subjective. Experiments show that existing QA systems trained to find factual answers struggle to understand subjective questions and reviews. For instance, fine-tuning BERT (Devlin et al., 2018), a state-of-the-art QA model, yields 92.9% F1 on SQuAD (Rajpurkar et al., 2016), but only achieves an average score of 74.1% F1 across the different domains of SUBJQA.
We develop a subjectivity-aware QA model by extending an existing model in a multi-task learning paradigm. The model is trained to predict the subjectivity label and answer span simultaneously, and does not require subjectivity labels at test time.
We found our QA model achieves 76.3% F1 on average over the different domains of SUBJQA.
Contributions
• We release a challenging QA dataset with subjectivity labels for questions and answers, spanning 6 domains;
• We investigate the relationship between subjectivity and a modern NLP task;
• We develop a subjectivity-aware QA model;
• We verify the findings of previous work on subjectivity, using recent NLP architectures;
Subjectivity
Written text, as an expression of language, contains information on several linguistic levels, many of which have been thoroughly explored in NLP. For instance, both the semantic content of text and the (surface) forms of words and sentences, as expressed through syntax and morphology, have been at the core of the field for decades. However, another level of information can be found when trying to observe or encode the so-called private states of the writer (Quirk et al., 1985). Examples of private states include the opinions and beliefs of a writer, and can concretely be said to not be available for verification or objective observation. It is this type of state which is referred to as subjectivity (Banfield, 1982; Banea et al., 2011).
Figure 1: Our data collection pipeline
Whereas subjectivity has been investigated in isolation, it can be argued that subjectivity is only meaningful given sufficient context. In spite of this, most previous work has focused on annotating words (Heise, 2001), word senses (Durkin and Manning, 1989;Wiebe and Mihalcea, 2006), or sentences (Pang and Lee, 2004), with the notable exception of Wiebe et al. (2005), who investigate subjectivity in phrases in the context of a text or conversation. The absence of work investigating broader contexts can perhaps be attributed to the relatively recent emergence of models in NLP which allow for contexts to be incorporated efficiently, e.g. via architectures based on transformers (Vaswani et al., 2017).
As subjectivity relies heavily on context, and we have access to methods which can encode such context, what then of access to data which encodes subjectivity? We argue that in order to fully investigate research questions dealing with subjectivity in contexts, a large-scale dataset is needed. We choose to frame this as a QA dataset, as it not only offers the potential to investigate interactions in a single contiguous document, but also allows interactions between contexts, where parts may be subjective and other parts may be objective. Concretely, one might seek to investigate the interactions between an objective question and a subjective answer.
Data Collection
We found two limitations of existing datasets and collection strategies that motivated us to create a new QA dataset to understand subjectivity in QA.
First, data collection methods (Gupta et al., 2019;Xu et al., 2019b) often rely on the linguistic similarity between the questions and the reviews (e.g. information retrieval). However, subjective questions may not always use the same words/phrases as the review. Consider the examples below. The answer span 'vegan dishes' is semantically similar to the question Q 1 . The answer to the more subjective question Q 2 has little linguistic similarity to the question.
Example 1 Q 1 : Is the restaurant vegan friendly? Review: ...many vegan dishes on its menu. Q 2 : Does the restaurant have a romantic vibe? Review: Amazing selection of wines, perfect for a date night.
Secondly, existing review-based datasets are small and not very diverse in terms of question topics and types (Xu et al., 2019a; Gupta et al., 2019). We, therefore, consider reviews about both products and services from 6 different domains, namely TripAdvisor, Restaurants, Movies, Books, Electronics, and Grocery. We use the data of Wang et al. (2010) for TripAdvisor, and Yelp 2 data for Restaurants. We use the subsets for which an open-source opinion extractor was available. We use the data of McAuley and Yang (2016) that contains reviews from product pages of Amazon.com spanning multiple categories. We target categories that had more opinion expressions than others, as determined by an opinion extractor. Figure 1 depicts our data collection pipeline, which builds upon recent developments in opinion extraction and matrix factorization. An opinion extractor is crucial to identify subjective or opinionated expressions, which other IR-based methods cannot. On the other hand, matrix factorization helps identify which of these expressions are related based on their co-occurrence in the review corpora, instead of their linguistic similarities. To the best of our knowledge, we are the first to explore such a method to construct a challenging subjective QA dataset.
Given a review corpus, we extract opinions about various aspects of the items being reviewed (Opinion Extraction). Consider the following review snippets and extractions.
Example 2 Review: ...character development was quite impressive. e_1: ‹'impressive', 'character development'›
2 https://www.yelp.com/dataset
Finally, we present each question-review pair to crowdworkers who highlight an answer span in the review. Additionally, they provide subjectivity scores for both the questions and the answer span.
Opinion Extraction
An opinion extractor processes all reviews and finds extractions ‹X,Y› where X represents an opinion expressed on aspect Y. Table 1 shows sample extractions from different domains. We use OpineDB , a state-of-the-art opinion extractor, for restaurants and hotels. For other domains where OpineDB was not available, we use the syntactic extraction patterns of Abbasi Moghaddam (2013).
Neighborhood Model Construction
We rely on matrix factorization to learn dense representations for items and extractions, and identify similar extractions. As depicted in Figure 2, we organize the extractions into a matrix M where each row i corresponds to an item being reviewed and each column j corresponds to an extraction. The value M_ij denotes the frequency of extraction e_j in reviews of item x_i. Given M and a latent feature model F, we obtain extraction embeddings using non-negative matrix factorization. Concretely, each value M_ij is obtained from the dot product of two embeddings of size K_F:
$$M^F_{ij} = \sum_{k}^{K_F} x_{i,k}\, e_{j,k}. \quad (1)$$
For each extraction, we find its neighbors based on the cosine similarity of their embeddings. 3
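A minimal sketch of this factorization and neighbor lookup (our illustration, using scikit-learn; the 20-dimensional embedding and top-10 neighbors mirror the appendix description, while the count matrix M and other names are placeholders):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

def extraction_neighbors(M, n_components=20, top_k=10):
    """M[i, j]: frequency of extraction j in reviews of item i.

    Non-negative factorization M ~= X @ E yields item embeddings (rows of X) and
    extraction embeddings (columns of E); neighbors are ranked by cosine similarity
    between extraction embeddings.
    """
    model = NMF(n_components=n_components, init="nndsvda", random_state=0, max_iter=500)
    X = model.fit_transform(M)             # (num_items, n_components)
    E = model.components_.T                # (num_extractions, n_components)
    sims = cosine_similarity(E)
    np.fill_diagonal(sims, -1.0)           # exclude self-matches
    return np.argsort(-sims, axis=1)[:, :top_k], sims
```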
Topic and Review Selection
We next identify a subset of extractions to be used as topics for the questions. In order to maximize the diversity and difficulty in the dataset, we use the following criteria, developed iteratively based on manual inspection followed by user experiments.
1. Cosine Similarity: We prune neighbors of an extraction which have low cosine similarity (< 0.8). Irrelevant neighbors can lead to noisy topic-review pairs which would be marked non-answerable by the annotators.
2. Semantic Similarity: We prune neighbors that are linguistically similar (> 0.975 similarity 4 ) as they yield easy topic-review pairs.
3. Diversity: To promote diversity in topics and reviews, we select extractions which have many (> 5) neighbors.
4. Frequency: To ensure selected topics are also popular, we select a topic if: a) its frequency is higher than the median frequency of all extractions, and b) it has at least one neighbor that is more frequent than the topic itself.
We pair each topic with reviews that mention one of its neighbors. The key benefit of a factorization-based method is that it is not only based on linguistic similarity, and forces a QA system to understand subjectivity in questions and reviews.
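For illustration, the selection criteria above can be expressed as a simple filter (our sketch; the numeric thresholds are the ones stated in the criteria, while the data structures and names are hypothetical):

```python
import statistics

def select_topics(extractions, neighbors, cosine_sim, semantic_sim, freq):
    """Apply criteria 1-4: prune weak or near-duplicate neighbors, keep diverse, popular topics."""
    median_freq = statistics.median(freq.values())
    topics = {}
    for e in extractions:
        kept = [n for n in neighbors[e]
                if cosine_sim[(e, n)] >= 0.8            # 1. relevant neighbor
                and semantic_sim[(e, n)] <= 0.975]      # 2. not a trivial paraphrase
        if (len(kept) > 5                                # 3. diversity
                and freq[e] > median_freq                # 4a. popular topic
                and any(freq[n] > freq[e] for n in kept)):  # 4b. a more frequent neighbor
            topics[e] = kept
    return topics
```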
3 Details about hyper-parameters are included in the Appendix.
4 Using GloVe embeddings provided by spaCy.
Question Generation
Each selected topic is presented to a human annotator together with a review that mentions that topic. We ask the annotator to write a question about the topic that can be answered by the review. For example, ‹'good', 'writing'› could be translated to "Is the writing any good?" or "How is the writing?".
Answer-Span and Subjectivity Labeling
Lastly, we present each question and its corresponding review to human annotators (crowdworkers), who provide a subjectivity score for the question on a 1 to 5 scale based on whether it seeks an opinion (e.g., "How good is this book?") or factual information (e.g., "Is this a hard-cover?"). Additionally, we ask them to highlight the shortest answer span in the review or mark the question as unanswerable. They also provide subjectivity scores for the answer spans. We provide details of our neighborhood model construction and crowdsourcing experiments in the Appendix.
Dataset Analysis
In this section, we analyze the questions and answers to understand the properties of our SUBJQA dataset. We present the dataset statistics in Section 4.1. We then analyze the diversity and difficulty of the questions. We also discuss the distributions of subjectivity and answerability in our dataset. Additionally, we manually inspect 100 randomly chosen questions from the development set in Section 4.3 to understand the challenges posed by subjectivity of the questions and/or the answers.
Data Statistics
Difficulty and Diversity of Questions
As can be seen in Table 3, reviews in different domains tend to vary in length. Answer spans tend to be 6-7 tokens long, compared to 2-3 tokens in SQuAD. Furthermore, the average linguistic similarity of the questions and the answer spans was low: 0.7705, computed based on word2vec. These characteristics of SUBJQA contribute to making it an interesting and challenging QA dataset. Table 4 shows the number of distinct questions and topics in each domain. On average we collected 1500 questions covering 225 aspects. We also automatically categorize the boolean questions based on a lexicon of question prefixes. Unlike other review-based QA datasets (Gupta et al., 2019), SUBJQA contains more diverse questions, the majority of which are not yes/no questions. The questions are also linguistically varied, as indicated by the trigram prefixes of the questions (Figure 3). Most of the frequent trigram prefixes in SUBJQA (e.g., how is the, how was the, how do you) are almost missing in SQuAD and Gupta et al. (2019). The diversity of questions in SUBJQA demonstrates challenges unique to the dataset.
Data Quality Assessment
We randomly sample 100 answerable questions to manually categorize them according to their reasoning types. Table 5 shows the distribution of the reasoning types and representative examples. As expected, since a large fraction of the questions are subjective, they cannot be simply answered using a keyword search over the reviews or by paraphrasing the input question. Answering such questions requires a much deeper understanding of the reviews. Since the labels are crowdsourced, a small fraction of the answer spans are noisy.
Figure 3: The distribution of prefixes of questions. The outermost ring shows unigram prefixes (e.g., 57.9% of questions start with how). The middle and innermost rings correspond to bigrams and trigrams, respectively.
We also categorized the answers based on answer-types. We observed that 64% of the answer spans were independent clauses (e.g., the staff was very helpful and friendly), 25% were noun phrases (e.g., great bed) and 11% were incomplete clauses/spans (e.g., so much action). This supports our argument that often subjective questions cannot be answered simply by an adjective or noun phrase.
Answerability and Subjectivity
The dataset construction relies on a neighborhood model generated automatically using factorization. It captures co-occurrence signals instead of linguistic signals. Consequently, the dataset generated is not guaranteed to only contain answerable questions. As expected, about 65% of the questions in the dataset are answerable from the reviews (see Table 7). However, unlike Gupta et al. (2019), we do not predict answerability using a classifier. The answerability labels are provided by the crowdworkers instead, and are therefore more reliable. Table 7 shows the subjectivity distribution in questions and answer spans across different domains. A vast majority of the questions we collected are subjective, which is not surprising since we selected topics from opinion extractions. A large fraction of the subjective questions (∼70%) were also answerable from their reviews. We also compare the subjectivity of questions with the subjectivity of answers. As can be seen in Table 6, the subjectivity of an answer is strongly correlated with the subjectivity of the question. Subjective questions often have answers that are also subjective. Similarly, factual questions, with few exceptions, have factual answers. This indicates that a QA system must understand how subjectivity is expressed in a question to correctly find its answer. Most domains have 75% subjective questions on average. However, the BERT-QA model fine-tuned on each domain achieves 80% F1 on subjective questions in movies and books, but only achieves 67-73% F1 on subjective questions in grocery and electronics. Future QA systems for user-generated content, such as for customer support, should therefore model subjectivity explicitly.
Subjectivity Modeling
We now turn to experiments on subjectivity, first investigating claims made by previous work, and whether they still hold when using recently developed architectures, before investigating how to model subjectivity in QA.
Subjectivity in Sentiment Analysis
Pang and Lee (2004) have shown that subjectivity is an important feature for sentiment analysis. Sorting sentences by their estimated subjectivity scores, and only using the top n such sentences, allows for a more efficient and better-performing sentiment analysis system than when considering both subjective and objective sentences equally. We first investigate whether the same findings hold true when subjectivity is estimated using transformer-based architectures. Our setup is based on a pre-trained BERT-base uncased model. Following the approach of Devlin et al. (2018), we take the final hidden state corresponding to the special [CLS] token of an input sequence as its representation. We then predict the subjectivity of the sentence by passing its representation through a feed-forward neural network, optimized with SGD. We compare this with using subjectivity scores from TextBlob, a sentiment lexicon-based method, as a baseline. We consider sentences with a high TextBlob subjectivity score (> 0.5) as subjective.
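A sketch of such a classifier (our illustration, assuming the Hugging Face transformers library; the model name and the size of the feed-forward head are illustrative rather than the exact configuration used here):

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SubjectivityClassifier(nn.Module):
    """Binary subjective/objective classifier over the [CLS] representation."""
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Sequential(nn.Linear(self.encoder.config.hidden_size, 256),
                                  nn.ReLU(), nn.Linear(256, 2))

    def forward(self, sentences):
        batch = self.tokenizer(sentences, padding=True, truncation=True,
                               return_tensors="pt")
        cls = self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] token state
        return self.head(cls)  # logits; trained with cross-entropy and SGD
```

For comparison, the lexicon-based baseline score can be obtained with TextBlob's `TextBlob(text).sentiment.subjectivity`, thresholded at 0.5.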
We evaluate the methods on subjectivity data from Pang and Lee (2004) 7 and the subjectivity labels made available in our dataset (SUBJQA). Unsurprisingly, a contextually-aware classifier vastly outperforms a word-based classifier, highlighting the importance of context in subjectivity analysis (see Table 8). Furthermore, predicting subjectivity in SUBJQA is more challenging than in IMDB, because SUBJQA spans multiple domains.
We further investigate if our subjectivity classifier helps with the sentiment analysis task. We implement a sentiment analysis classifier which takes the special [CLS] token of an input sequence as the representation. We train this classifier by replicating the conditions described in Pang and Lee (2004). As shown in Figure 4, giving a contextually-aware subjectivity classifier access to the top N subjective sentences improves the performance on sentiment analysis, outperforming both a baseline that uses all sentences and one that uses the top N objective sentences.
Subjectivity-Aware QA Model
Given the importance of subjectivity in other NLP tasks, we investigate whether it is also an important feature for QA, using SUBJQA. We approach this by implementing a subjectivity-aware QA model, as an extension of one of our baseline models in a multi-task learning (MTL) paradigm (Caruana, 1997). One advantage of using MTL is that we do not need to have access to subjectivity labels at test time, as would be the case if we required subjectivity labels as a feature for each answer span. We base our model on FastQA (Weissenborn et al., 2017). Each input paragraph is encoded with a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) over a sequence of word embeddings and contextual features (X). This encoding, H', is passed through a hidden layer and a non-linearity:
$$H' = \text{Bi-LSTM}(X) \quad (2)$$
$$H = \tanh(B H') \quad (3)$$
We extend this implementation by adding two hidden layers of task-specific parameters (W_n) associated with a second learning objective:
$$S' = \text{ReLU}(W_1 H) \quad (4)$$
$$S = \text{softmax}(W_2 S') \quad (5)$$
In training, we randomly sample between the two tasks (QA and Subjectivity classification).
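A sketch of this multi-task extension (our illustration in PyTorch; the dimensions, the span-prediction head, and the pooling over tokens before the subjectivity head are simplified placeholders, not the exact released model):

```python
import torch
import torch.nn as nn

class SubjectivityAwareQA(nn.Module):
    """Shared Bi-LSTM encoder with a QA span head and a subjectivity head (Eqs. 2-5)."""
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.B = nn.Linear(2 * hidden, hidden)          # Eq. (3)
        self.span_head = nn.Linear(hidden, 2)           # per-token start/end logits
        self.W1 = nn.Linear(hidden, hidden)             # Eq. (4)
        self.W2 = nn.Linear(hidden, 2)                  # Eq. (5): subjective vs. factual

    def forward(self, X):
        H_prime, _ = self.encoder(X)                    # Eq. (2)
        H = torch.tanh(self.B(H_prime))
        span_logits = self.span_head(H)                 # QA objective
        pooled = H.mean(dim=1)                          # placeholder pooling over tokens
        subj_logits = self.W2(torch.relu(self.W1(pooled)))
        return span_logits, subj_logits
```

During training, one of the two objectives (the span loss or a cross-entropy loss on the subjectivity logits) is sampled at random for each update, so subjectivity labels are only needed at training time.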
Baselines
We use four pre-trained models to investigate how their performances on SUBJQA compare with a factual dataset, SQuAD (Rajpurkar et al., 2016), created using Wikipedia. Specifically, we evaluate BiDAF (Seo et al., 2017), FastQA (Weissenborn et al., 2017), JackQA (Weissenborn et al., 2018), and BERT (Devlin et al., 2018), all pre-trained on SQuAD. Additionally, we fine-tune the models on each domain in SUBJQA. Figure 5 shows the F1 scores of the pre-trained models. We report the Exact match scores in Appendix A.1. Pre-trained models achieve substantially lower F1 scores on SUBJQA than on SQuAD. Figure 6 shows the absolute gains in F1 scores of models fine-tuned on specific domains, over the pre-trained model. After fine-tuning on each domain, the best model achieves an average F1 of 74.1% across the different domains, with a minimum of 63.3% and a maximum of 80.5% on any given domain. While fine-tuning significantly boosts the F1 scores in each domain, they are still lower than the F1 scores on the SQuAD dataset. We argue that this is because the models are agnostic about subjective expressions in questions and reviews. To validate our hypothesis, we compare the gain in F1 scores of the BERT model on subjective questions and factual questions. We find that the difference in F1 gains is as high as 23.4% between factual and subjective questions. F1 gains differ by as much as 23.0% for factual vs. subjective answers.
Subjectivity-Aware Modeling
After fine-tuning over each domain in the MTL setting, the subjectivity-aware model achieves an average F1 of 76.3% across the different domains, with a minimum of 58.8% and a maximum of 82.0% on any given domain. Results from the subjectivity-aware model are shown in Table 9. Under both the F1 and the Exact match metrics, incorporating subjectivity in the model as an auxiliary task boosts performance across all domains. Although there are gains also for subjective questions and answers, it is noteworthy that the highest gains can be found for factual questions and answers. This can be explained by the fact that existing techniques are already tuned for factual questions. Our MTL extension helps in identifying factual questions, which further improves the results. However, even if subjective questions are identified, the system is still not tuned to adequately deal with this input.
Related Work
We are witnessing an exponential rise in user-generated content. Much of this data contains subjective information, ranging from personal experiences to opinions about specific aspects of a product. This information is useful for supporting decision making in product purchases. However, subjectivity has largely been studied in the context of sentiment analysis (Hu and Liu, 2004) and opinion mining (Blair-Goldensohn et al., 2008), with a focus on text polarity. There is renewed interest in incorporating subjective opinion data into general data management systems (Li et al., 2019; Kobren et al., 2019) and providing an interface for querying subjective data. These systems employ trained components for extracting opinion data, labeling it, and even responding to user questions.
In this work, we revisit subjectivity in the context of review QA. McAuley and Yang (2016) and Yu et al. (2012) also use review data, leveraging question types and aspects to answer questions. However, no prior work has modeled subjectivity explicitly using end-to-end architectures.
Furthermore, none of the existing review-based QA datasets are targeted at understanding subjectivity. This can be attributed to how these datasets are constructed. Large-scale QA datasets, such as SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017), and CoQA (Reddy et al., 2019), are based on factual data. We are the first to attempt to create a review-based QA dataset for the purpose of understanding subjectivity.
A Appendices
A.1 Additional Experimental Results
Figure 7 shows the Exact match scores achieved by the pre-trained out-of-the-box models on various domains in SUBJQA. Figure 8 shows the gains in Exact match scores of the models fine-tuned on each domain in SUBJQA.
A.2 Neighborhood Model Construction
For constructing the matrix for factorization, we focus on frequently reviewed items and frequent extractions. In particular, we consider items which have more than 10,000 reviews and extractions that were expressed in more than 5,000 reviews. Once the matrix is constructed, we factorize it using a non-negative matrix factorization method, with 20 as the dimension of the extraction embedding vectors.
In the next step, we construct the neighborhood model by finding the top-10 neighbors for each extraction, based on the cosine similarity between the extraction and the neighbor. We further select topics from the extractions and prune the neighbors based on the criteria described earlier.
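The following is a small, self-contained sketch of these two steps under stated assumptions: a toy random matrix stands in for the real item-by-extraction counts, the factorization rank (20) and top-10 neighborhood follow the text, while the library choice and initialisation are ours.

```python
# Sketch of the neighborhood model construction: NMF embeddings of extractions,
# then top-10 neighbors per extraction by cosine similarity.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
M = rng.poisson(1.0, size=(60, 40)).astype(float)    # items x extractions (toy stand-in for real counts)

nmf = NMF(n_components=20, init="nndsvda", max_iter=500)   # embedding dimension 20, as in the text
item_factors = nmf.fit_transform(M)                         # shape: (n_items, 20)
extraction_embeddings = nmf.components_.T                   # shape: (n_extractions, 20)

sims = cosine_similarity(extraction_embeddings)              # pairwise cosine similarities
np.fill_diagonal(sims, -1.0)                                 # exclude self-matches
top10_neighbors = np.argsort(-sims, axis=1)[:, :10]          # 10 nearest neighbors per extraction
# Topic selection and the pruning criteria described above are not shown here.
```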
A.3 Crowdsourcing Details
Figure 9 illustrates the instructions that were shown to the crowdworkers for the question generation task. Figure 10 shows the interface for the answer-span collection and subjectivity labeling tasks. The workers assign subjectivity scores (1-5) to each question and the selected answer span. They can also indicate if a question cannot be answered from the given review.
Figure 2: Learning representations of extractions via non-negative matrix factorization. Each column j corresponds to an extraction. The value M_ij denotes the frequency of extraction e_j in reviews of item x_i. Given M and a latent feature model F, we obtain extraction embeddings using non-negative matrix factorization. Concretely, each value M_ij is obtained from the dot product of two extraction embeddings of size K_F.
Figure 4: Sentiment analysis accuracy using the top N subj. sentences (blue) and top N fact. sentences (orange dashed), compared to the all-sentences baseline (black).
Figure 5: F1 scores of pre-trained out-of-the-box models on different domains in SUBJQA.
Figure 6: Gain in F1 with models fine-tuned on different domains over the pre-trained model.
Figure 7: Exact match scores of pre-trained out-of-the-box models on different domains.
Figure 8: Gain in Exact match scores with models fine-tuned on different domains.
Figure 9: The instructions shown to crowdworkers for the question writing task.
Figure 10: The interface for the answer-span collection and subjectivity labeling tasks.
Table 1: Example extractions from different domains. Review: "3 stars for good power and good writing." → e_2: ⟨'good', 'writing'⟩
Table 2: No. of examples in each domain split.
Table 2 summarizes the number of examples we collected for different domains. To generate the train, development, and test splits, we partition the topics into training (80%), dev (10%), and test (10%) sets. We partition the questions and reviews based on the partitioning of the topics.
Table 3: Domain statistics. Len denotes n tokens.

Table 4: Diversity of questions and topics.
Domain        # questions   # aspects   % boolean Q
TripAdvisor   1411          171         16.13
Restaurants   1553          238         17.29
Movies        1556          228         15.56
Books         1517          231         16.90
Electronics   1535          314         14.94
Grocery       1333          163         14.78
Table 5: Types of reasoning required for the various domains.
Reasoning     Percent.   Example
Lexical       18%        Q: How small was the hotel bathroom?  R: ...Bathroom on the small side with older fixtures...
Paraphrase    28%        Q: How amazing was the end?  R: ...The ending was absolutely awesome, it makes the experience not so ...
Indirect      43%        Q: How was the plot of the movie?  R: ...simply because there's so much going on, so much action, so many complex ...
Insufficient  11%        Q: How do you like the episode?  R: For a show that I think was broadcast in HighDef, it seems impossible that the...

Table 6: Subjectivity distribution in SUBJQA.
           subj. Q   fact. Q
subj. A    79.8%     1.31%
fact. A    1.29%     17.58%

Table 7: Statistics on subjective Q, answerability, and subjective A per domain in SUBJQA.
Domain        % subj. Q   % answerable   % subj. A
TripAdvisor   74.49       83.20          75.20
Restaurants   76.11       65.72          76.29
Movies        74.41       62.09          74.59
Books         75.77       58.86          75.35
Electronics   69.80       65.37          69.98
Grocery       73.21       70.22          73.15

Table 8: Subjectivity prediction accuracies on IMDB data.

Table 9: MTL gains/losses over the fine-tuning condition (F1 and Exact match), across subj./fact. QA.
Pre-trained models achieve F1 scores as high as 92.9% on SQuAD. On the other hand, the best model achieves an average F1 of 30.5% across all domains and 36.5% F1 at best on any given domain in SUBJQA. The difference in performance can be attributed both to differences in domain (Wikipedia vs. customer reviews) and to how subjectivity is expressed across different domains.
Subjectivity is not restricted to written texts, although we focus on this modality here.
https://huggingface.co/transformers/
6 https://textblob.readthedocs.io/
7 http://www.cs.cornell.edu/people/pabo/movie-review-data/
https://github.com/uclnlp/jack
9 BERT-Large, Cased (Whole Word Masking)
Conclusion
In this paper, we investigate subjectivity in question answering by leveraging end-to-end architectures. We release SUBJQA, a question-answering corpus which contains subjectivity labels for both questions and answers. The dataset allows i) evaluation and development of architectures for subjective content, and ii) investigation of subjectivity and its interactions in broad and diverse contexts. We further implement a subjectivity-aware model and evaluate it, along with 4 strong baseline models. We hope this dataset opens new avenues for research on end-to-end architectures for querying subjective content, and for research into subjectivity in NLP in general.
Samaneh Abbasi Moghaddam. 2013. Aspect-based opinion mining in online reviews. Ph.D. thesis, Applied Sciences: School of Computing Science.
Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2011. Multilingual sentiment and subjectivity analysis. Multilingual Natural Language Processing, 6:1-19.
Ann Banfield. 1982. Unspeakable Sentences: The sentence representing non-reflective consciousness and the absence of the narrator. Routledge.
Farah Benamara, Maite Taboada, and Yannick Mathieu. 2017. Evaluative language beyond bags of words: Linguistic insights and computational applications. Computational Linguistics, 43(1):201-264.
Sasha Blair-Goldensohn, Kerry Hannan, Ryan McDonald, Tyler Neylon, George Reis, and Jeff Reynar. 2008. Building a sentiment summarizer for local service reviews.
Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.
Kevin Durkin and Jocelyn Manning. 1989. Polysemy and the subjective lexicon: Semantic relatedness and the salience of intraword senses. Journal of Psycholinguistic Research, 18(6):577-612.
Miao Fan, Chao Feng, Mingming Sun, Ping Li, and Haifeng Wang. 2019. Reading customer reviews to answer product-related questions. In Proceedings of the 2019 SIAM International Conference on Data Mining (SDM 2019), Calgary, Alberta, Canada, May 2-4, 2019, pages 567-575.
Quentin Grail and Julien Perez. 2018. ReviewQA: a relational aspect-based opinion reading dataset. CoRR, abs/1810.12196.
Mansi Gupta, Nitish Kulkarni, Raghuveer Chanda, Anirudha Rayasam, and Zachary C. Lipton. 2019. AmazonQA: A review-based question answering task. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), Macao, China, August 10-16, 2019, pages 4996-5002.
David R. Heise. 2001. Project Magellan: Collecting cross-cultural affective meanings via the internet. Electronic Journal of Sociology.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168-177. ACM.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, Canada, July 30 - August 4, 2017, pages 1601-1611.
Ari Kobren, Pablo Bario, Oksana Yakhnenko, Johann Hibschman, and Ian Langmore. 2019. Constructing high precision knowledge bases with subjective and factual attributes. arXiv preprint arXiv:1905.12807.
Yuliang Li, Aaron Feng, Jinfeng Li, Saran Mumick, Alon Y. Halevy, Vivian Li, and Wang-Chiew Tan. 2019. Subjective databases. PVLDB, 12(11):1330-1343.
Julian J. McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proceedings of the 25th International Conference on World Wide Web (WWW 2016), Montreal, Canada, April 11-15, 2016, pages 625-635.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 21-26 July 2004, Barcelona, Spain, pages 271-278.
Soujanya Poria, Erik Cambria, and Alexander F. Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems, 108:42-49.
R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, New York.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018, Volume 2: Short Papers), Melbourne, Australia, July 15-20, 2018, pages 784-789.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), Austin, Texas, USA, November 1-4, 2016, pages 2383-2392.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. TACL, 7:249-266.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Atlanta, Georgia, USA, June 9-14, 2013, pages 74-84.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations (ICLR 2017), Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Shiliang Sun, Chen Luo, and Junyu Chen. 2017. A review of natural language processing techniques for opinion mining systems. Information Fusion, 36:10-25.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP (Rep4NLP@ACL 2017), Vancouver, Canada, August 3, 2017, pages 191-200.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, July 25-28, 2010, pages 783-792.
Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bosnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, and Sebastian Riedel. 2018. Jack the Reader - A machine reading framework. In Proceedings of ACL 2018, System Demonstrations, Melbourne, Australia, July 15-20, 2018, pages 25-30.
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. FastQA: A simple and efficient neural architecture for question answering. CoRR, abs/1703.04816.
Janyce Wiebe, Rebecca F. Bruce, and Thomas P. O'Hara. 1999. Development and use of a gold-standard data set for subjectivity classifications. In 27th Annual Meeting of the Association for Computational Linguistics, University of Maryland, College Park, Maryland, USA, 20-26 June 1999.
Janyce Wiebe and Rada Mihalcea. 2006. Word sense and subjectivity. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia, 17-21 July 2006.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165-210.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019a. BERT post-training for review reading comprehension and aspect-based sentiment analysis. arXiv preprint arXiv:1904.02232.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019b. Review conversational reading comprehension. CoRR, abs/1902.00821.
Jianxing Yu, Zheng-Jun Zha, and Tat-Seng Chua. 2012. Answering opinion questions on products by exploiting hierarchical organization of consumer reviews. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), July 12-14, 2012, Jeju Island, Korea, pages 391-401.
| [
"https://github.com/uclnlp/jack"
] |
[
"Accuracy of the Uzbek stop words detection: a case study on \"School corpus\"",
"Accuracy of the Uzbek stop words detection: a case study on \"School corpus\""
] | [
"Khabibulla Madatov \nUrgench state university\n14, Kh. Alimdjan str\n\nUrgench city\n220100Uzbekistan\n",
"Shukurla Bekchanov \nUrgench state university\n14, Kh. Alimdjan str\n\nUrgench city\n220100Uzbekistan\n",
"Jernej Vičič \nResearch Centre of the Slovenian Academy of Sciences and Arts\nThe Fran Ramovš Institute\nNovi trg 21000Ljubljana, Slovenija\n\nFAMNIT\nUniversity of Primorska\nGlagoljaska 86000KoperSlovenia\n"
] | [
"Urgench state university\n14, Kh. Alimdjan str",
"Urgench city\n220100Uzbekistan",
"Urgench state university\n14, Kh. Alimdjan str",
"Urgench city\n220100Uzbekistan",
"Research Centre of the Slovenian Academy of Sciences and Arts\nThe Fran Ramovš Institute\nNovi trg 21000Ljubljana, Slovenija",
"FAMNIT\nUniversity of Primorska\nGlagoljaska 86000KoperSlovenia"
] | [] | Stop words are very important for information retrieval and text analysis investigation tasks of natural language processing. Current work presents a method to evaluate the quality of a list of stop words aimed at automatically creating techniques. Although the method proposed in this paper was tested on an automatically-generated list of stop words for the Uzbek language, it can be, with some modifications, applied to similar languages either from the same family or the ones that have an agglutinative nature. Since the Uzbek language belongs to the family of agglutinative languages, it can be explained that the automatic detection of stop words in the language is a more complex process than in inflected languages. Moreover, we integrated our previous work on stop words detection in the example of the "School corpus" by investigating how to automatically analyse the detection of stop words in Uzbek texts. This work is devoted to answering whether there is a good way of evaluating available stop words for Uzbek texts, or whether it is possible to determine what part of the Uzbek sentence contains the majority of the stop words by studying the numerical characteristics of the probability of unique words. The results show acceptable accuracy of the stop words lists. | 10.48550/arxiv.2209.07053 | [
"https://export.arxiv.org/pdf/2209.07053v1.pdf"
] | 252,280,528 | 2209.07053 | 9c3b3353e2ac529ee2443b394d66ad4e544bfd34 |
Accuracy of the Uzbek stop words detection: a case study on "School corpus"
Khabibulla Madatov
Urgench state university
14, Kh. Alimdjan str
Urgench city
220100Uzbekistan
Shukurla Bekchanov
Urgench state university
14, Kh. Alimdjan str
Urgench city
220100Uzbekistan
Jernej Vičič
Research Centre of the Slovenian Academy of Sciences and Arts
The Fran Ramovš Institute
Novi trg 21000Ljubljana, Slovenija
FAMNIT
University of Primorska
Glagoljaska 86000KoperSlovenia
Accuracy of the Uzbek stop words detection: a case study on "School corpus"
1 stop word detectionUzbek languageaccuracyagglutinative language
Stop words are very important for information retrieval and text analysis investigation tasks of natural language processing. Current work presents a method to evaluate the quality of a list of stop words aimed at automatically creating techniques. Although the method proposed in this paper was tested on an automatically-generated list of stop words for the Uzbek language, it can be, with some modifications, applied to similar languages either from the same family or the ones that have an agglutinative nature. Since the Uzbek language belongs to the family of agglutinative languages, it can be explained that the automatic detection of stop words in the language is a more complex process than in inflected languages. Moreover, we integrated our previous work on stop words detection in the example of the "School corpus" by investigating how to automatically analyse the detection of stop words in Uzbek texts. This work is devoted to answering whether there is a good way of evaluating available stop words for Uzbek texts, or whether it is possible to determine what part of the Uzbek sentence contains the majority of the stop words by studying the numerical characteristics of the probability of unique words. The results show acceptable accuracy of the stop words lists.
Introduction
Applications of Natural Language Processing (NLP) in real-life scenarios are more frequent than ever before, and a large body of research explores different approaches to enhance the quality of such tasks. Many NLP tasks, such as information retrieval, text summarization, and context embedding, rely on removing unimportant tokens and words from the context under focus. Such words are known as stop words. It is therefore desirable to develop an automatic method that identifies words which make little or no change to the meaning of the context and removes them from the context.
In this work, we address the problem of automatic detection of stop words for the low-resource, agglutinative Uzbek language, and evaluate the proposed methods. The existing literature that deals with the stop word removal task for the Uzbek language [7] [8] [10] focuses on the creation process, the importance, and the availability of the proposed data, leaving a gap for further investigation, which we discuss in this paper.
The scientific term "stop words" is popular in the field of natural language processing, and its definition we focus in this work is as follows: If the removal of those words from the text not only does not change the context meaning but also leaves the minimum number of words possible that can still hold the meaning of the context, then such words can be called stop words for this work.
For instance, the following examples show which words would be considered stop words in given sentences, and what the final context becomes after removing them:
• "Men bu maqolani qiynalib yozdim." (I wrote this article with difficulty.) After removing the stop words ("men", "bu", "qiynalib") the context becomes: "Maqolani yozdim." (I wrote the article.)
• "Har bir inson baxtli bo'lishga haqlidir." (Every person has the right to be happy.) After removing the stop words ("har", "bir"), the context becomes: "Inson baxtli bo'lishga haqlidir." (A person has the right to be happy.)
Such a definition is an extension of the traditional definition of stop words: it includes more words than the traditional one, while still covering the traditional stop words.
The Term Frequency - Inverse Document Frequency (TF-IDF) method [15] was used to detect stop words in Uzbek texts. TF-IDF is a numerical statistic intended to reflect how important a word is to a document in a corpus; the method treats words with the lowest TF-IDF values as less important to the semantic meaning of the document and proposes these words as stop word candidates.
In our previous work [8], we discuss the methods and algorithms for automatic detection and extraction of Uzbek stop words from previously collected text forming a new corpus called the "School corpus". The TF-IDF-based stop word detection method was applied to this corpus, collected from 25 textbooks used for teaching at primary schools of Uzbekistan and consisting of 731,156 words, of which 47,165 are unique. To perform our technique, for each word from the set of unique words we determined its frequency (the number of occurrences in the texts of the School corpus) and the inverse document frequency IDF(word) = ln(n/m), where n = 25 is the number of documents and m is the number of documents (among the 25) containing the unique word.
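As a rough illustration of this procedure (our sketch, not the released implementation; the three-document toy corpus below is invented), the computation can be written as:

```python
# Toy sketch: TF is the corpus frequency of each unique word, IDF(word) = ln(n/m),
# and the lowest-scoring 5% of words become stop word candidates.
import math
from collections import Counter

documents = [["men", "maqolani", "yozdim"], ["har", "bir", "inson"], ["men", "bir"]]  # stand-in corpus
n = len(documents)

tf = Counter(word for doc in documents for word in doc)             # corpus frequency
df = Counter(word for doc in documents for word in set(doc))        # document frequency m
tfidf = {w: tf[w] * math.log(n / df[w]) for w in tf}

ranked = sorted(tfidf, key=tfidf.get)                    # ascending TF-IDF
candidates = ranked[: max(1, int(0.05 * len(ranked)))]   # lowest 5% tagged as stop word candidates
```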
The existing fundamental papers that deal with stop words in general, let alone for the Uzbek language, barely address the quality of the automatically detected lists of stop words. This statement also applies to our previous work, where only a preliminary manual expert inspection of a part of the lists (only unigrams) was done. To the authors' knowledge, there has been no in-depth study of the accuracy of automatically constructed lists of stop words for agglutinative languages. For instance, [7], [8], [9], and [10] mostly focus on stop words for Uzbek texts and methods for their automatic extraction, but none of them discusses the accuracy of the presented methods. This article is devoted to answering whether there is a good way of evaluating available stop words for Uzbek texts, and whether it is possible to determine which part of an Uzbek text contains the majority of the stop words by studying the numerical characteristics of the probability of unique words.
The words were sorted by their TF-IDF value in descending order, and the lowest 5 percent of them were tagged as stop words. We used this method to automatically detect stop words in the corpus [8]. Using this information, the article focuses on the following:
• To create a probability distribution model of the TF-IDF of unique words in order to determine the position of stop words along the corpus;
• To establish the accuracy of the detection method for stop words;
• To conclude on automatic position detection of stop words for a given text.
The rest of the paper is structured as follows. We start by explaining the related work in the field of stop word removal, as well as the Uzbek language itself, in Section 2, followed by the main methodology of the paper in Section 3, which includes the creation of the probability distribution law of the TF-IDF of unique words (Section 3.1), the numerical characteristics of the probability of unique words (Section 3.2), and the evaluation of the created method using a small selected chunk (Section 3.3). The accuracy of the method for automatic detection of stop words in Uzbek texts, which is based on TF-IDF, is presented in Section 4. The last section of the paper presents conclusions and future work (Section 5).
Related works
The Uzbek language belongs to the family of Turkic languages. There has been some research on the Uzbek language, mostly in the last few years. Much of the research done on Turkic languages can be applied to the Uzbek language as well, using cross-lingual learning and mapping approaches alongside some language-specific additions. The paper [1] presents a viability study of established techniques to align monolingual embedding spaces for Turkish, Uzbek, Azeri, Kazakh, and Kyrgyz, members of the Turkic family which is heavily affected by the low-resource constraint. Several authors present experiments and propose techniques for stopword extraction from text for agglutinative languages; for example, [2] frames stopword detection as a binary classification problem, and the evaluation shows that classification methods improve stopword detection over frequency-based methods for agglutinative languages but fail for English. Ladani and Desai [5] present an overview of stopword removal techniques for Indian and non-Indian languages. Jayaweera et al. [2] propose a dynamic approach to find Sinhala stopwords, where the cutoff point depends on the dataset. Wijeratne and de Silva [17] collected data from patent documents and listed the stopwords using term frequency. Rakholia et al. [14] proposed a rule-based approach to detect stopwords for the Gujarati language dynamically; they developed 11 static rules and used them to generate a stopword list at runtime. Fayaza and Farhath [1] present a list of stopwords for the Tamil language and report improvements in text clustering after their removal.
The paper by Kuriyozov et al. provides the first annotated corpus for polarity classification for the Uzbek language. Three lists of stop words for the Uzbek language are presented in [7]; they were constructed using automatic detection of stop words by applying the algorithms and methods presented in [8]. Paper [9] focuses on the automatic discovery of stop words in the Uzbek language and its importance. Articles [12] and [13] also mainly concentrate on the creation of stop words in Uzbek.
Matlatipov et al. [10] propose the first electronic dictionary of Uzbek word-ending invariants for morphological segmentation pre-processing, useful for neural machine translation.
The article [11] presents a cosine similarity algorithm for Uzbek texts, based on TF-IDF, to determine text similarity. Another line of work addresses the semantic similarity of Uzbek words: Salaev et al. created and evaluated a semantic evaluation dataset (SimRelUz) that provides both similarity and relatedness scores.
Methodology
The scientific novelty of the methodology used in this work can be shown as follows:
• The creation of probability distributions law based on TF-IDF scores of unique words;
• Thorough investigation of numerical characteristics of the probability of unique words;
• Better evaluation of the stop words detection method's accuracy;
• Summarising the automatic detection of the position of stop words in given Uzbek texts.
In our previous work [8], we proposed the usage of TF-IDF [15] to automatically extract stop words from a corpus of documents. The stop words are discovered based on Term Frequency - Inverse Document Frequency (TF-IDF). The number of times a word occurs in a text is its Term Frequency (TF). The Inverse Document Frequency (IDF) is defined from the number of texts (documents) being viewed and the presence of a given word in the chosen texts (documents). TF-IDF is one of the popular methods of knowledge discovery.
Probability distribution
In order to determine the position of the stop words throughout the School corpus, we investigate the probability distribution law of the TF-IDF scores of unique words.
Word weight and its probability. Select the i-th word, i ∈ [1..47165], from the set of unique words extracted from the corpus. For future reference, two assumptions hold: a word means a unique word from the corpus, and the corpus is the "School corpus" presented in our previous work [4]. For every unique word we calculate its average TF-IDF, called the weight of the word and denoted w_i. Note that w_i is not the probability of the i-th word.
The probability of the i-th word can be calculated using the formula p_i = w_i / Σ_j w_j, and we assign p_i to each word, so that Σ_i p_i = 1.
The probability density function. Suppose the unique words are distributed independently in the total corpus. In that case, a word can occur multiple times; to avoid repetitions, we consider only the first appearance of each word. For each word we treat its index i as a random variable, and as the probability density function of the unique words we obtain f(i) = p_i.
f(i) can be considered the probability density function of the i-th unique word. In the Cartesian coordinate plane we plot i on the OX axis and p_i along the OY axis. Figure 1 presents the described observations extracted from the "School corpus"; we need it to observe the position of stop words along the corpus. The corresponding values are presented in Table 1. The variety of words increases gradually with grade level in the school literature, which suggests that the probability density function of unique words is not symmetrical. One may predict this without any mathematics; however, mathematically, the data in Table 1, in particular μ3 > 0, confirm that the probability density function is asymmetric. The stop words are distributed along the axis (not grouped in one part of the axis); they are represented by orange dots in Figure 2.
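A compact sketch of these calculations under the stated definitions (our illustration; the toy weight vector below is invented and merely stands in for the 47,165 real word weights):

```python
# Normalise the word weights into probabilities p_i and compute the numerical
# characteristics used later: E, D, sigma and the third central moment mu3.
import numpy as np

weights = np.array([0.8, 0.5, 0.3, 0.2, 0.1])   # toy weights w_i, ordered by word index i
p = weights / weights.sum()                      # p_i = w_i / sum_j w_j, so p sums to 1
i = np.arange(1, len(p) + 1)                     # word indices as the random variable

E = np.sum(i * p)                                # mathematical expectation
D = np.sum((i - E) ** 2 * p)                     # dispersion (variance)
sigma = np.sqrt(D)                               # standard deviation
mu3 = np.sum((i - E) ** 3 * p)                   # third central moment; its sign shows the asymmetry
```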
Numerical characteristics of the probability
Evaluation using a sub-corpus
This section presents the probability density function of the unique words of a selected work from the corpus. Each book in the corpus is devoted to one topic. Our prediction is that every book contains a culmination part for its topic, while the rest can be stop words; that is why we investigate just one book. A random book was selected from the 25 books in the corpus: the 11th-class literature textbook. The book contains 12837 unique words. The same process presented in Section 3.2 was applied to just this selected part of the corpus in order to create the probability density function of its unique words. Figure 3 shows the probability density function of the unique words of the 11th-class literature textbook. A mathematical analysis of the distribution is presented in Table 2. We obtain Figure 4 by the rule of the stop word detection method, as mentioned in [4].
μ3 < 0 means that the probability density function is asymmetric. The values were sorted in descending order and the lowest 5 percent of them are candidates for stop words. Figure 4 graphically represents the process: words with probability less than p* are candidates to be stop words (p* = 0.00001034371184).
The number of these candidates is 642; 85.8% of these words are located outside of the interval (E − σ, E + σ). On the left side of the interval there are 545 stop words and on the right side there are 6. The same facts can be observed graphically in Figure 5 (taking into account the numerical characteristics of the 5% of words of the selected work and comparing Figure 3 and Figure 4, we detected their position along the text).
Evaluation results
The accuracy of the presented method is confirmed using the following reasoning. Let us suppose the null hypothesis H0: stop words of the selected document (11th-class literature) are located outside of the interval (E − σ, E + σ); and the alternative hypothesis H1: stop words of the selected document are located inside of the interval (E − σ, E + σ).
The critical value Z (Z-score or standard score) is obtained using the equation Z = (x̄ − E) / (σ/√N), where N = 12837, x̄ = 6419, E = 7076.62, and σ = 3461.419. In the presented task |Z| ≈ 21.526. Z is located on the left side of E − σ, meaning there is no reason to reject the null hypothesis. This is the basis for rejecting the H1 hypothesis.
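As a sanity check of the reported statistic — assuming the reconstructed form Z = (x̄ − E)/(σ/√N), which reproduces the quoted value — one can compute:

```python
# Quick numerical check of the reported Z statistic, using the values stated above.
import math

N, x_bar, E, sigma = 12837, 6419, 7076.62, 3461.419
Z = (x_bar - E) / (sigma / math.sqrt(N))
print(round(abs(Z), 3))   # ~21.5, matching |Z| ≈ 21.526 reported in the text
```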
Conclusions and further work
Throughout the work performed in this paper, we presented a natural extension of our previous research on the automatic detection of stop words in the Uzbek language [4], and the main focus of the analysis was twofold: a) a probability distribution model of the observed text, and b) the accuracy of the detection method for stop words. From the theoretical investigations in the previous sections it can be concluded that, for a single genre, the majority of stop words behave as follows: if μ3 < 0, they are located at the beginning of the text; if μ3 > 0, they are located at the end of the text; if μ3 = 0, they are located at both the beginning and the end of the text.
In future work, we would like to use the results of this article as the basis for automatically extracting keywords from a given text and automatically generating its abstract.
Acknowledgements
The authors gratefully acknowledge the European Commission for funding the InnoRenew CoE project (Grant Agreement #739574) under the Horizon2020 Widespread-Teaming program and the Republic of Slovenia (Investment funding of the Republic of Slovenia and the European Union of the European Regional Development Fund).
Figure 1. The probability density function of unique words. The X-axis represents the index number of words, while the Y-axis shows the probability score.
This section presents numerical characteristics of the probability of unique words. They are calculated by the following formulas: E = Σ_i i·p_i — the mathematical expectation of the unique words; D = Σ_i (i − E)²·p_i — the dispersion of the unique words; σ = √D — the standard deviation of the unique words; μ3 = Σ_i (i − E)³·p_i — the third central moment of the unique words.
Figure 2. The probability density function of unique words with stop words. The orange dots indicate the positions of stop words along the corpus.
Figure 3: The probability density function of the unique words of the 11th-class literature textbook.
Figure 5: 85.8% of the stop word candidates are indeed located outside of the (E − σ, E + σ) interval.
Table 1: Basic statistical properties extracted from the corpus.
23310.74   23310.74   13623.72   2.52864E+12   23310.74   728996416.52   25687931167881.50   41266663785.91   0.163
Table 2: Distribution analysis of the selected single book.
7076.623   11981425   3461.41   414472396507   7076.623   602060020   598084106956   -10667328016   -0.251
F. Fayaza, F. Farhath. "Towards stop words identification in Tamil text clustering." (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 12, No. 12, (2021).
A. A. V. A. Jayaweera, Y. N. Senanayake, P. S. Haddela. "Dynamic Stopword Removal for Sinhala Language." 2019 National Information Technology Conference (NITC 2019), pp. 8-10, 2019, doi: 10.1109/NITC48475.2019.9114476.
M. Kumova, B. Karaoğlan. "Stop word detection as a binary classification problem." Anadolu University Journal of Science and Technology A - Applied Sciences and Engineering, 18(2): 346-359, (2017).
E. Kuriyozov, Y. Doval, C. Gomez-Rodriguez. "Cross-Lingual Word Embeddings for Turkic Languages." Proceedings of the 12th Language Resources and Evaluation Conference, pp. 4054-4062, (2020).
E. Kuriyozov, S. Matlatipov, M. A. Alonso, C. Gómez-Rodríguez. "Construction and Evaluation of Sentiment Datasets for Low-Resource Languages: The Case of Uzbek." In Language and Technology Conference, pp. 232-243. Springer, Cham, (2022).
D. J. Ladani, N. P. Desai. "Stopword Identification and Removal Techniques on TC and IR applications: A Survey." 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS 2020), pp. 466-472, (2020), doi: 10.1109/ICACCS48705.2020.9074166.
K. Madatov, S. Bekchanov, J. Vičič. "Lists of Uzbek Stopwords." Zenodo, (2021), doi: 10.5281/zenodo.6319953.
K. Madatov, S. Bekchanov, J. Vičič. "Automatic Detection of Stop Words for Texts in the Uzbek Language." Preprints, MDPI, (2022).
K. Madatov, M. Sharipov, S. Bekchanov. "O'zbek tili matnlaridagi nomuhim so'zlar" [Stop words in Uzbek texts]. Computer Linguistics: Problems, Solutions, Prospects, Vol. 1, No. 1, (2021).
S. Matlatipov, U. Tukeyev, M. Aripov. "Towards the Uzbek Language Endings as a Language Resource." In: Advances in Computational Collective Intelligence (ICCCI 2020), Communications in Computer and Information Science, vol. 1287. Springer, Cham, (2020).
S. Matlatipov. "Cosine Similarity and its Implementation to Uzbek Language Data." Central Asian Problems of Modern Science and Education, Vol. 2020, Iss. 4, Article 8, (2020).
I. Rabbimov, S. Kobilov, I. Mporas. "Uzbek News Categorization using Word Embeddings and Convolutional Neural Networks." 2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT), pp. 1-5, (2020), doi: 10.1109/AICT50176.2020.9368822.
I. Rabbimov, S. Kobilov. "Multi-Class Text Classification of Uzbek News Articles using Machine Learning." Journal of Physics: Conference Series, (2020), doi: 10.1088/1742-6596/1546/1/012097.
R. M. Rakholia, J. R. Saini. "A Rule-Based Approach to Identify Stop Words for Gujarati Language." In Proceedings of the 5th International Conference on Frontiers in Intelligent Computing: Theory and Applications, pp. 797-806, (2017).
C. Sammut, G. Webb, eds. "Encyclopedia of Machine Learning." Springer Science & Business Media, (2011).
U. Salaev, E. Kuriyozov, C. Gómez-Rodríguez. "SimRelUz: Similarity and Relatedness scores as a Semantic Evaluation dataset for Uzbek language." In Proceedings of the 1st Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages, pp. 199-206. European Language Resources Association, (2022).
Y. Wijeratne, N. de Silva. "Sinhala Language Corpora and Stopwords from a Decade of Sri Lankan Facebook." arXiv, (2020), doi: 10.2139/ssrn.3650976.
| [] |
[
"The State of Human-centered NLP Technology for Fact-checking",
"The State of Human-centered NLP Technology for Fact-checking"
] | [
"Anubrata Das \nSchool of Information\nThe University of Texas at Austin\nAustinTXUSA\n",
"Houjiang Liu \nSchool of Information\nThe University of Texas at Austin\nAustinTXUSA\n",
"Venelin Kovatchev \nSchool of Information\nThe University of Texas at Austin\nAustinTXUSA\n",
"Matthew Lease \nSchool of Information\nThe University of Texas at Austin\nAustinTXUSA\n"
] | [
"School of Information\nThe University of Texas at Austin\nAustinTXUSA",
"School of Information\nThe University of Texas at Austin\nAustinTXUSA",
"School of Information\nThe University of Texas at Austin\nAustinTXUSA",
"School of Information\nThe University of Texas at Austin\nAustinTXUSA"
] | [] | A B S T R A C T Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, heightening social polarization, and disrupting democratic elections and financial markets, among a myriad of other societal harms. To address this, a growing cadre of professional fact-checkers and journalists provide high-quality investigations into purported facts. However, these largely manual efforts have struggled to match the enormous scale of the problem. In response, a growing body of Natural Language Processing (NLP) technologies have been proposed for more scalable fact-checking. Despite tremendous growth in such research, however, practical adoption of NLP technologies for fact-checking still remains in its infancy today.In this work, we review the capabilities and limitations of the current NLP technologies for fact-checking. Our particular focus is to further chart the design space for how these technologies can be harnessed and refined in order to better meet the needs of human fact-checkers. To do so, we review key aspects of NLP-based fact-checking: task formulation, dataset construction, modeling, and human-centered strategies, such as explainable models and human-in-the-loop approaches. Next, we review the efficacy of applying NLP-based fact-checking tools to assist human fact-checkers. We recommend that future research include collaboration with fact-checker stakeholders early on in NLP research, as well as incorporation of human-centered design practices in model development, in order to further guide technology development for human use and practical adoption. Finally, we advocate for more research on benchmark development supporting extrinsic evaluation of human-centered fact-checking technologies. * Corresponding author ORCID(s): 0000-0002-5412-6149 (A. Das); 0000-0003-0983-6202 (H. Liu); 0000-0003-1259-1541 (V. Kovatchev); | 10.1016/j.ipm.2022.103219 | [
"https://export.arxiv.org/pdf/2301.03056v1.pdf"
] | 255,051,606 | 2301.03056 | 2bf89a35747cd95f307cc8539a8f45d32077d43e |
The State of Human-centered NLP Technology for Fact-checking
Anubrata Das
School of Information
The University of Texas at Austin
AustinTXUSA
Houjiang Liu
School of Information
The University of Texas at Austin
AustinTXUSA
Venelin Kovatchev
School of Information
The University of Texas at Austin
AustinTXUSA
Matthew Lease
School of Information
The University of Texas at Austin
AustinTXUSA
The State of Human-centered NLP Technology for Fact-checking
A R T I C L E I N F ONatural Language Processing Misinformation Disinformation Explainability Human-AI Teaming
A B S T R A C T Misinformation threatens modern society by promoting distrust in science, changing narratives in public health, heightening social polarization, and disrupting democratic elections and financial markets, among a myriad of other societal harms. To address this, a growing cadre of professional fact-checkers and journalists provide high-quality investigations into purported facts. However, these largely manual efforts have struggled to match the enormous scale of the problem. In response, a growing body of Natural Language Processing (NLP) technologies have been proposed for more scalable fact-checking. Despite tremendous growth in such research, however, practical adoption of NLP technologies for fact-checking still remains in its infancy today.In this work, we review the capabilities and limitations of the current NLP technologies for fact-checking. Our particular focus is to further chart the design space for how these technologies can be harnessed and refined in order to better meet the needs of human fact-checkers. To do so, we review key aspects of NLP-based fact-checking: task formulation, dataset construction, modeling, and human-centered strategies, such as explainable models and human-in-the-loop approaches. Next, we review the efficacy of applying NLP-based fact-checking tools to assist human fact-checkers. We recommend that future research include collaboration with fact-checker stakeholders early on in NLP research, as well as incorporation of human-centered design practices in model development, in order to further guide technology development for human use and practical adoption. Finally, we advocate for more research on benchmark development supporting extrinsic evaluation of human-centered fact-checking technologies. * Corresponding author ORCID(s): 0000-0002-5412-6149 (A. Das); 0000-0003-0983-6202 (H. Liu); 0000-0003-1259-1541 (V. Kovatchev);
Introduction
Misinformation and related issues (disinformation, deceptive news, clickbait, rumours, and information credibility) increasingly threaten society. While concerns about misinformation have existed since the early days of written text (Marcus, 1992), with the recent development of social media the barrier to creating and spreading content has never been lower. Moreover, online polarization drives the spread of misinformation, which in turn increases polarization (Cinelli, Pelicon, Mozetič, Quattrociocchi, Novak and Zollo, 2021a,b;Vicario, Quattrociocchi, Scala and Zollo, 2019). Breaking such a vicious cycle would require addressing the problem of misinformation at its root.
Fields such as journalism (Graves, 2018b;Graves and Amazeen, 2019;Neely-Sardon and Tignor, 2018) and archival studies (LeBeau, 2017) have extensively studied misinformation, and recent years have seen significant growth in fact-checking initiatives to address this problem. Various organizations now focus on fact-checking (e.g., PolitiFact, Snopes, FactCheck, First Draft, and Full Fact), and organizations such as the International Fact-Checking Network (IFCN) 1 train and provide resources for independent fact-checkers and journalists to further scale expert fact-checking.
While professional fact-checkers and journalists provide high-quality investigations of purported facts to inform the public, human effort struggles to match the global Internet scale of the problem. To address this, a growing body of research has investigated Natural Language Processing (NLP) to fully or partially automate fact-checking (Guo, Schlichtkrull and Vlachos, 2022;Nakov, Corney, Hasanain, Alam, Elsayed, Barrón-Cedeño, Papotti, Shaar and Da San Martino, 2021a;Zeng, Abumansour and Zubiaga, 2021;Graves, 2018a). However, even state-of-the-art NLP technologies still cannot match human capabilities in many areas and remain insufficient to automate fact-checking in practice. Experts argue (Arnold, 2020;Nakov et al., 2021a) that fact-checking is a complex process and requires subjective judgement and expertise. While current NLP systems are increasingly better at addressing simple fact-checking tasks, identifying false claims that are contextual and go beyond simple declarative statements remains difficult.
• Section 5 reviews approaches for automating fact checking. We discuss general NLP capabilities (Section 5.1), explainable approaches (Section 5.2), and human-in-the-loop (Section 5.3) approaches for fact-checking.
• Section 6 surveys existing tools that apply NLP for fact-checking in a practical, real-world context. We argue that the human-centered perspective is necessary for the practical adoption of automated solutions.
• Section 7 provides future research directions in the context of human-centered fact-checking. We discuss the division of labor between humans and AI for mixed-initiative fact-checking in Section 7.1. In Section 7.2 we propose a novel concept for measuring trust and a novel human-centered evaluation of NLP to assist fact-checkers.
• We conclude our literature review with Section 8.
Fact-Checking Pipeline
The core idea behind automated fact-checking is enabling AI to reason over available information to determine the truthfulness of a claim. For successful automation, it is essential first to understand the complex process of journalistic fact-checking that involves human expertise along with skilled effort towards gathering evidence and synthesizing the evidence. Additional complexity comes from the need to process heterogeneous sources (e.g., information across various digital and non-digital sources). Data is also spread across different modalities such as images, videos, tables, graphs, among others. Moreover, there is a lack of tools that support effective and efficient fact-checking workflows (Graves, 2018a;Nakov et al., 2021a;Arnold, 2020). Graves (2017) breaks down the practical fact-checking mechanism for human fact-checkers into multiple steps such as a) identifying the claims to check, b) tracing false claims, c) consulting experts, and d) sharing the resulting fact-check. A growing body of AI literature -specifically in NLP -focuses on automating the fact-checking process. We synthesize several related surveys (Guo et al., 2022;Nakov et al., 2021a;Graves, 2018a;Zeng et al., 2021;Micallef et al., 2022) and distinguish four typical stages that constitute the automated fact-checking technology pipeline (illustrated in Figure 1). Note that the pipeline we describe below closely follows the structure of Guo et al. (2022), though the broader literature is also incorporated within these four stages:
• Claim Detection, Checkworthiness, and Prioritization: Claim detection involves monitoring news and/or online information for potentially false content to fact-check. One must identify claims that are potentially falsifiable (e.g., purported facts rather than subjective opinions) (Guo et al., 2022;Zeng et al., 2021). Moreover, because it is impractical to fact-check everything online given limited fact-checking resources (human or automated), fact-checkers must prioritize what to fact-check (Arnold, 2020). NLP researchers have sought to inform such prioritization by automatically predicting the "checkworthiness" of claims. Additionally, to avoid repeated work, fact-checkers may consult existing fact-checking databases before judging the veracity of a new claim (claim matching (Zeng et al., 2021)). We see claim matching as a part of prioritizing claims, as fact-checkers would prioritize against checking such claims.
• Evidence Retrieval: Once it is clear which claims to fact-check, the next step is to gather relevant, trustworthy evidence for assessing the claim (Guo et al., 2022;Zeng et al., 2021).
• Veracity Prediction: Given the evidence, it is necessary to assess it to determine the veracity of the claim (Guo et al., 2022;Zeng et al., 2021).
• Explanation: Finally, for human use, one must explain the fact-checking outcome via human-understandable justification for the model's determination (Graves, 2018a;Kotonya and Toni, 2020a;Guo et al., 2022).
In the subsequent sections, we discuss each of the tasks above in the context of existing NLP research in automated fact-checking. Some other steps (for example, detecting propaganda in text, click-bait detection) are also pertinent to fact-checking but do not directly fit into the stages described above. They are briefly discussed in Section 3.5.
Task Formulation for Automated Fact-Checking
Modern Natural Language Processing is largely data-driven. In this article, we distinguish task formulation (conceptual) vs. dataset construction (implementation activity, given the task definition). That said, the availability of a suitable dataset or the feasibility of constructing a new dataset can also bear on how tasks are formulated.
Claim Detection, Checkworthiness, and Prioritization
Fact-checkers and news organizations monitor information sources such as social media (Facebook, Twitter, Reddit, etc.), political campaigns and speeches, and public addresses from government officials on critical issues (Arnold, 2020;Nakov et al., 2021a). Additional sources include tip-lines on end-to-end encrypted platforms (such as WhatsApp, Telegram, and Signal) (Kazemi, Garimella, Shahi, Gaffney and Hale, 2021b). The volume of information on various platforms makes it challenging to efficiently monitor all sources for misinformation. Zeng et al. (2021) define the claim detection step as identifying, filtering, and prioritizing claims.
To identify claims, social media streams are often monitored for rumors (Guo et al., 2022). A rumour can be defined as a claim that is unverified and being circulated online (Zubiaga, Aker, Bontcheva, Liakata and Procter, 2018). Rumours are characterized by the subjectivity of the language and the reach of the content to the users (Qazvinian, Rosengren, Radev and Mei, 2011). Additionally, metadata related to virality, such as the number of shares (or retweets and re-posts), likes, or comments, is also considered when identifying whether a post is a rumour (Zhang, Cao, Li, Sheng, Zhong and Shu, 2021a;Gorrell, Kochkina, Liakata, Aker, Zubiaga, Bontcheva and Derczynski, 2019). However, detecting rumours alone is not sufficient to decide whether a claim needs to be fact-checked.
For each text of interest, the key questions fact-checking systems need to address include:
1. Is there a claim to check?
2. Does the claim contain verifiable information?
3. Is the claim checkworthy?
4. Has a trusted source already fact-checked the claim?
Regarding the first criterion -is there a claim -one might ask whether the claim contains a purported fact or an opinion (Hassan, Arslan, Li and Tremayne, 2017a). For example, a statement such as "reggae is the most soulful genre of music" represents personal preference that is not checkable. In contrast, "<NAME> won a gold medal in the Olympics" is checkable by matching <NAME> to the list of all gold medal winners.
Determining whether the claim contains verifiable information is more challenging. For example, if a claim can only be verified by private knowledge or personal experience that is not broadly accessible, then it cannot be checked (Konstantinovskiy, Price, Babakar and Zubiaga, 2021). For instance, if someone claims to have eaten a certain food yesterday, it is probably impossible to verify beyond their personal testimony.
As this example suggests, the question of whether the claim contains verifiable information depends in large part on what evidence is available for verification. This, in turn, may not be clear until after evidence retrieval is performed. In practice fact-checkers may perform some preliminary research, but mostly try to gauge checkworthiness only based on the claim itself.
In addition, this consideration is only one of many that factors into deciding whether to check a claim. Even a claim that may appear to be unverifiable may still be of such great public interest that it is worth conducting the fact-check. Moreover, even if the fact-check is conducted and ultimately indeterminate (i.e., evidence does not exist either to verify or refute the claim), simply showing that a claim's veracity cannot be determined may still be a valuable outcome.
A claim is deemed checkworthy if it is of significant public interest or has the potential to cause harm (Nakov, Da San Martino, Elsayed, Barrón-Cedeño, Míguez, Shaar, Alam, Haouari, Hasanain, Babulkov et al., 2021b;Hassan, Li and Tremayne, 2015). For example, a claim related to the effect of a vaccine on the COVID-19 infection rate is more relevant to the public interest and hence more checkworthy than a claim about some philosopher's favorite food.
Claims, like memes, often appear several times and/or across multiple platforms (in the same form or with slight modification) (Leskovec, Backstrom and Kleinberg, 2009;Nakov et al., 2021a;Arnold, 2020). Fact-checking organizations maintain growing databases of claims which have already been fact-checked. Thus, detected claims are compared against databases of already fact-checked claims by trusted organizations (Shaar, Martino, Babulkov and Nakov, 2020;Shaar, Alam, Da San Martino and Nakov, 2021a). Comparing new claims against such databases helps to avoid duplicating work on previously fact-checked claims. This step is also known as claim matching (Zeng et al., 2021).
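To make the claim-matching idea concrete, the following is a minimal sketch (not any of the systems cited above): a new claim is embedded with a sentence encoder and compared against a small, invented database of previously fact-checked claims; the encoder checkpoint, similarity threshold, and example claims are all illustrative assumptions.

```python
# Minimal claim-matching sketch: flag claims that closely match
# previously fact-checked claims (illustrative; not the cited systems).
from sentence_transformers import SentenceTransformer, util

# Hypothetical database of already fact-checked claims.
checked_claims = [
    "Vitamin C cures the common cold.",
    "The Eiffel Tower is taller than the Empire State Building.",
]

new_claim = "Taking vitamin C will cure your cold."

# Any sentence encoder could be used; this model name is an assumption.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
db_emb = encoder.encode(checked_claims, convert_to_tensor=True)
new_emb = encoder.encode(new_claim, convert_to_tensor=True)

# Cosine similarity between the new claim and every stored claim.
scores = util.cos_sim(new_emb, db_emb)[0]
best_idx = int(scores.argmax())

THRESHOLD = 0.7  # illustrative cutoff for "previously fact-checked"
if float(scores[best_idx]) >= THRESHOLD:
    print(f"Likely already fact-checked: {checked_claims[best_idx]!r} "
          f"(similarity={float(scores[best_idx]):.2f})")
else:
    print("No close match; claim may need a fresh fact-check.")
```

In practice, a lexical first pass (e.g., BM25) is often combined with such semantic matching, since near-verbatim copies of viral claims are common.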
Reports from practitioners argue that if a claim is not checked within the first few hours, a late fact-check does not have much impact on changing the ongoing misinformation narrative (Arnold, 2020). Moreover, limited resources for fact-checking make it crucial for organizations to prioritize the claims to be checked (Borel, 2016). Claims can be prioritized based on their checkworthiness (Nakov, Da San Martino, Barrón-Cedeño, Zaghouani, Míguez, Alam, Caselli, Kutlu, Strub and Mandl, 2022;Nakov et al., 2021b). Nakov et al. (2022) note that checkworthiness is determined based on factors such as:
1. How urgently does a claim need to be checked?
2. How much harm can a claim cause (Alam, Dalvi, Shaar, Durrani, Mubarak, Nikolov, Da San Martino, Abdelali, Sajjad, Darwish and Nakov, 2021a;Shaar, Hasanain, Hamdan, Ali, Haouari, Nikolov, Kutlu, Kartal, Alam, Da San Martino, Barrón-Cedeño, Míguez, Beltrán, Elsayed and Nakov, 2021b)?
3. Would the claim require attention from policy makers to address the underlying issue?
Note that estimating harms is quite challenging, especially without first having a thorough understanding and measures of harm caused by misinformation (Neumann, De-Arteaga and Fazelpour, 2022).
The spread of a claim on social media provides another potential signal for identifying public interest (Arnold, 2020). In the spirit of doing "the greatest good for the greatest number", viral claims might be prioritized highly because any false information in them has the potential to negatively impact a large number of people. On the other hand, since fairness considerations motivate equal protections for all people, we cannot serve only the majority at the expense of minority groups (Ekstrand, Das, Burke, Diaz et al., 2022;Neumann et al., 2022). Moreover, such minority groups may be more vulnerable, motivating greater protections, and may be disproportionately impacted by mis/disinformation (Guo et al., 2022). See Section 3.6 for additional discussion.
Evidence Retrieval
Some sub-tasks in automated fact-checking can be performed without the presence of explicit evidence. For example, the linguistic properties of the text can be used to determine whether it is machine-generated (Wang, 2017;Rashkin, Choi, Jang, Volkova and Choi, 2017). However, assessing claim veracity without evidence is clearly more challenging (Schuster, Schuster, Shah and Barzilay, 2020).
Provenance of a claim can also signal information quality; known unreliable sources or distribution channels are often repeat offenders in spreading false information 2 . Such analysis of provenance can be further complicated when content is systematically propagated by multiple sources (e.g., Twitter misinformation bots) (Jones, 2019).
It is typically assumed that fact-checking requires gathering of reliable and trustworthy evidence that provides information to reason about the claim (Graves, 2018a;Li, Gao, Meng, Li, Su, Zhao, Fan and Han, 2016). In some cases, multiple aspects of a claim need to be checked. A fact-checker would then decompose such a claim into distinct questions and gather relevant evidence for each question (Borel, 2016;Chen et al., 2022). From an information retrieval (IR) perspective, we can conceptualize each of those questions as an "information need" for which the fact-checker must formulate one or more queries to a search engine (Bendersky, Metzler and Croft, 2012) in order to retrieve necessary evidence.
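The IR framing can be illustrated with a toy sketch: each decomposed sub-question of a claim is treated as a query over a small document collection and ranked by TF-IDF similarity. The documents, questions, and scoring setup below are invented for illustration and do not correspond to any system described in this survey.

```python
# Toy evidence retrieval sketch: each sub-question of a claim becomes a
# query over a small document collection, ranked by TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical evidence collection (in practice: web pages, articles, etc.).
documents = [
    "The WHO reports that the vaccine reduced infection rates by 60%.",
    "The senator voted against the infrastructure bill in 2021.",
    "Olympic records list the gold medal winners for every event.",
]

# A complex claim decomposed into separate information needs.
questions = [
    "Did the vaccine reduce infection rates?",
    "How did the senator vote on the infrastructure bill?",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

for q in questions:
    q_vec = vectorizer.transform([q])
    sims = cosine_similarity(q_vec, doc_matrix)[0]
    ranked = sims.argsort()[::-1]
    print(f"Query: {q}")
    for idx in ranked[:2]:  # top-2 evidence candidates per question
        print(f"  score={sims[idx]:.2f}  {documents[idx]}")
```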
Evidence can be found across many modalities, including text, tables, knowledge graphs, images, and videos. Various metadata can also provide evidence and are sometimes required to assess the claim. Examples include context needed to disambiguate claim terms, or background of the individual or organization from whom the claim originated.
Retrieving relevant evidence also depends on the following questions (Singh, Das, Li and Lease, 2021):
1. Is there sufficient evidence available related to a claim?
2. Is it accessible or available in the public domain?
3. Is it in a format that can be read and processed?
As noted earlier in Section 3.1, the preceding claim detection task involves assessing whether a claim contains verifiable information; this depends in part on what evidence exists to be retrieved, which is not actually known until evidence retrieval is performed. Having now reached this evidence retrieval step, we indeed discover whether sufficient evidence exists to support or refute the claim. Additionally, evidence should be trustworthy, reputable (Nguyen, Kharosekar, Lease and Wallace, 2018b;Lease, 2018), and unbiased (Chen, Khashabi, Yin, Callison-Burch and Roth, 2019a).
Once evidence is retrieved, stance detection assesses the degree to which the evidence supports or refutes the claim (Nguyen, Kharosekar, Krishnan, Krishnan, Tate, Wallace and Lease, 2018a;Ferreira and Vlachos, 2016;Popat, Mukherjee, Yates and Weikum, 2018). Stance detection is typically formulated as a classification task (or ordinal regression) over each piece of retrieved evidence. Note that some works formulate stance detection as an independent task (Hanselowski et al., 2018;Hardalov et al., 2021).
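One common way to operationalize stance detection is as (evidence, claim) pair classification with a natural language inference (NLI) model, mapping entailment/contradiction to supports/refutes. The sketch below is an illustration under that assumption; the checkpoint name is an assumed example, and the label mapping should be verified against the model's configuration.

```python
# Sketch: stance of an evidence passage toward a claim via an NLI model,
# treating the evidence as the premise and the claim as the hypothesis.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"  # assumed checkpoint; any NLI model could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

claim = "The vaccine reduced infection rates."
evidence = "The WHO reports that the vaccine reduced infection rates by 60%."

inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Map NLI labels to stance; verify the order via model.config.id2label.
stance_map = {"ENTAILMENT": "supports", "CONTRADICTION": "refutes",
              "NEUTRAL": "not enough info"}
label = model.config.id2label[int(probs.argmax())].upper()
print(stance_map.get(label, label), float(probs.max()))
```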
Veracity Prediction
Given a claim and gathered evidence, veracity prediction involves reasoning over the collected evidence and the claim. Veracity prediction can be formulated as a binary classification task (i.e., true vs. false) (Popat et al., 2018;Nakashole and Mitchell, 2014;Potthast, Kiesel et al., 2018), or as a fine-grained, multi-class task following the journalistic fact-checking practices (Augenstein, Lioma, Wang, Lima, Hansen, Hansen and Simonsen, 2019;Shu, Mahudeswaran, Wang, Lee and Liu, 2020;Wang, 2017). In some cases, there may not be enough information available to determine the veracity of a claim (Thorne, Vlachos, Christodoulopoulos and Mittal, 2018).
Note that fact-checking is potentially a recursive process because retrieved evidence may itself need to be fact-checked before it can be trusted and acted upon (Graves, 2018a). This is also consistent with broader educational practices in information literacy 3 in which readers are similarly encouraged to evaluate the quality of information they consume. Such assessment of information reliability can naturally integrate with the veracity prediction task by factoring in the reliability of the evidence along with its stance (Nguyen et al., 2018b;Guo et al., 2022).
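A very simple way to combine evidence stance with source reliability, in the spirit of (though not identical to) the cited work, is a reliability-weighted vote over evidence items; the scores, weights, and threshold below are purely illustrative assumptions.

```python
# Sketch: aggregate per-evidence stance scores into a claim verdict,
# weighting each piece of evidence by an estimated source reliability.
evidence = [
    # stance in [-1, 1] (+1 supports, -1 refutes); reliability in [0, 1]
    {"stance": +0.9, "reliability": 0.9},   # e.g., a reputable health agency
    {"stance": -0.6, "reliability": 0.2},   # e.g., an anonymous blog post
    {"stance": +0.4, "reliability": 0.7},
]

weighted = sum(e["stance"] * e["reliability"] for e in evidence)
total_weight = sum(e["reliability"] for e in evidence)
score = weighted / total_weight if total_weight else 0.0

if abs(score) < 0.2:          # illustrative "not enough info" band
    verdict = "not enough info"
else:
    verdict = "supported" if score > 0 else "refuted"
print(f"aggregate score={score:+.2f} -> {verdict}")
```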
Explaining Veracity Prediction
While a social media platform might use automated veracity predictions in deciding whether to automatically block or demote content, the use of fact-checking technology often involves a human-in-the-loop, whether it is a platform moderator, a journalist, or an end-user. When we consider such human-centered use of fact-checking technologies, providing an automated veracity prediction without justifying the answer can cause a system to be ignored or distrusted, or even reinforce mistaken human beliefs in false claims (the "backfire effect" (Lewandowsky, Ecker, Seifert, Schwarz and Cook, 2012)). Explanations and justifications are especially important given the noticeable drop in performance of state-of-the-art NLP systems when facing adversarial examples (Kovatchev, Chatterjee, Govindarajan, Chen, Choi, Chronis, Das, Erk, Lease, Li et al., 2022). Consequently, automated fact-checking systems intended for human consumption should seek to explain their veracity predictions in a similar manner to that of existing journalistic fact-checking practices (Uscinski, 2015). A brief point to make is that much of the explanation research has focused on explanations for researchers and engineers engaged in system development (types of explanations, methods of generating them, and evaluation regimens). In contrast, we emphasize here explanations for system users.
Various types of explanations can be provided, such as through:
1. evidence attribution
2. explaining the decision-making process for a fact-check
3. summarizing the evidence
4. case-based explanations
Evidence attribution is the process of identifying evidence or a specific aspect of the evidence (such as paragraphs, sentences, or even tokens of interest) (Thorne et al., 2018;Popat et al., 2018;Shu, Cui, Wang, Lee and Liu, 2019;Lu and Li, 2020). Furthermore, the relative importance of the evidence can also justify the fact-checking outcome (Nguyen et al., 2018a). Alternatively, a set of rules or interactions that breaks down parts of the decision-making process can also serve as an explanation (Gad-Elrab, Stepanova, Urbani and Weikum, 2019; Nguyen et al., 2018b). Such a formulation focuses more on how the evidence is processed to arrive at a decision. Explaining the veracity can also be formulated as a summarization problem over the gathered evidence to explain a fact-check (Atanasova, Simonsen, Lioma and Augenstein, 2020a;Kotonya and Toni, 2020b). Finally, case-based explanations can provide the user with similar, human-labeled instances (Das, Gupta, Kovatchev, Lease and Li, 2022).
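As a toy illustration of evidence attribution, one can estimate the contribution of each evidence sentence by removing it and observing how a verifier's score changes (an occlusion-style analysis). The scoring function below is a crude keyword-overlap stand-in for any real veracity model, and the claim and evidence are invented.

```python
# Sketch: occlusion-style evidence attribution. Each evidence sentence's
# importance is the drop in the verifier's score when that sentence is removed.
def verifier_score(claim: str, evidence_sentences: list) -> float:
    """Stand-in for a real veracity model; here, a crude keyword overlap."""
    claim_words = set(claim.lower().split())
    evidence_words = set(" ".join(evidence_sentences).lower().split())
    return len(claim_words & evidence_words) / max(len(claim_words), 1)

claim = "The vaccine reduced infection rates"
evidence = [
    "The WHO reports the vaccine reduced infection rates by 60%.",
    "The report was published in 2021.",
    "Critics questioned the sample size of the study.",
]

full_score = verifier_score(claim, evidence)
for i, sentence in enumerate(evidence):
    reduced = evidence[:i] + evidence[i + 1:]
    importance = full_score - verifier_score(claim, reduced)
    print(f"importance={importance:+.2f}  {sentence}")
```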
Related Tasks
In addition to tasks that are considered central to the automated fact-checking pipeline, some additional tasks bear mentioning as related and complementary to the fact-checking enterprise. Examples of such tasks include propaganda detection (Da San Martino, Cresci, Barrón-Cedeño, Yu, Pietro and Nakov, 2020), clickbait detection (Potthast, Köpsel, Stein and Hagen, 2016), and argument mining (Lawrence and Reed, 2020). Furthermore, some tasks can be formulated independent of the fact-checking pipeline and utilized later to improve individual fact-checking sub-tasks. For example, predicting the virality of social media content (Jain, Garg and Jain, 2021) can help improve claim detection and claim checkworthiness. Similarly, network analysis on fake news propagation (Shao, Ciampaglia, Flammini and Menczer, 2016) can help in analyzing provenance.
With an eye toward building more human-centered AI approaches, there are also some tasks that could be applied to help automate parts of the fact-checking process. For example, claim detection might be improved via a URL recommendation engine for content that might need fact-checking (Vo and Lee, 2018). Additionally, fact-checkers could benefit from a predicted score for claim difficulty (Singh et al., 2021). In terms of evidence retrieval and veracity prediction, one might generate fact-checking briefs to aid inexpert fact-checkers (Fan et al., 2020). Instead of summarizing the evidence in general (Section 3.4), one might instead summarize with the specific goal of decision support (Hsu and Tan, 2021).
Key Challenges
Most work in automated fact-checking has been done on veracity prediction and, to a lesser extent, on explanation generation. Recently, we have seen more attention directed towards claim detection and checkworthiness. In contrast, work on evidence retrieval remains less developed.
Claim Detection
Guo et al. (2022) point out several sources of bias in the claim check-worthiness task. Claims can be of variable interest to different social groups. Additionally, claims that might cause more harm to marginalized groups than to the general population may not get enough attention. Ideally, models identifying check-worthiness need to overcome any possible disparate impact.
Similar concerns appear in the report by Full Fact (Arnold, 2020). One of the criteria for selecting a claim for fact-checking across several organizations is "Could the claim threaten democratic processes or minority groups?" However, such a criterion may be at odds with virality considerations. Fact-checking organizations often monitor virality metrics to decide which claims to fact-check (Arnold, 2020;Nakov et al., 2021a). Nevertheless, if a false claim is targeted towards an ethnic minority, it may never cross the virality thresholds.
Prioritizing which claims to fact-check requires attention to various demographic traits: content creators, readers, and subject matter. Claim check-worthiness dataset design can thus benefit from consideration of demographics.
Evidence Retrieval Evidence retrieval has been largely neglected in the automated fact-checking NLP literature. It is often assumed that evidence is already available, or that coarse-grained evidence can be gathered by putting the claims into a search engine (Popat et al., 2018;Nguyen et al., 2018a). However, Hasanain and Elsayed (2022) show in their study that search engines optimized for relevance seldom retrieve the evidence most useful for veracity prediction. Although retrieving credible information has been studied thoroughly in IR (Clarke, Rizvi, Smucker, Maistro and Zuccon, 2020a), more work is needed that is focused on retrieving evidence for veracity assessment (Lease, 2018;Clarke, Rizvi, Smucker, Maistro and Zuccon, 2020b).
Veracity Prediction and Explanation
A critical challenge for automated systems is to reason over multiple sources of evidence while also taking source reputation into account. Additionally, explaining a complex reasoning process is a non-trivial task. The notion of model explanations itself is polysemous and evolving in general, not to mention in the context of fact-checking. As explainable NLP develops, automated fact-checking tasks also need to evolve and provide explanations that are accessible to human stakeholders yet faithful to the underlying model. For example, case-based explanations are mostly unexplored in automated fact-checking, although working systems have been proposed for propaganda detection.
In many NLP tasks, such as machine translation or natural language inference, the goal is to build fully-automated, end-to-end solutions. However, in the context of fact-checking, state-of-the-art limitations suggest the need for humans-in-the-loop for the foreseeable future. Given this, automated tooling to support human fact-checkers is crucial. However, understanding fact-checker needs and incorporating those needs into the task formulation has been largely absent from the automated fact-checking literature, with a few notable exceptions (e.g., Demartini et al., 2020). Future research could benefit from greater involvement of fact-checkers in the NLP research process and shifting goals from complete automation toward human support.
Dataset Construction
Corresponding to task formulation (Section 3), our presentation of fact-checking datasets is also organized around claims, evidence, veracity prediction, and explanation. Note that not all datasets have all of these components.
Claim detection and claim check-worthiness
Claim detection datasets typically contain claims and their sources (documents, social media streams, transcripts from political speeches) (Guo et al., 2022). One form of claim detection is identifying rumours on social media, where datasets are primarily constructed with text from Twitter (Zubiaga et al., 2018;Qazvinian et al., 2011) and Reddit (Gorrell et al., 2019;Lillie, Middelboe and Derczynski, 2019). Some works provide the claims in the context in which they appeared on social media (Zhang et al., 2021a;Ma, Gao, Mitra, Kwon, Jansen, Wong and Cha, 2016). However, several studies note that most claim detection datasets do not contain enough context. As the discussion of metadata in Section 3 suggests, broader context might include: social media reach, virality metrics, the origin of a claim, and relevant user data (i.e., who posted a claim, how influential they are online, etc.) (Arnold, 2020;Nakov et al., 2021a).
Claim check-worthiness datasets (Shaar et al., 2021b;Barrón-Cedeño, Elsayed, Nakov, Da San Martino, Hasanain, Suwaileh, Haouari, Babulkov, Hamdan, Nikolov et al., 2020;Atanasova, Barrón-Cedeño, Elsayed, Suwaileh, Zaghouani, Kyuchukov, Da San Martino and Nakov, 2018;Konstantinovskiy et al., 2021;Hassan et al., 2015) filter claims from a source (similar to claim detection, sources include social media feeds and political debate transcripts, among others) by annotating claims based on the checkworthiness criteria (mentioned in Section 3.1). Each claim is given a checkworthiness score to obtain a ranked list. Note that claim detection and checkworthiness datasets may be expert annotated (Hassan et al., 2015) or crowd annotated (Shaar et al., 2021b;Barrón-Cedeño et al., 2020;Atanasova et al., 2018;Konstantinovskiy et al., 2021) 4 .
The datasets discussed above are not multi-modal, and few multi-modal datasets exist. One such dataset is r/Fakeddit (Nakamura, Levy and Wang, 2020), which contains images and associated text content from Reddit as claims. Misinformation can also spread through multi-modal memes, and tasks such as the Facebook (now Meta) Hateful Memes Challenge (Kiela, Firooz, Mohan, Goswami, Singh, Ringshia and Testuggine, 2020) for hate speech suggest what might similarly be done for misinformation detection.
Evidence
Early datasets in fact-checking provide metadata with claims as the only form of evidence. Such metadata include social media post properties, user information, publication date, and source information (Wang, 2017;Potthast et al., 2018). As discussed earlier in Section 3.2, such metadata does not contain the world knowledge necessary to reason about a complex claim. To address the above limitations, recent datasets consider external evidence (Guo et al., 2022).
Evidence is collected differently depending upon the problem setup. For artificial claims, evidence is often retrieved from a single source such as Wikipedia articles (Thorne et al., 2018;Jiang, Bordia, Zhong, Dognin, Singh and Bansal, 2020;Schuster, Fisch and Barzilay, 2021). Domain-limited evidence for real-world claims is collected from problem-specific sources, such as academic articles for scientific claims (Kotonya and Toni, 2020b;Wadden, Lin, Lo, Wang, van Zuylen, Cohan and Hajishirzi, 2020), or specific evidence listed in fact-checking websites (Vlachos and Riedel, 2014;Hanselowski, Stab, Schulz, Li and Gurevych, 2019). Open-domain evidence for real-world claims is usually collected from the web via search engines (Popat et al., 2018;Augenstein et al., 2019).
Recently, there has been more work considering evidence beyond free text, including structured and semi-structured forms of evidence. Sources include knowledge bases for structured evidence (Shi and Weninger, 2016), semi-structured knowledge bases (Vlachos and Riedel, 2015), tabular data (Chen et al., 2019a;Gupta, Mehta, Nokhiz and Srikumar, 2020), and tables within a document (Aly, Guo, Schlichtkrull, Thorne, Vlachos, Christodoulopoulos, Cocarascu and Mittal, 2021).
Additionally, there are some retrieval-specific datasets that aim at retrieving credible information from search engines (Clarke et al., 2020b). However, such datasets do not incorporate claim checking as an explicit task.
Veracity Prediction
Evidence retrieval and veracity prediction datasets are usually constructed jointly. Note, in some cases, evidence may be absent from the datasets. Veracity prediction datasets usually do not deal with claim detection or claim checkworthiness tasks separately. Instead, such datasets contain a set of claims that are either artificially constructed or collected from the internet.
Artificial claims in veracity prediction datasets are often limited in scope and constructed for natural language reasoning research (Aly et al., 2021;Thorne et al., 2018;Schuster et al., 2021;Jiang et al., 2020;Chen, Wang, Chen, Zhang, Wang, Li, Zhou and Wang, 2019b). For example, FEVER (Thorne et al., 2018) and HoVer (Jacovi and Goldberg, 2021) obtain claims from Wikipedia pages. Some datasets also implement subject-predicate-object triplets for fact-checking against knowledge bases (Kim and Choi, 2020;Shi and Weninger, 2016).
Fact-checking websites are popular sources for creating veracity prediction datasets based on real claims. Several datasets obtain claims from either a single website or collect claims from many such websites and collate them (Wang, 2017;Hanselowski et al., 2018;Augenstein et al., 2019;Vlachos and Riedel, 2014). Note that such claims are inherently expert annotated. Other sources of claims are social media (Potthast et al., 2018;Shu, Sliva, Wang, Tang and Liu, 2017), news outlets (Horne, Khedr and Adali, 2018;Gruppi, Horne and Adalı, 2021;Nørregaard, Horne and Adalı, 2019), blogs, discussions in QA forums, or similar user-generated publishing platforms (Mihaylova, Nakov, Màrquez, Barrón-Cedeño, Mohtarami, Karadzhov and Glass, 2018).
Additionally, some fact-checking datasets target domain-specific problems such as scientific literature (Wadden et al., 2020), climate change (Diggelmann, Boyd-Graber, Bulian, Ciaramita and Leippold, 2020), and public health (Kotonya and Toni, 2020b). Most datasets are monolingual, but recent efforts have started to incorporate multi-lingual claims (Gupta and Srikumar, 2021;Barnabò, Siciliano, Castillo, Leonardi, Nakov, Da San Martino and Silvestri, 2022).
Early datasets focus on binary veracity prediction -true or false (Mihalcea and Strapparava, 2009). Recent datasets often adopt an ordinal veracity labeling scheme that mimics fact-checking websites (Vlachos and Riedel, 2014;Wang, 2017;Augenstein et al., 2019). However, every fact-checking website has a different scale for veracity, so datasets that span multiple websites come with a normalization problem. While some datasets do not normalize the labels (Augenstein et al., 2019), some normalize them post-hoc (Kotonya and Toni, 2020a;Gupta and Srikumar, 2021;Hanselowski et al., 2019).
Explanation
While an explanation is tied to veracity prediction, only a few datasets explicitly address the problem of explainable veracity prediction (Atanasova et al., 2020a;Kotonya and Toni, 2020b;Alhindi, Petridis and Muresan, 2018). Broadly in NLP, parts of the input are often highlighted to provide an explanation for the prediction. This form of explanation is known as an extractive rationale (Zaidan, Eisner and Piatko, 2007;Kutlu, McDonnell, Elsayed and Lease, 2020). Incorporating the idea of the extractive rationale, some datasets include a sentence from the evidence along with the label (Thorne et al., 2018;Hanselowski et al., 2018;Wadden et al., 2020;Schuster et al., 2021). Although such datasets do not explicitly define evidence as a form of explanation, the line between evidence retrieval and explanation blurs if the evidence is the explanation. However, explanations differ from evidence in a few ways. In particular, explanations need to be concise for user consumption, while evidence can be a collection of documents or long documents. Explanations are also user-sensitive: evidence alone as a form of explanation may carry inherent assumptions about the user that make it less understandable for different groups of users (e.g., experts vs. non-experts).
Challenges
Claims Checkworthiness datasets are highly imbalanced, i.e., the number of checkworthy claims is relatively low compared to non-checkworthy claims (Williams, Rodrigues and Novak, 2020). Datasets are also not generalizable due to their limited domain-specific context (Guo et al., 2022). Additionally, while existing datasets cover various languages such as English, Arabic, Spanish, Bulgarian, and Dutch, they are primarily monolingual. Consequently, building multilingual checkworthiness predictors is still challenging. Much of the data in check-worthiness datasets was not originally intended to be used for classification. The criteria used by different organizations when selecting which claims to check are often subjective and may not generalize outside of the particular organization.
Some annotation practices can result in artifacts in the dataset. For example, artificially constructed false claims, such as negation-based false claims in FEVER, can lead to artifacts in models (Schuster et al., 2021). Models do not generalize well beyond the dataset because they might overfit to the annotation schema (Bansal, Nushi, Kamar, Lasecki, Weld and Horvitz, 2019). One way to identify such blind spots is by using adversarial datasets for fact-checking. Such a setting is incorporated in FEVER 2.0 (Thorne et al., 2019).
Datasets constructed for research may not always capture how fact-checkers work in practice. This leads to limitations in the algorithms built on them. For example, interviews with fact-checkers report that they tend to consider a combination of contents of the posts and associated virality metrics (indicating reach) during fact-checking (Arnold, 2020). However, most fact-checking datasets do not include virality metrics.
Evidence Retrieval Some datasets have been constructed by using a claim verbatim as a query and taking the top search results as evidence. However, some queries are better than others for retrieving desired information. Consequently, greater care might be taken in crafting effective queries or otherwise improving evidence retrieval such that resulting datasets are more likely to contain quality evidence for veracity prediction. Otherwise, poor quality evidence becomes a bottleneck for the quality of the models trained at the later stages in the fact-checking pipeline (Singh et al., 2021).
Veracity Prediction
A key challenge in veracity prediction datasets is that the labels are not homogeneous across fact-checking websites and normalizing might introduce noise.
Explanation Some datasets include entire fact-checking articles as evidence and their summaries as the form of explanation (Atanasova et al., 2020a;Kotonya and Toni, 2020b). In such cases, "explanation" components assume an already available fact-checking article. Instead, providing abstractive summaries and explaining the reasoning process over the evidence would be more valuable.
Data Generation Recent years have seen an increasing interest in the use of data generation and data augmentation for various NLP tasks (Liu, Swayamdipta, Smith and Choi, 2022;Hartvigsen, Gabriel, Palangi, Sap, Ray and Kamar, 2022;Dhole, Gangal, Gehrmann, Gupta, Li, Mahamood, Mahendiran, Mille, Srivastava, Tan et al., 2021;Kovatchev, Smith, Lee and Devine, 2021). The use of synthetic data has not been extensively explored in the context of fact-checking.
Automating Fact-checking
NLP research in automated fact-checking has primarily focused on building models for different automated fact-checking tasks utilizing existing datasets. In the following section, we highlight the broad modeling strategies employed in the literature, with a more detailed discussion of explainable methods for automated fact-checking.
General NLP Capabilities
Claim Detection and Checkworthiness While claim detection is usually implemented only as a classification task, claim checkworthiness is typically implemented both as a ranking and as a classification task (Zeng et al., 2021). As discussed earlier in the task formulation section (3.1), the broad task of claim detection can be broken down into the sub-tasks of identifying claims, filtering duplicate claims, and prioritizing claims based on their checkworthiness. Another instance of identifying claims is detecting rumors in social media streams.
Some early works in rumor detection focus on feature engineering from available metadata and the text itself (Enayet and El-Beltagy, 2017;Aker, Derczynski and Bontcheva, 2017;Zhou, Jain, Phoha and Zafarani, 2020). More advanced methods for claim detection involve LSTMs and other sequence models (Kochkina, Liakata and Augenstein, 2017). Such models are better at capturing the context of the text (Zubiaga, Liakata, Procter, Wong Sak Hoi and Tolmie, 2016). Tree-LSTMs (Ma, Gao and Wong, 2018) and hierarchical attention networks (Guo, Cao, Zhang, Guo and Li, 2018) capture the internal structure of the claim or the context in which the claim appears. Additionally, graph neural network approaches can capture the related social media activities along with the text (Monti, Frasca, Eynard, Mannion and Bronstein, 2019).
Similarly, early works in claim-checkworthiness utilize support vector machines with textual features and rank the claims in terms of their priorities (Hassan et al., 2017a). For example, Konstantinovskiy et al. (2021) build a classification model for checkworthiness by collapsing the labels to checkable vs. non-checkable claims. They build a logistic regression model that uses word embeddings along with syntax-based features (part-of-speech tags and named entities). Neural methods such as LSTMs performed well in earlier checkworthiness shared tasks (Elsayed, Nakov, Barrón-Cedeño, Hasanain, Suwaileh, Da San Martino and Atanasova, 2019). Additionally, Atanasova, Nakov, Màrquez, Barrón-Cedeño, Karadzhov, Mihaylova, Mohtarami and Glass (2019b) show that capturing context helps with the checkworthiness task as well. Models such as RoBERTa obtained higher performance in later editions of the CheckThat! shared task (Williams et al., 2020;Martinez-Rico, Martinez-Romo and Araujo, 2021) for English language claims. Fine-tuning such models for claim detection tasks has become more prevalent for claim checkworthiness in other languages as well (Williams et al., 2020).
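A minimal version of such a checkworthiness classifier might look like the sketch below, which trains a logistic regression model on a tiny invented set of checkable vs. non-checkable sentences. TF-IDF features stand in for the word embeddings and syntactic features used in the cited work, so this is only an illustration of the setup, not a reproduction of it.

```python
# Sketch: binary claim-checkworthiness classifier (checkable vs. not),
# loosely in the spirit of Konstantinovskiy et al. (2021) but using
# TF-IDF features instead of embeddings plus POS/NER features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; real datasets are far larger and imbalanced.
sentences = [
    "Unemployment fell to 4 percent last year.",          # checkable
    "The senator voted against the bill three times.",    # checkable
    "I think jazz is the most soulful genre of music.",   # opinion
    "What a beautiful morning it is today!",              # not a claim
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)

new = ["Crime rates doubled in the last decade.",
       "Reggae is clearly the best music ever made."]
# predict_proba yields a checkworthiness-style score usable for ranking.
for text, prob in zip(new, clf.predict_proba(new)[:, 1]):
    print(f"{prob:.2f}  {text}")
```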
Filtering previously fact-checked claims is a relatively new task in this domain. Shaar et al. (2020) propose an approach using BERT and BM-25 to match claims against existing fact-checking databases. Additionally, fine-tuning RoBERTa on various fact-checking datasets resulted in high performance for identifying duplicate claims (Bouziane, Perrin, Cluzeau, Mardas and Sadeq, 2020). Furthermore, a combination of the pretrained model Sentence-BERT and re-ranking with LambdaMART performed well for detecting previously fact-checked claims.
Evidence Retrieval and Veracity Prediction Evidence retrieval and veracity prediction in the pipeline can be modeled sequentially or jointly. Similar to claim detection and checkworthiness models, early works use stylistic features and metadata to arrive at veracity predictions without external evidence (Wang, 2017;Rashkin et al., 2017). Models that include evidence retrieval often use commercial search APIs or a retrieval approach such as TF-IDF or BM25 (Thorne et al., 2018). Similar to question-answering models, some works adopt a two-step approach: first a simpler model (TF-IDF or BM-25) is used at scale, and then a more complex model is used for re-ranking after the initial pruning (Thorne et al., 2018;Nie, Wang and Bansal, 2019;Hanselowski et al., 2019). Additionally, document vs. passage retrieval, or two-stage "telescoping" approaches, are adopted, where the first stage retrieves related documents and the second stage retrieves the relevant passage. Two-stage approaches are useful for scaling up applications, as the first stage is more efficient than the second. For domain-specific evidence retrieval, using domain-bound word embeddings has been shown to be effective (Zeng et al., 2021).
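The two-stage "telescoping" idea can be sketched as a cheap lexical first stage followed by a more expensive neural re-ranker over the surviving candidates. The cross-encoder checkpoint below is an assumed example, and any pairwise relevance scorer could be substituted; the claim and documents are invented.

```python
# Sketch of two-stage ("telescoping") evidence retrieval:
# stage 1 prunes candidates with cheap TF-IDF scores,
# stage 2 re-ranks the survivors with a cross-encoder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import CrossEncoder

claim = "The vaccine reduced infection rates."
documents = [
    "The WHO reports that the vaccine reduced infection rates by 60%.",
    "The senator voted against the infrastructure bill in 2021.",
    "A study found no change in infection rates after vaccination.",
    "Olympic records list the gold medal winners for every event.",
]

# Stage 1: lexical pruning keeps only the top-k candidates.
vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(documents)
sims = cosine_similarity(vec.transform([claim]), doc_matrix)[0]
candidates = [documents[i] for i in sims.argsort()[::-1][:2]]

# Stage 2: a slower cross-encoder re-ranks the surviving candidates.
# The model name is an assumed example checkpoint.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(claim, d) for d in candidates])
for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {doc}")
```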
The IR task is not always a part of the process. Instead, it is often assumed that reliable evidence is already available. While this simplifies the fact-checking task so that researchers can focus on veracity prediction, in practice evidence retrieval is necessary and cannot be ignored. Moreover, in practice one must contend with noisy (non-relevant), low quality, and biased search results during inference.
As discussed earlier in Section 3.3, assessing the reliability of gathered evidence may be necessary. If the evidence is assumed to be trustworthy, then it suffices to detect the stance of each piece of evidence and then aggregate (somehow) to induce veracity (e.g., perhaps assuming all evidence is equally important and trustworthy). However, often one must contend with evidence "in the wild" of questionable reliability, in which case assessing the quality (and bias) of evidence is an important precursor to using it in veracity prediction.
Veracity prediction utilizes textual entailment for inferring veracity over either a single document as evidence or over multiple documents. Dagan, Dolan, Magnini and Roth (2010) define textual entailment as "deciding, given two text fragments, whether the meaning of one text is entailed (can be inferred) from another text." Real-world applications often require reasoning over multiple documents (Augenstein et al., 2019;Kotonya and Toni, 2020b;Schuster et al., 2021). Reasoning over multiple documents can be done either by concatenation (Nie et al., 2019) or weighted aggregation (Nguyen et al., 2018b). Weighted aggregation virtually re-ranks the evidence considered to filter out unreliable evidence (Ma, Gao, Joty and Wong, 2019;Pradeep, Ma, Nogueira and Lin, 2021). Some approaches also use knowledge bases as the central repository of all evidence (Shi and Weninger, 2016). However, evidence is then limited to what is available in the knowledge base (Guo et al., 2022;Zeng et al., 2021). Moreover, a fundamental limitation of knowledge bases is that not all knowledge fits easily into structured relations.
Recent developments in large language models help extend the knowledge base approach. Fact-checking models can rely on pretrained models to provide evidence for veracity prediction (Lee, Li, Wang, Yih, Ma and Khabsa, 2020). However, this approach can encode biases present in the language model (Lee, Bang, Madotto and Fung, 2021).
An alternative approach is to help fact-checkers with downstream tasks by processing evidence. An example of such work is generating summaries over available evidence using BERT (Fan et al., 2020).
Limitations With the recent development of large, pre-trained language models and deep learning for NLP, we see a significant improvement across the fact-checking pipeline. With the introduction of FEVER (Thorne et al., 2018;Thorne, Vlachos, Cocarascu, Christodoulopoulos and Mittal, 2019;Aly et al., 2021) and CheckThat!, we have benchmarks for both artificial and real-life claim detection and verification models. However, even the state-of-the-art NLP models perform poorly on the benchmarks above. For example, the best performing model on the FEVER 2018 shared task (Thorne et al., 2018) reports an accuracy of 0.67 5 . Models perform worse on the multi-modal shared task FEVEROUS (Aly et al., 2021): the best performing model reports a 0.56 accuracy score 6 . Similarly, the best checkworthiness model only achieved an average precision of 0.65 for Arabic claims and 0.224 for English claims in the CheckThat! 2021 shared task for identifying checkworthiness in tweets. On the other hand, the best performing model for identifying check-worthy claims in debates reports 0.42 average precision. Surprisingly, in an earlier edition of the task (Barrón-Cedeño et al., 2020), the top performing model for checkworthiness detection reports an average precision of 0.806 (Williams et al., 2020). For the fact-checking task of CheckThat! 2021, the best performing model reports a 0.83 macro F1 score. However, the second-best model only reports a 0.50 F1 score. Given this striking gap in performance between the top system vs. others, it would be valuable for future work to benchmark systems on additional datasets in order to better assess the generality of these findings.
It is not easy to make a direct comparison between different methods that are evaluated in different settings and with different datasets (Zeng et al., 2021). Moreover, the pipeline design of automated fact-checking creates potential bottlenecks, e.g., performance on the veracity prediction task on most datasets is dependent on the claim detection task performance or the quality of the evidence retrieved. Extensive benchmarks are required to incorporate all of the prior subtasks in the fact-checking pipeline systematically (Zeng et al., 2021).
Much of AI research is faced with a fundamental trade-off between working with diverse formulations of a problem and standardized benchmarks for measuring progress. This trade-off also impacts automated fact-checking research. While there exist benchmarks such as FEVER and the CheckThat!, most models built on those benchmarks may not generalize well in a practical setting. Abstract and tractable formulations of a problem may help us develop technologies that facilitate practical adoption. However, practical adoption requires significant engineering effort beyond the research setting. Ideally, we would like to see automated fact-checking research continue to move toward increasingly realistic benchmarks while incorporating diverse formulations of the problem.
Explainable Approaches
Although the terms interpretability and explainability are often used interchangeably, and sometimes defined to be so (Molnar, 2020), we distinguish interpretability vs. explainability similar to Kotonya and Toni (2020a). Specifically, interpretability represents methods that provide direct insight into an AI system's components (such as features and variables), often requiring some algorithm-specific understanding, and often built for expert use cases such as model debugging. Explainability represents methods to understand an AI model without referring to the actual components of the system. Note that, in the task formulation section, we have also talked about explaining veracity prediction. The goal of such explanation stems from fact-checker needs to help readers understand the fact-checking verdict. Therefore, explaining veracity prediction aligns more closely with explainability than interpretability. When the distinction between explainability vs. interpretability does not matter, we follow Vaughan and Wallach (2020) in adopting intelligibility as an umbrella term for both concepts. Sokol and Flach (2019) propose desiderata for designing the user experience of machine learning applications. Kotonya and Toni (2020a) extend them in the context of fact-checking and suggest eight properties of intelligibility: actionable, causal, coherent, context-full, interactive, unbiased or impartial, parsimonious, and chronological.
Additionally, there are three dimensions specifically for explainable methods in NLP (Jacovi and Goldberg, 2020). In comparison with the available intelligibility methods in NLP (Wiegreffe and Marasovic, 2021), only a few are applied to existing fact-checking works. Below, we highlight only commonly observed explainable fact-checking methods (also noted by Kotonya and Toni (2020a)).

Attention-based Intelligibility
1. highlighting tokens in articles (Popat et al., 2018)
2. highlighting salient excerpts from evidence utilizing comments related to the post (Shu et al., 2019)
3. n-gram extraction using self-attention (Yang, Pentyala, Mohseni, Du, Yuan, Linder, Ragan, Ji and Hu, 2019)
4. attention from different sources other than the claim text itself, such as the source of tweets, retweet propagation, and retweeter properties (Lu and Li, 2020)

Rule discovery as explanations Rule mining is a form of explanation prevalent in knowledge base systems (Gad-Elrab et al., 2019; Ahmadi, Lee, Papotti and Saeed, 2019). These explanations can be more comprehensive, but as noted in the previous section, not all statements can be fact-checked via knowledge-based methods due to limitations of the underlying knowledge base itself. Some approaches provide general-purpose rule mining in an attempt to address this limitation (Ahmadi, Truong, Dao, Ortona and Papotti, 2020).
Summarization as explanations Both extractive and abstractive summaries can provide explanations for fact-checking. Atanasova et al. (2020a) provide natural language summaries to explain the fact-checking decision. They explore two different approaches -explanation generation and veracity prediction as separate tasks, and joint training of both. Joint training performs worse than separate training. Kotonya and Toni (2020b) combine abstractive and extractive approaches to provide a novel summarization approach. Brand, Roitero, Soprano, Rahimi and Demartini (2018) show that jointly training prediction and explanation generation with encoder-decoder models such as BART (Lewis, Liu, Goyal, Ghazvininejad, Mohamed, Levy, Stoyanov and Zettlemoyer, 2020) results in explanations that help the crowd perform better veracity assessment.
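A rough sketch of summarization-as-explanation, assuming an already retrieved evidence passage and an off-the-shelf abstractive summarizer, is shown next. The BART checkpoint name and the evidence text are assumptions for illustration; the cited systems fine-tune their own summarizers, often jointly with veracity prediction.

```python
# Sketch: abstractive summary of retrieved evidence as a justification.
# The checkpoint is an assumed example; cited systems train their own models.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

evidence = (
    "The World Health Organization reported that the vaccine reduced "
    "infection rates by roughly 60 percent in a trial of 40,000 adults. "
    "Independent researchers replicated the finding in two later studies, "
    "although the effect was smaller among participants over 65."
)

summary = summarizer(evidence, max_length=40, min_length=10,
                     do_sample=False)[0]["summary_text"]
print("Justification:", summary)
```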
Counterfactuals and adversarial methods Adversarial attacks on opaque models help to identify blind spots and biases and to discover data artifacts in models (Ribeiro, Wu, Guestrin and Singh, 2020). The shared task FEVER 2.0 (Thorne et al., 2019) asked participants to devise methods for generating adversarial claims to identify weaknesses in fact-checking methods. Natural language generation models such as GPT-3 can assist in formulating adversarial claims. More control over the generation can come from manipulating the input to natural language generation methods and constraining the generated text to the original vocabulary (Niewiński, Pszona and Janicka, 2019). Atanasova, Wright and Augenstein (2020b) generate claims with n-grams inserted into the input text. Other works experiment with several adversarial methods such as a rule-based adversary, semantically equivalent adversarial rules (or SEARS) (Ribeiro, Singh and Guestrin, 2018), negation, and a paraphrasing-based adversary. Adversarial attacks are evaluated based on the potency (correctness) of the example and the reduction in system performance. While methods such as SEARS and paraphrasing hurt system performance, hand-crafted adversarial examples have a higher potency score.

Limitations Intelligible methods in NLP, and specifically within fact-checking, are still in their infancy. The analysis of Kotonya and Toni (2020a) shows that most methods do not fulfill the desiderata mentioned earlier in this section. Specifically, they find that none of the existing models meet the criteria of being actionable, causal, and chronological. They also highlight that no existing method explicitly analyzes whether explanations are impartial. Some forms of explanations, such as rule-based triplets, are unbiased as they do not contain sentences or contain fragments of information (Kotonya and Toni, 2020a). Some explainable methods address a specific simplified formulation of the task. For example, Kotonya and Toni (2020b) and Atanasova et al. (2020a) both assume that expert-written fact-checking articles already exist. They provide explanations as summaries of the fact-checking article. However, in practice, a fact-checking system would not have access to such an article for an unknown claim.
Interpretable methods (non-BlackBox)
In the case of automated fact-checking, most intelligible methods focus on explaining the outcome rather than describing the process used to arrive at the outcome (Kotonya and Toni, 2020a). Moreover, not all of the tasks in the fact-checking pipeline have received equal attention for explainable methods. Kotonya and Toni (2020a) also argue that automatic fact-checking may benefit from explainable methods that provide insight into how the outcomes of earlier sub-tasks in the fact-checking pipeline impact the outcomes of later sub-tasks.
Most explainable NLP works evaluate explanation quality instead of explanation utility or faithfulness. Jacovi and Goldberg (2020) argue for a thorough faithfulness evaluation of explainable models. For example, even though attention-based explanations may provide quality explanations, they may not necessarily be faithful. Moreover, explanation utility requires separate evaluation by measuring whether explanations improve both i) human understanding of the model (Hase and Bansal, 2020) and ii) human effectiveness on the downstream task. Additionally, most intelligible methods establish only one-way communication from the model to humans. Instead, explanations might improve both model and human performance by establishing a bidirectional feedback loop.
Human-in-the-loop Approaches
Human-in-the-loop (HITL) approaches can help scale automated solutions while utilizing human intelligence for complex tasks. There are different ways of applying HITL methods, e.g., delegating sub-tasks to crowd workers (Demartini, Trushkowsky, Kraska, Franklin and Berkeley, 2013;Demartini, Difallah and Cudré-Mauroux, 2012;Sarasua, Simperl and Noy, 2012), active learning (Settles, 2009;Zhang, Lease and Wallace, 2017), interactive machine learning (Amershi, Cakmak, Knox and Kulesza, 2014;Joachims and Radlinski, 2007), and decision support systems where humans make the final decision based on model outcome and explanations (Zanzotto, 2019).
While HITL approaches in artificial intelligence are prevalent, only a few recent works employ such approaches in fact-checking. HITL approaches are predominantly applied to the veracity prediction task rather than to other parts of the pipeline. For example, Demartini et al. (2020) propose a HITL framework for combating online misinformation. However, they only consider hybrid approaches for two sub-tasks in the fact-checking pipeline: a) claim checkworthiness and b) truthfulness judgment (the same as veracity prediction). Below, we discuss the existing HITL approaches according to how each system leverages human effort for each sub-task in the fact-checking pipeline.
Claim Detection, Checkworthiness, and Prioritization Social media streams are often monitored for rumors as a part of the claim detection task (Guo et al., 2022). Farinneya, Abdollah Pour, Hamidian and Diab (2021) apply an active learning-based approach at the claim detection stage for identifying rumors on social media. In-domain data is crucial for traditional supervised methods to perform well for rumor detection (Ahsan, Kumari and Sharma, 2019), but in real-world scenarios, sufficient in-domain labeled data may not be available in the early stages of development. A semi-supervised approach such as active learning is beneficial for achieving high performance with fewer data points. Empirical results show that Tweet-BERT, along with a least-confidence-based sample selection approach, performs on par with supervised approaches using far less labeled data (Farinneya et al., 2021).
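The least-confidence selection strategy used in such active-learning setups can be sketched independently of any particular classifier: score the unlabeled pool, pick the items the model is least sure about, and send them for human annotation. The pool and the predicted probabilities below are invented for illustration.

```python
# Sketch: least-confidence sampling for active learning. The classifier's
# predicted class probabilities on an unlabeled pool are invented here.
unlabeled_pool = [
    ("Claim A circulating on social media", [0.55, 0.45]),  # very uncertain
    ("Claim B from a political speech",     [0.98, 0.02]),  # confident
    ("Claim C from a viral video",          [0.62, 0.38]),
]

def least_confidence(probs):
    """Uncertainty = 1 - probability of the most likely class."""
    return 1.0 - max(probs)

# Rank pool items by uncertainty and send the top-k to human annotators.
ranked = sorted(unlabeled_pool,
                key=lambda item: least_confidence(item[1]),
                reverse=True)
for text, probs in ranked[:2]:
    print(f"annotate: {text}  (uncertainty={least_confidence(probs):.2f})")
```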
Similarly, Tschiatschek, Singla, Gomez Rodriguez, Merchant and Krause (2018) propose a HITL approach that automatically aggregates user flags and recommends a small subset of the flagged content for expert fact-checking. Their Bayesian inference-based approach jointly learns to detect fake news and to estimate the accuracy of user flags over time. One strength of this approach is that the algorithm improves over time at identifying users' flagging accuracy, and its overall performance improves accordingly. The approach is also robust against spammers: experiments on publicly available Facebook data in which a majority of the users are adversarial show that the algorithm still performs well.
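The sketch below illustrates the general idea of reliability-aware flag aggregation in a deliberately simplified form; it is not the exact model of Tschiatschek et al. (2018). Each user's flagging accuracy is tracked with a Beta posterior that is updated whenever an expert verdict becomes available, and items are prioritized for expert review by the expected reliability of the users who flagged them.

```python
from collections import defaultdict

class FlagAggregator:
    """Toy reliability-weighted aggregation of user flags (illustrative only)."""

    def __init__(self):
        # Beta(1, 1) prior over each user's flagging accuracy.
        self.alpha = defaultdict(lambda: 1.0)
        self.beta = defaultdict(lambda: 1.0)
        self.flags = defaultdict(set)  # item_id -> set of users who flagged it

    def add_flag(self, user, item_id):
        self.flags[item_id].add(user)

    def record_expert_verdict(self, item_id, is_fake):
        # Users who flagged a truly fake item were right; otherwise wrong.
        for user in self.flags[item_id]:
            if is_fake:
                self.alpha[user] += 1.0
            else:
                self.beta[user] += 1.0

    def priority(self, item_id):
        # Sum of expected user accuracies; higher means more worth checking.
        return sum(self.alpha[u] / (self.alpha[u] + self.beta[u])
                   for u in self.flags[item_id])

    def recommend(self, k=10):
        # Items most worth sending to expert fact-checkers.
        return sorted(self.flags, key=self.priority, reverse=True)[:k]
```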
Duke's Tech & Check team implemented HITL at the claim check-worthiness layer (Adair and Stencel, 2020). To avoid flagging claims that are not actually check-worthy, a human expert would sort claims detected by ClaimBuster (Hassan, Zhang, Arslan, Caraballo, Jimenez, Gawsane, Hasan, Joseph, Kulkarni, Nayak et al., 2017b), select the ones deemed more important for fact-checkers, and email them to several organizations. In essence, this approach helped fact-checkers prioritize which claims to check through an additional level of filtering. Several fact-checks published on PolitiFact originated from alerts in the Tech & Check emails.
Note that the CheckThat! lab (Shaar et al., 2021b; Barrón-Cedeño et al., 2020; Atanasova, Nakov, Karadzhov, Mohtarami and Da San Martino, 2019a) is a popular shared task for claim detection, check-worthiness, and prioritization. However, such shared tasks often have no submissions that employ HITL methodologies. Shared tasks designed around HITL approaches could encourage more solutions that complement the limitations of model-only approaches.
Evidence Retrieval and Veracity Prediction
Most work in HITL fact-checking caters to veracity prediction, and only a few consider evidence retrieval as a separate task. While there is a body of literature on HITL approaches in information retrieval (Chen and Jain, 2013; Demartini, 2015), we know of no work in that direction for fact-checking. Shabani, Charlesworth, Sokhn and Schuldt (2021) leverage HITL approaches for providing feedback about claim source, author, message, and spelling (SAMS). Annotators answer four yes/no questions about whether the article has a source, an author, a clear and unbiased message, and any spelling mistakes. Integrating these human-provided features into a machine learning pipeline resulted in a 7.1% accuracy increase. However, the evaluation is performed on a small dataset with claims related to Covid-19, and it is unclear whether the approach would generalize outside of that domain. Moreover, human effort could be further reduced by automating the spelling and grammar checks. SAMS may also be quite limited in real-life situations, as carefully crafted misinformation often looks like real news; model-generated fake news can successfully fool annotators (Zellers, Holtzman, Rashkin, Bisk, Farhadi, Roesner and Choi, 2020), and SAMS might likewise fail to flag it.

Qu, Barbera, Roitero, Mizzaro, Spina and Demartini (2021a) and Qu, Roitero, Mizzaro, Spina and Demartini (2021b) provide an understanding of how human and machine confidence scores can be leveraged to build HITL approaches for fact-checking. They consider explicit self-reported annotator confidence and compute implicit confidence from the standard deviation among ten crowd workers. Model confidence is obtained by bootstrapping (Efron and Tibshirani, 1985) ten different versions of the model and computing the standard deviation over the scores returned by the softmax layer. Their evaluation shows that explicit crowd confidence and model confidence are poor indicators of accurate classification decisions. Although the crowd and the model make different mistakes, there is no clear signal that confidence is related to accuracy. However, they show that implicit crowd confidence can be a useful signal for identifying when to engage experts to collect labels. A more recent study shows that the ratings of a politically balanced crowd of ten correlate with the average rating of three fact-checkers (Allen, Arechar, Pennycook and Rand, 2020). Gold, Kovatchev and Zesch (2019) also find that annotations by a crowd of ten correlate with the judgments of three annotators for textual entailment, which is utilized by veracity prediction models.
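A minimal sketch of the two confidence signals described above is given below, assuming ten crowd ratings per claim and ten bootstrapped model replicas; the function names and the use of numpy are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def implicit_crowd_confidence(ratings):
    """Implicit confidence from disagreement among crowd workers.

    ratings: list of (roughly ten) numeric truthfulness ratings for one claim.
    Lower standard deviation means higher implicit confidence.
    """
    return 1.0 / (1.0 + np.std(ratings))

def bootstrap_model_confidence(models, claim_features):
    """Model confidence from an ensemble of bootstrapped replicas.

    models: list of (roughly ten) classifiers trained on bootstrap resamples,
        each exposing predict_proba(features) -> (n_classes,) softmax scores.
    Returns the per-class standard deviation across replicas; small values
    indicate the ensemble agrees and the prediction is more stable.
    """
    scores = np.stack([m.predict_proba(claim_features) for m in models])
    return scores.std(axis=0)
```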
A series of studies shows that crowd workers can reliably identify misinformation (Roitero, Soprano, Fan, Spina, Mizzaro and Demartini, 2020a; Roitero, Soprano, Portelli, Spina, Della Mea, Serra, Mizzaro and Demartini, 2020b; Soprano, Roitero, La Barbera, Ceolin, Spina, Mizzaro and Demartini, 2021). Furthermore, Roitero et al. (2020b) show that crowd workers can not only identify false claims but also retrieve proper evidence to justify their annotations. One weakness of this study is that it only asks users to provide one URL as evidence; in practice, fact-checking might require reasoning over multiple sources of information. Although these studies do not propose novel HITL solutions, they provide empirical evidence and insights about where crowd workers can be engaged reliably in the fact-checking pipeline.

Nguyen et al. (2018b) propose joint modeling of crowd annotations and machine learning to detect the veracity of textual claims. A key strength of the model is that it assumes all annotators can make mistakes, which is realistic given that fact-checking is a difficult task. Another strength is that the model allows users to import their knowledge into the system. Moreover, this HITL approach can collect on-demand stance labels from the crowd and incorporate them into veracity prediction. Empirical evaluation shows that this approach achieves strong predictive performance. A follow-up study provides an interactive HITL tool for fact-checking (Nguyen et al., 2018a). Nguyen, Weidlich, Yin, Zheng, Nguyen and Nguyen (2020) propose a HITL system that minimizes user effort and cost: users validate algorithmic predictions, but only the predictions that are most beneficial for improving the system. The system provides a guided interaction to users and incrementally improves as users engage with it.
It is important to note that research on crowdsourcing veracity judgments is at an early stage. Different factors, such as demographics, political leaning, the criteria used to determine assessor expertise (Bhuiyan, Zhang, Sehat and Mitra, 2020), cognitive factors (Kaufman, Haupt and Dow, 2022), and even the rating scale (La Barbera, Roitero, Demartini, Mizzaro and Spina, 2020), lead to different levels of alignment with expert ratings. Bhuiyan et al. (2020) outline research directions for designing better crowd processes, specific to different types of misinformation, for the successful utilization of crowd workers.
Explaining Veracity Prediction HITL systems in fact-checking often use veracity explanations to correct model errors. As discussed earlier, Nguyen et al. (2018a) provide an interpretable model that allows users to impart their knowledge when the model is wrong, and empirical evaluation shows that users could do so. Similarly, Zhang, Rudra and Anand (2021b) propose a method that collects user feedback on explanations. Note that this method explains veracity prediction outcomes based on the retrieved evidence and its stance; users provide feedback on the stance and relevance of the retrieved evidence. The proposed approach employs lifelong learning, which enables the system to improve over time. Currently, there is no empirical evaluation of this system that establishes its effectiveness.
Although natural language generation models continue to improve (Radford, Wu, Child, Luan, Amodei, Sutskever et al., 2019), generating abstractive fact-checking explanations is still in its infancy (Kotonya and Toni, 2020b). HITL methods could be leveraged to help write reports that justify fact-checking explanations.
Limitations After reviewing existing HITL approaches across different fact-checking tasks, we also list several limitations, as follows. First, some HITL approaches adopt interpretable models to integrate human input, but the resulting models do not perform as well as state-of-the-art deep learning models (Nguyen et al., 2018b,a). Farinneya et al. (2021) apply HITL approaches to scale up rumor detection from a limited amount of annotated data. Although the approach generalizes well to a new topic in a few-shot manner, one weakness is that data from other domains or topics causes high variance in model performance. Consequently, in-domain performance might degrade when out-of-domain data is introduced into model training. This issue may hinder the model's generalizability in practice, especially where a clear demarcation between topic domains is not possible.
More importantly, there is a lack of empirical studies on how to apply HITL approaches to fact-checking for practical adoption. Although HITL approaches provide a mechanism to engage humans in the process of model development, several human factors, such as usability, intelligibility, and trust, become important when applying these methods in real-world use cases. Fact-checking is a time-sensitive task and requires expertise to process complex information from multiple sources (Graves, 2017). Fact-checkers and policy makers are often skeptical about any automated or semi-automated solutions, as this type of work requires human creativity and expertise (Arnold, 2020; Micallef et al., 2022). Therefore, more empirical evidence is needed to assess the effectiveness of applying different HITL approaches to automated fact-checking.
Existing Tools for Fact-checking
In the previous section, we reviewed current NLP technologies for fact-checking in detail. We now extend our review of automated fact-checking to the HCI literature and discuss existing practices for applying fact-checking in real-world tools that assist human fact-checkers. In brief, there is a lack of a holistic review of fact-checking tools from a human-centered perspective. Additionally, we found that the articulation of work between human labor and AI tools remains opaque in this field. Research questions include but are not limited to: 1) how can NLP tools facilitate human work in different fact-checking tasks? 2) how can we incorporate user needs and leverage human expertise to inform the design of automated fact-checking?
In this section, we examine current real-world tools that apply NLP technologies in different stages of fact-checking and clarify the main use cases of these tools. We argue that more research concerning human factors for building automated fact-checking, such as user research, human-centered design, and usability studies, should be conducted to improve the practical adoption of automated fact-checking. These studies help us identify the design space of applying explainable and HITL approaches for real-world NLP technologies.
Claim Detection and Prioritization
The first step in claim detection is sourcing content to possibly check. On end-to-end encrypted platforms, such as WhatsApp, Telegram, and Signal, crowdsourced tip-lines play a vital role in identifying suspicious content that is not otherwise accessible (Kazemi et al., 2021b). As another example, Check from Meedan 7, a tip-line service tool, also helps fact-checkers monitor fake news as part of in-house social media monitoring. User flagging of suspect content on social media platforms such as Facebook is also a valuable signal for identifying such content, and crowdsourcing initiatives like Twitter's Birdwatch can further help triage and prioritize claims for further investigation.
In the stage of finding and choosing claims to check, fact-checkers assess the qualities of a claim that are relevant to fact-checking and decide whether to fact-check it (Graves, 2017; Micallef et al., 2022). NLP models for claim detection, claim matching, and check-worthiness are useful for assisting this decision-making process. However, integrating them into real-world tools that help fact-checkers prioritize what to check requires more personalized effort. Graves (2018a) points out that it is important to design the aforementioned models to cater to fact-checking organizations' interests, stakeholder needs, and changing news trends.
As one such quality of a claim, checkability can be objectively analyzed by whether a claim contains one or more purported facts that can be verified (Section 3.1). Fact-checkers find it useful to apply models that identify checkable claims in their existing workflow because the models help them filter out irrelevant content and uncheckable claims when choosing claims to check (Arnold, 2020). ClaimBuster, one of the well-known claim detection tools, is built to find checkable claims in large volumes of text (Hassan et al., 2017a). Claim detection can also be integrated into speech recognition tools to spot claims in live speech (Adair, 2020).
Additionally, if a claim has already been fact-checked, fact-checkers can skip it and prioritize claims that have not been checked. As a relatively new NLP task, claim matching has been integrated into some off-the-shelf search engines and fact-checking tools to help fact-checkers find previously fact-checked claims. For example, Google Fact Check Explorer 8 can retrieve previously fact-checked claims by matching similar fact-check content to user input queries. Similarly, with Meedan's Check, if users send a tip with fake news that has been previously fact-checked, the tool helps fact-checkers retrieve the previous fact-check and send it back to users.
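A bare-bones version of such claim matching can be implemented as nearest-neighbor search over sentence embeddings. In the sketch below, `embed` is a placeholder for any sentence-embedding model (including multilingual ones, as discussed later); the function and threshold are illustrative assumptions, not the method used by any specific tool.

```python
import numpy as np

def match_claim(query_claim, fact_checked_claims, embed, top_k=5, threshold=0.8):
    """Retrieve previously fact-checked claims similar to a new claim.

    embed: callable mapping a list of strings to an (n, d) array of
        L2-normalized sentence embeddings (placeholder for a real model).
    Returns (index, cosine similarity) pairs above the threshold.
    """
    query_vec = embed([query_claim])[0]       # shape: (d,)
    claim_vecs = embed(fact_checked_claims)   # shape: (n, d)
    sims = claim_vecs @ query_vec             # cosine similarity for normalized vectors
    ranked = np.argsort(-sims)[:top_k]
    return [(int(i), float(sims[i])) for i in ranked if sims[i] >= threshold]
```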
Whether or not to fact-check a claim depends on an organization's goals and interests, and tools built for claim detection need to take such interests into account. For example, Full Fact developed a claim detection system that classifies claims into different categories, such as quantity, predictions, correlation or causation, personal experience, and laws or rules of operation (Konstantinovskiy et al., 2021). The claim categories were designed by their fact-checkers to serve their needs when fact-checking UK political news in live fact-checking situations. Identifying certain types of claims, such as quantity or correlation/causation claims, might be particularly useful for fact-checkers evaluating the credibility of politicians' statements. The system also helps tailor fact-checkers' downstream tasks, such as fact-check assignment and automated verification of statistical claims.
Fact-checkers also use social media monitoring tools to find claims to check, such as CrowdTangle, TweetDeck, and Facebook's (unnamed) fact-checking tool, but these tools are not very effective at detecting checkable claims. Some fact-checkers reported that only roughly 30% of claims flagged by Facebook's fact-checking tool were actually checkable (Arnold, 2020). Low-hanging fruit would be to integrate claim detection models into these social media monitoring tools so that it is easier for fact-checkers to identify claims that are both viral and checkable. Additionally, these tools should enable fact-checkers to track particular figures, institutions, or agencies according to their fact-checking interests and stakeholder needs, so that the tools can better identify and prioritize truly check-worthy claims. An important question in implementing such systems is how to measure the virality of a claim and its change over time.
It would also be useful to integrate veracity prediction into previous fact-checking tools because fact-checkers may pay the most attention to claims 9 that are suspect and uncertain (since obviously true or false claims likely do not require a fact-check). However, information or data points that are used to give such predictions should also be provided to fact-checkers. If sources, evidence, propagation patterns, or other contextual information that models use to predict claim veracity can be explained clearly for fact-checkers, they can also triage these indicators to prioritize claims more holistically.
Tools for Evidence Retrieval
After finding and prioritizing which claims to check, fact-checkers investigate claims through three main activities: 1) decomposing claims, 2) finding evidence, and 3) tracing the provenance of claims and their spread. These three activities are intertwined, with fact-checkers using different information-seeking tools throughout the fact-checking process. Fact-checkers search for evidence by decomposing claims into sub-questions, and evidence found while investigating a claim may further modify or add to those sub-questions (Singh et al., 2021). By iteratively investigating claims via online search, fact-checkers reconstruct the formation and spread of a claim to assess its truth (Graves, 2017). In this section, we discuss the utility of existing information-seeking tools, including off-the-shelf search engines and domain-specific databases, that assist fact-checkers in each activity.
Claim decomposition is not an activity that qualitative researchers have specifically reported or analyzed in their fact-checking studies, but more details can be found in fact-checking organizations' descriptions of their methodology 10 and in how fact-checkers approach complex claims in their fact-checks 11. Claim decomposition refers to how fact-checkers interpret ambiguous terms in a claim and set the boundaries within which to find evidence. Decomposing claims effectively requires keen curiosity and news judgment, cultivated through years of practice. Unfortunately, we are not aware of any existing tools that facilitate this process.
The traditional methodology for decomposing claims is to ask sub-questions. Recent NLP studies simulate this process by formulating it as a question-answering task (Fan et al., 2020; Chen et al., 2022): researchers extract justifications from existing fact-checks and crowdsource sub-questions that decompose the claim. For automated fact-checking, this NLP task might be very beneficial for improving evidence retrieval by automatically decomposing claims into smaller checkable queries. Although it is difficult for NLP to match the abilities of professional fact-checkers, it might help scale up the traditional, human fact-checking process. It could also help the public, new fact-checkers, or journalists investigate complex claims and search for evidence more effectively.
How fact-checkers find evidence is usually a domain-specific reporting process: contacting experts or looking for specific documents from reliable sources (Graves, 2017; Micallef et al., 2022). Instead of conducting random searches online, most fact-checkers maintain a list of reliable sources in which to look for evidence. Tools designed for searching domain-specific datasets can also help fact-checkers find evidence. For example, Li, Fang, Lou, Li and Zhang (2021) built an analytical search engine that retrieves COVID-19 news data and summarizes it in an easy-to-digest, tabular format. The system can decompose analytical queries into structured entities and extract quantitative facts from news data. Furthermore, if evidence retrieval is accurate enough for in-domain datasets, such a system can go a step further and auto-verify domain-related claims. We provide more detailed use cases of veracity prediction in Section 6.3.
Fact-checkers mainly use off-the-shelf search engines, such as Google, Bing, etc., to trace a claim's origin from publicly accessible documents (Beers, Haughey, Melinda, Arif and Starbird, 2020; Arnold, 2020). Other digital datasets, such as LexisNexis and the Internet Archive, are also useful for fact-checkers to trace a claim's origin. To capture the formation and change of a claim, search engines should not only filter unrelated content, but also retrieve both topically and evidentially relevant content. Hasanain and Elsayed (2021) report that most topically relevant pages retrieved from Google do not contain evidential information, such as statistics, quotes, entities, or other types of facts. Additionally, most built-in search engines in social media platforms, such as Twitter and Facebook, only filter "spreadable" content, not "credible" content (Alsmadi, Alazzam and AlRamahi, 2021).
Furthermore, these off-the-shelf search engines do not support multilingual search, so it is difficult for fact-checkers to trace claims that have been translated from other languages (Graves, 2017; Nakov et al., 2021a). NLP researchers have started to use multilingual embedding models to represent claim-related text in different languages and match it against existing fact-checks (Kazemi, Gaffney, Garimella and Hale, 2021a). This work not only helps fact-checkers find previously fact-checked claims from other languages more easily, but also helps them examine how a claim is transformed and reshaped by media in different languages and socio-political contexts.
Domain-specific Tools for Claim Verification
As discussed in Sections 3.2 and 3.3, most veracity prediction models are grounded in the collected evidence and the claim. To build an end-to-end claim verification system, NLP developers need to construct domain-specific datasets incorporating both claims and evidence. Unlike complex claims that contain multiple arguments and require decomposition, claims that have a simple linguistic structure with purported evidence, or that contain statistical facts, can be verified automatically. Karagiannis, Saeed, Papotti and Trummer (2020) built CoronaCheck, a search engine that can directly verify Covid-19-related statistical claims by retrieving official data curated by experts (Dong, Du and Gardner, 2020). Full Fact (The Poynter Institute, 2021) took a similar approach to verify statistical macroeconomic claims by retrieving evidence from UK parliamentary reports and national statistics. Additionally, Wadden et al. (2020) built a scientific claim verification pipeline that uses abstracts containing evidence to verify a given scientific claim.
However, pitfalls remain if fact-checkers use these domain-specific verification tools in practice. For example, the CoronaCheck tool cannot check the claim "The Delta variant causes more deaths than the Alpha variant" simply because the database does not contain fine-grained death statistics for Covid variants. Additionally, checking a statistical or scientific claim might only be one part of checking a more complex claim, which requires fact-checkers to contextualize the veracity of previous statistical or scientific checks. In general, domain-specific tools are clearly valuable when available, though in practice they are often incomplete and insufficient on their own for checking complex claims.
Discussion
In this review, we have 1) horizontally outlined the research on applying NLP technologies to fact-checking, from initial task formulation to eventual tool adoption, and 2) vertically discussed the capabilities and limitations of NLP for each step of the fact-checking pipeline. We perceive a lack of research that bridges the two to assist fact-checkers. Explainable and HITL approaches leverage both human and computational intelligence from a human-centered perspective, but there is a need for actionable guidance on utilizing both methods to design useful fact-checking tools. In this section, we propose several research directions to explore the design space of applying NLP technologies to assist fact-checkers.
Distributing Work between Human and AI for Mixed-initiative Fact-checking
The practice of fact-checking has already become a type of complex and distributed computer-mediated work (Graves, 2018a). Although Graves (2017) breaks down a traditional journalist fact-checking pipeline into five steps, the real situation of fact-checking a claim is more complicated (Juneja and Mitra, 2022). Various AI tools are adopted dynamically and diversely by fact-checkers to complete different fact-checking tasks (Arnold, 2020;Beers et al., 2020;Micallef et al., 2022).
Researchers and practitioners increasingly believe that future fact-checking should be a mixed-initiative practice in which humans perform specific tasks while machines take over others (Nguyen et al., 2018a; Lease, 2020; Nakov et al., 2021a). To embed such hybrid and dynamic human-machine collaboration into existing fact-checking workflows, the task arrangement between human and AI needs to be articulated clearly by understanding the expected outcomes and criteria for each. Furthermore, designing a mixed-initiative tool for different fact-checking tasks requires a more fine-grained task definition for human and AI (Lease, 2018, 2020). In Section 5.3, we discuss several studies highlighting the role of humans in the fact-checking workflow, e.g., a) human experts select check-worthy claims surfaced by claim detection tools (Hassan et al., 2017b) and deliver them to fact-checkers, b) crowd workers judge the reliability of claim sources (Shabani et al., 2021), or c) crowd workers flag potential misinformation (Roitero et al., 2020b) to improve veracity prediction. All of these human activities are examples of micro-tasks within a mixed-initiative fact-checking process.
Prior work in crowdsourcing has shown that it is possible to effectively break down the academic research process and utilize crowd workers to partake in smaller research tasks (Vaish, Davis and Bernstein, 2015;Vaish, Gaikwad, Kovacs, Veit, Krishna, Arrieta Ibarra, Simoiu, Wilber, Belongie, Goel et al., 2017). Given this evidence, we can also break down sub-tasks of a traditional fact-checking process into more fine-grained tasks. Therefore, key research questions include: a) How can we design these micro-tasks to facilitate each sub-task of fact-checking, and b) What are the appropriate roles for human and AI in different micro-tasks?
To effectively orchestrate human and AI work, researchers need to understand the respective roles of human and AI and how they will interact with one another, because this directly affects whether humans decide to take AI advice (Cimolino and Graham, 2022). Usually, if AI aims to assist high-stakes decision-making tasks, such as recidivism prediction (Veale, Van Kleek and Binns, 2018) or medical treatment (Cai, Winter, Steiner, Wilcox and Terry, 2019), considerations of risk and trust are important factors in whether people adopt such AI assistants (Lai, Chen, Liao, Smith-Renner and Tan, 2021). In the context of fact-checking, if AI directly predicts the verdict of a claim, fact-checkers may be naturally skeptical about how the AI arrives at such a prediction (Arnold, 2020). On the other hand, if AI only helps to filter claims that are uncheckable, such as opinions and personal experience, fact-checkers may be more willing to use such automation with less concern about how the AI achieves it. Deciding whether a claim is true or false is a high-stakes decision-making task for fact-checkers, while filtering uncheckable claims is a less critical but tedious task that fact-checkers want automation to help with. Therefore, the extent of human acceptance of AI varies according to how humans assess the task assigned to the AI, implicating different human factors, such as trust, transparency, and fairness. Researchers need to specify or decompose these human factors into key variables that can be measured during the model development process. Given a deep understanding of the task relationship between human and AI, researchers can then ask further research questions on how to apply an explainable approach, or employ a HITL system versus a fully automated solution, to conduct fact-checking. Here we list several specific research topics that contain mixed-initiative tasks, including: a) assessing claim difficulty leveraging crowd workers, b) breaking down a claim into a multi-hop reasoning task and engaging the crowd to find information relevant to the sub-claims, and c) designing micro-tasks to parse the large number of documents retrieved by web search and identify sources that contain the evidence needed for veracity prediction.
Human-centered Evaluation of NLP Technology for Fact-Checkers
We begin this section by proposing key metrics from human factors for evaluating systems (i.e., what to measure and how to measure them): accuracy, time, model understanding, and trust (Section 7.2.1). Following this, we further propose a template for an experimental protocol for human-centered evaluations in fact-checking (Section 7.2.2).
Metrics
Accuracy Most fact-checking user studies assume task accuracy is the primary user goal (Nguyen et al., 2018a; Mohseni, Yang, Pentyala, Du, Liu, Lupfer, Hu, Ji and Ragan, 2021). Whereas non-expert users (i.e., social media users or other content consumers) might be most interested in the veracity outcome along with a justification, fact-checkers often want to use automation and manual effort interchangeably in their workflow (Arnold, 2020; Nakov et al., 2021a). Thus, we need a more fine-grained approach to measuring accuracy beyond final veracity prediction accuracy. For fine-grained accuracy evaluation, it is also crucial to capture fact-checker accuracy, particularly on the sub-tasks for which they use the fact-checking tool.
With the assumption that "ground truth" exists for all of the sub-tasks in the fact-checking pipeline, accuracy can be computed by comparing user answers with the ground truth. Note that measuring sub-task-level accuracy is trickier than measuring end-to-end fact-checking accuracy; it can be captured by conducting separate experiments for each sub-task. Suppose the point of interest is to understand user performance at detecting claim check-worthiness: in that case, we need to collect additional data specific to the claim check-worthiness task.
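The sketch below shows one straightforward way to compute such sub-task-level accuracy from logged study data, assuming ground-truth labels exist per sub-task; the record layout is an illustrative assumption.

```python
from collections import defaultdict

def subtask_accuracy(records):
    """Compute per-sub-task user accuracy against ground truth.

    records: iterable of dicts such as
        {"subtask": "check_worthiness", "user_answer": "yes", "gold": "yes"}
    Returns a dict mapping each sub-task to the fraction of correct answers.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["subtask"]] += 1
        correct[r["subtask"]] += int(r["user_answer"] == r["gold"])
    return {task: correct[task] / total[task] for task in total}
```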
In some cases, it is possible to merge multiple sub-tasks for evaluation purposes. For example, Miranda, Nogueira, Mendes, Vlachos, Secker, Garrett, Mitchel and Marinho (2019) evaluate the effectiveness of their tool with journalists by capturing two key variables: a) the relevance of the retrieved evidence, and b) the accuracy of the predicted stance. This method provides essential insight into evidence retrieval, stance detection, and the final fact-checking task. The exact details of such metrics will require specific changes according to the affordances of the tool being evaluated.
Note that both time and accuracy measures need to control for claim properties. For example, if a claim has been previously fact-checked, it would take less time to fact-check such claims. On the other hand, a new claim that is more difficult to assess would require more time.
Model Understanding Fact-checkers want to understand the tools they use. For example, Arnold (2020) points out that fact-checkers expressed a need to understand CrowdTangle's algorithm for detecting viral content on various social media platforms. Similarly, Nakov et al. (2021a) observed a need for increased system transparency in the fact-checking tools used by different organizations. Lease (2018) argues that transparency is equally important for non-expert users to understand the underlying system and make an informed judgment. Although model understanding is not a key variable related to user performance, it is important for practical adoption.
To measure understanding, users could be asked to self-report their level of understanding on a Likert-scale. However, simply asking participants if they understand the algorithm is not a sufficient metric. For example, it does not indicate whether participants will be able to simulate tool behavior (Hase and Bansal, 2020). We suggest the following steps for measuring model understanding based on prior work (Cheng, Wang, Zhang, O'Connell, Gray, Harper and Zhu, 2019).
1. Decision Prediction. To capture users' holistic understanding of a tool, users could be provided claims and asked: "What label would the tool assign to this claim?"
2. Alternative Prediction. Capturing how changes in the input influence the output can also measure understanding, e.g., by asking users what label the tool would assign to a claim when parts of the input are changed. Imagine a tool that shows users the evidence it considered to arrive at a veracity conclusion; if certain pieces of evidence were swapped, how would that be reflected in the model prediction?
Trust For practical adoption, trust in a fact-checking tool is crucial across all user groups. While model understanding is often positively correlated with trust, understanding alone may not suffice to establish trust. In this domain, fact-checkers and journalists may have less trust in algorithmic tools (Arnold, 2020). On the other hand, there is also the risk of over-trust, where users blindly follow model predictions (Nguyen et al., 2018a; Mohseni et al., 2021). To maximize tool effectiveness, we would want users neither to dismiss all model predictions out of hand (complete skepticism) nor to follow all model predictions blindly (complete faith). Instead, it is important to calibrate user trust for the most effective tool usage. We suggest measuring a notion of calibrated trust (Lee and See, 2004): how often users abide by correct model decisions and override erroneous model decisions. To measure calibrated trust, we imagine the confusion matrix shown in Figure 2: the rows denote correct vs. incorrect model predictions, while the columns denote correct vs. incorrect user predictions. A user who blindly followed all model predictions would have their behavior captured entirely by the main (primary) diagonal, whereas a user who skeptically rejected all model predictions would have their behavior captured entirely in the secondary diagonal. The ideal user's behavior would be captured entirely in the first column: accepting all correct model predictions and rejecting all incorrect model predictions. To promote effective human-AI teaming, AI tools should assist their human users in developing strong calibrated trust, so that they appropriately trust and distrust model predictions as each case merits.
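As a concrete illustration, the following sketch computes two simple calibrated-trust rates from logged (model prediction, user decision, gold label) triples: how often the user accepts correct model predictions and how often the user overrides incorrect ones. The record format is our own assumption.

```python
def calibrated_trust(records):
    """Summarize calibrated trust from a user study log.

    records: iterable of dicts like
        {"model_pred": "false", "user_pred": "false", "gold": "false"}
    Returns the acceptance rate on correct model predictions and the
    override rate on incorrect model predictions (None if undefined).
    """
    accept_correct = n_correct = 0
    override_wrong = n_wrong = 0
    for r in records:
        model_right = r["model_pred"] == r["gold"]
        user_follows = r["user_pred"] == r["model_pred"]
        if model_right:
            n_correct += 1
            accept_correct += int(user_follows)
        else:
            n_wrong += 1
            override_wrong += int(not user_follows)
    return {
        "accept_correct_rate": accept_correct / n_correct if n_correct else None,
        "override_incorrect_rate": override_wrong / n_wrong if n_wrong else None,
    }
```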
Beyond calibrated trust, one could also measure quantitative trust by adopting methodologies from the human-machine trust literature (Lee and Moray, 1992). For example, Cheng et al. (2019) adapted prior work into a 7-point Likert scale, and a similar scale can be reused for evaluating trust in a fact-checking tool. For example, we can create five different Likert scales to measure users' agreement (or disagreement) with the following statements (a simple scoring sketch follows the list):
• I understand the fact-checking tool.
• I can predict how the tool will behave.
• I have faith that the tool would be able to cope with different fact-checking tasks.
• I trust the decisions made by the tool.
• I can count on the tool to provide reliable fact-checking decisions.
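A simple way to turn these five 7-point items into a per-participant trust score is to average them; the sketch below assumes responses are coded 1 (strongly disagree) to 7 (strongly agree) and that all items are positively worded, as above.

```python
def trust_score(responses):
    """Average a participant's five 7-point Likert responses into one score.

    responses: list of five integers in [1, 7], one per statement above.
    """
    assert len(responses) == 5 and all(1 <= r <= 7 for r in responses)
    return sum(responses) / len(responses)

# Example: a participant who mostly agrees with the statements.
# trust_score([6, 5, 4, 6, 5]) -> 5.2
```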
Additional factors Individual differences among users might result in substantial variation in experimental outcomes. For example, varying technical literacy (Cheng et al., 2019), prior knowledge about the claims, and users' political leaning (Thornhill, Meeus, Peperkamp and Berendt, 2019) might influence user performance on the task while using fact-checking tools. Thus, it is valuable to capture these factors in the study design. For example:
1. Technical Literacy: Users' familiarity with popular technology tools (e.g., recommendation engines, spam detectors) and their programming experience (Cheng et al., 2019), as well as familiarity with existing fact-checking tools.
2. Media Literacy: Users' familiarity with 1) the fact-checking process, and 2) fact-checks from popular organizations such as PolitiFact and FactCheck.Org.
3. Demographics: Users' education level, gender, age, and political leaning.
Quantitative measures alone are not sufficient as they do not capture certain nuances about how effectively a tool integrates into a fact-checker's workflow. For example, even if users understand and trust the working principle of a tool, it is unclear why they do so. Hence, users might be asked a few open-ended questions at the end of the study to gather qualitative insights. Such questions could include:
1. Describe your understanding of the tool. Do any specific aspects of its design seem to assist or detract from your understanding of how it works?
2. Why do you trust or not trust the tool?
3. Would you use this tool beyond this study, and if so, in what capacity?
Experimental Protocol
One strategy to capture the aforementioned metrics is to design a mixed-methods study. Here we outline a template for such a study. Imagine the goal were to measure user performance on fact-checking using a new tool (call it tool A) compared to an existing tool (tool B). Fact-checking tasks in the real world might be influenced by user priors about the claims being checked, so a within-subject study protocol may be more appropriate to account for such priors (Shi, Bhattacharya, Das, Lease and Gwizdka, 2022).
1. Pre-task: Users would first be asked to fact-check a set of claims using the pre-existing tool B. Tool B can be replaced with different baselines, depending on the particular use case, ranging from simple web search by non-expert users to proprietary tools used by fact-checkers and journalists. Users would be asked to think aloud at this stage.
2. Learning: At this stage, users would familiarize themselves with the new tool (tool A) by fact-checking a different set of claims from the first. Ground truth would also be accessible to the users so they can form a prior about what kinds of mistakes the tool might make. Claims here would be selected at random to reflect tool capabilities. Moreover, tool performance metrics would be given to the users as additional information, and users would be encouraged to ask questions about the tool.
3. Prediction: Users would now be asked to fact-check the same claims from step 1, but this time leveraging tool A, again thinking aloud. Users could simply guess the answers and achieve a high accuracy score; thus, the claims selected for stages (1) and (3) would be a balanced set with an equal distribution of true positive, true negative, false positive, and false negative samples. This idea is adopted from prior work (Hase and Bansal, 2020).
4. Post-task survey: Users would take a short survey capturing trust, understanding, technical literacy, media literacy, and demographic information.
5. Post-task interview: Upon completion of these steps, users would be interviewed with open-ended questions to gather insights about their understanding of and trust in the system.
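The balanced claim selection used in stages (1) and (3) can be scripted as below, assuming each candidate claim carries the model's prediction and the gold label and that each quadrant contains enough claims; quadrant names follow Figure 2 and the helper is an illustrative assumption rather than part of the protocol itself.

```python
import random
from collections import defaultdict

def balanced_claim_set(claims, per_quadrant=5, seed=0):
    """Sample an equal number of TP/TN/FP/FN claims for the study stages.

    claims: list of dicts like {"text": ..., "model_pred": True, "gold": True}
    Assumes each quadrant has at least `per_quadrant` claims available.
    """
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for c in claims:
        # (model_pred, gold): (True, True)=TP, (False, False)=TN,
        # (True, False)=FP, (False, True)=FN
        buckets[(c["model_pred"], c["gold"])].append(c)
    selected = []
    for key in [(True, True), (False, False), (True, False), (False, True)]:
        selected.extend(rng.sample(buckets[key], per_quadrant))
    rng.shuffle(selected)
    return selected
```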
The measures and study protocol above could be useful for evaluating any new fact-checking system against an existing system or practice. Specifics might vary depending on the target user group and the tool's intended purpose. Above, we used the whole fact-checking pipeline to illustrate our experimental protocol; however, the technique can be applied to other sub-tasks of automated fact-checking, provided that we have ground truth for the outcome of that sub-task. For example, let us assume a new claim detection tool has been proposed that takes claims from a tip-line (Kazemi et al., 2021b), and fact-checkers currently use an existing claim-matching algorithm to filter out already fact-checked claims. If we replace tool B above with the existing claim-matching algorithm and tool A with the proposed claim detection tool, we can apply the protocol described above and evaluate how users perform on the claim detection task with the new tool compared to the existing one in terms of accuracy, time, understanding, and trust. While we have proposed an ideal, extensive version of an evaluation protocol for new fact-checking tools, the actual protocol used in practice can be tailored according to the time required from participants and the cost of conducting the experiment.
Conclusion
This review highlights the practices and development of the state of the art in using NLP for automated fact-checking, emphasizing both the advances and the limitations of existing task formulations, dataset construction, and modeling approaches. We also partially discuss existing practices for applying these NLP tasks in real-world tools that assist human fact-checkers. In recent years, we have seen significant progress in automated fact-checking using NLP: a broad range of tasks, datasets, and modeling approaches have been introduced for different parts of the fact-checking pipeline, and with recent developments in transformers and large language models, model accuracy has improved across tasks. However, even models that achieve state-of-the-art results on existing benchmarks, such as FEVER and the CLEF CheckThat! tasks, may not yet be ready for practical adoption and deployment.
To address these limitations, we advocate the development of hybrid, HITL systems for fact-checking. As a starting point, we may wish to reorient the goals of existing NLP tasks from full automation towards decision support. In contrast with fully automated systems, hybrid systems involve humans in the loop and facilitate human-AI teaming (Bansal, Wu, Zhou, Fok, Nushi, Kamar, Ribeiro and Weld, 2021b; Bansal, Nushi, Kamar, Horvitz and Weld, 2021a). Such hybrid systems can help a) scale up human decision-making; b) augment machine learning capabilities with human accuracy; and c) mitigate unintended consequences of machine errors. Additionally, we need new benchmarks and evaluation practices that can measure how automated and hybrid systems improve downstream human accuracy (Smeros, Castillo and Aberer, 2021; Fan et al., 2020) and efficiency in fact-checking.
Figure 1: Fact-checking pipeline.
Despite the debate about whether attention is a reliable intelligibility method (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Serrano and Smith, 2019; Bibal, Cardon, Alfter, Wilkens, Wang, François and Watrin, 2022), it remains a popular method in existing deep neural network approaches to fact-checking, and attention-based explanations are provided in various forms.
Some fact-checking works use a white-box or inherently interpretable model for fact-checking. Nguyen et al. (2018b,a) utilize a probabilistic graphical model and build an interactive interpretable model for fact-checking where users are allowed to directly override model decisions. Kotonya, Spooner, Magazzeni and Toni (2021) propose an interpretable graph neural network for interpretable fact-checking on the FEVEROUS dataset (Aly et al., 2021).
Figure 2: Confusion Matrix for User Predictions vs. Model Predictions with respect to ground truth (gold). We assume model predictions are provided to the user, who then decides whether to accept or reject the model prediction. The top-left quadrant (Both) covers cases where users correctly follow model predictions. The top-right quadrant (Model) denotes cases where the model is correct but users mistakenly reject the model decision. The bottom-left quadrant (User) denotes cases where users correctly reject erroneous model predictions. The bottom-right quadrant (Neither) denotes cases where users incorrectly accept erroneous model predictions. Quantifying user vs. model predictions in this manner enables measurement of calibrated trust: how often users abide by correct model decisions and override erroneous model decisions.
Footnotes
https://disinformationindex.org/
https://en.wikipedia.org/wiki/Information_literacy
Some of these datasets, such as the CheckThat! datasets, are partially crowd and partially expert annotated.
https://fever.ai/2018/task.html
6 https://fever.ai/task.html
7 https://meedan.com/check
8 https://toolbox.google.com/factcheck/explorer
9 https://www.factcheck.org/our-process/
10 https://leadstories.com/how-we-work.html
11 https://www.factcheck.org/2021/10/oecd-data-conflict-with-bidens-educational-attainment-claim/ In this fact-check, fact-checkers decompose what President Biden meant by "advanced economies" and "young people"; the approach to defining these two terms directly influences their fact-checking results.
12 http://goodsystems.utexas.edu/
Acknowledgements
This research was supported in part by the Knight Foundation, the Micron Foundation, Wipro, and by Good Systems 12, a UT Austin Grand Challenge to develop responsible AI technologies. The statements made herein are solely the opinions of the authors and do not reflect the views of the sponsoring agencies.
Squash report card: Improvements during State of the Union ... and how humans will make our AI smarter -Duke Reporters' Lab. B Adair, Adair, B., 2020. Squash report card: Improvements during State of the Union ... and how hu- mans will make our AI smarter - Duke Reporters' Lab. URL: https://reporterslab.org/ squash-report-card-improvements-during-state-of-the-union-and-how-humans-will-make-our-ai-smarter/.
A lesson in automated journalism: Bring back the humans | nieman journalism lab. B Adair, M Stencel, Accessed on 02/08/2022Adair, B., Stencel, M., 2020. A lesson in automated journalism: Bring back the humans | nieman journalism lab. https://www.niemanlab.org/ 2020/07/a-lesson-in-automated-journalism-bring-back-the-humans/. (Accessed on 02/08/2022).
Explainable fact checking with probabilistic answer set programming. N Ahmadi, J Lee, P Papotti, M Saeed, Conference on Truth and Trust Online. Ahmadi, N., Lee, J., Papotti, P., Saeed, M., 2019. Explainable fact checking with probabilistic answer set programming, in: Conference on Truth and Trust Online.
Rulehub: A public corpus of rules for knowledge graphs. N Ahmadi, T T D Truong, L H M Dao, S Ortona, P Papotti, Journal of Data and Information Quality (JDIQ). 12Ahmadi, N., Truong, T.T.D., Dao, L.H.M., Ortona, S., Papotti, P., 2020. Rulehub: A public corpus of rules for knowledge graphs. Journal of Data and Information Quality (JDIQ) 12, 1-22.
Detection of context-varying rumors on twitter through deep learning. M Ahsan, M Kumari, T Sharma, Int. J. Adv. Sci. Technol. 128Ahsan, M., Kumari, M., Sharma, T., 2019. Detection of context-varying rumors on twitter through deep learning. Int. J. Adv. Sci. Technol 128, 45-58.
Simple open stance classification for rumour analysis. A Aker, L Derczynski, K Bontcheva, Proceedings of the International Conference Recent Advances in Natural Language Processing. the International Conference Recent Advances in Natural Language ProcessingAker, A., Derczynski, L., Bontcheva, K., 2017. Simple open stance classification for rumour analysis, in: Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pp. 31-39.
Fighting the covid-19 infodemic in social media: A holistic perspective and a call to arms. F Alam, F Dalvi, S Shaar, N Durrani, H Mubarak, A Nikolov, G Da San Martino, A Abdelali, H Sajjad, K Darwish, P Nakov, ICWSMAlam, F., Dalvi, F., Shaar, S., Durrani, N., Mubarak, H., Nikolov, A., Da San Martino, G., Abdelali, A., Sajjad, H., Darwish, K., Nakov, P., 2021a. Fighting the covid-19 infodemic in social media: A holistic perspective and a call to arms, in: ICWSM.
Fighting the COVID-19 infodemic: Modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society. F Alam, S Shaar, F Dalvi, H Sajjad, A Nikolov, H Mubarak, G Da San Martino, A Abdelali, N Durrani, K Darwish, A Al-Homaid, W Zaghouani, T Caselli, G Danoe, F Stolk, B Bruntink, P Nakov, 10.18653/v1/2021.findings-emnlp.56Findings of the Association for Computational Linguistics: EMNLP 2021. Punta Cana, Dominican RepublicAssociation for Computational LinguisticsAlam, F., Shaar, S., Dalvi, F., Sajjad, H., Nikolov, A., Mubarak, H., Da San Martino, G., Abdelali, A., Durrani, N., Darwish, K., Al-Homaid, A., Zaghouani, W., Caselli, T., Danoe, G., Stolk, F., Bruntink, B., Nakov, P., 2021b. Fighting the COVID-19 infodemic: Modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society, in: Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic. pp. 611-649. URL: https://aclanthology. org/2021.findings-emnlp.56, doi:10.18653/v1/2021.findings-emnlp.56.
Where is your evidence: Improving fact-checking by justification modeling. T Alhindi, S Petridis, S Muresan, Alhindi, T., Petridis, S., Muresan, S., 2018. Where is your evidence: Improving fact-checking by justification modeling.
Scaling up fact-checking using the wisdom of crowds. J Allen, A A Arechar, G Pennycook, D G Rand, 10.31234/osf.io/9qdzaPreprint atAllen, J., Arechar, A.A., Pennycook, G., Rand, D.G., 2020. Scaling up fact-checking using the wisdom of crowds. Preprint at https://doi. org/10.31234/osf. io/9qdza .
An ontological analysis of misinformation in online. I Alsmadi, I Alazzam, M A Alramahi, 10.48550/arxiv.2102.11362arXiv:2102.11362Alsmadi, I., Alazzam, I., AlRamahi, M.A., 2021. An ontological analysis of misinformation in online social networks URL: http://arxiv.org/ abs/2102.11362, doi:10.48550/arxiv.2102.11362, arXiv:2102.11362.
Feverous: Fact extraction and verification over unstructured and structured information. R Aly, Z Guo, M S Schlichtkrull, J Thorne, A Vlachos, C Christodoulopoulos, O Cocarascu, A Mittal, Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Round 1Aly, R., Guo, Z., Schlichtkrull, M.S., Thorne, J., Vlachos, A., Christodoulopoulos, C., Cocarascu, O., Mittal, A., 2021. Feverous: Fact extraction and verification over unstructured and structured information, in: Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
Power to the people: The role of humans in interactive machine learning. S Amershi, M Cakmak, W B Knox, T Kulesza, 35Ai MagazineAmershi, S., Cakmak, M., Knox, W.B., Kulesza, T., 2014. Power to the people: The role of humans in interactive machine learning. Ai Magazine 35, 105-120.
The challenges of online fact checking: how technology can (and can't) help -full fact. P Arnold, Arnold, P., 2020. The challenges of online fact checking: how technology can (and can't) help -full fact. https://fullfact.org/blog/2020/ dec/the-challenges-of-online-fact-checking-how-technology-can-and-cant-help/. (Accessed on 09/11/2021).
Overview of the clef-2018 checkthat! lab on automatic identification and verification of political claims. P Atanasova, A Barrón-Cedeño, T Elsayed, R Suwaileh, W Zaghouani, S Kyuchukov, G Da San Martino, P Nakov, arXiv:1808.05542Check-worthiness. 1arXiv preprintAtanasova, P., Barrón-Cedeño, A., Elsayed, T., Suwaileh, R., Zaghouani, W., Kyuchukov, S., Da San Martino, G., Nakov, P., 2018. Overview of the clef-2018 checkthat! lab on automatic identification and verification of political claims. task 1: Check-worthiness. arXiv preprint arXiv:1808.05542 .
Overview of the clef-2019 checkthat! lab: Automatic identification and verification of claims. P Atanasova, P Nakov, G Karadzhov, M Mohtarami, G Da San Martino, Check-worthiness. CLEF (Working Notes. 12380Atanasova, P., Nakov, P., Karadzhov, G., Mohtarami, M., Da San Martino, G., 2019a. Overview of the clef-2019 checkthat! lab: Automatic identification and verification of claims. task 1: Check-worthiness. CLEF (Working Notes) 2380.
Automatic fact-checking using context and discourse information. P Atanasova, P Nakov, L Màrquez, A Barrón-Cedeño, G Karadzhov, T Mihaylova, M Mohtarami, J Glass, Journal of Data and Information Quality (JDIQ). 11Atanasova, P., Nakov, P., Màrquez, L., Barrón-Cedeño, A., Karadzhov, G., Mihaylova, T., Mohtarami, M., Glass, J., 2019b. Automatic fact-checking using context and discourse information. Journal of Data and Information Quality (JDIQ) 11, 1-27.
Generating fact checking explanations. P Atanasova, J G Simonsen, C Lioma, I Augenstein, ACLAtanasova, P., Simonsen, J.G., Lioma, C., Augenstein, I., 2020a. Generating fact checking explanations, in: ACL.
Generating label cohesive and well-formed adversarial claims. P Atanasova, D Wright, I Augenstein, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Atanasova, P., Wright, D., Augenstein, I., 2020b. Generating label cohesive and well-formed adversarial claims, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 3168-3177.
Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims. I Augenstein, C Lioma, D Wang, L C Lima, C Hansen, C Hansen, J Simonsen, EMNLPAugenstein, I., Lioma, C., Wang, D., Lima, L.C., Hansen, C., Hansen, C., Simonsen, J., 2019. Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims, in: EMNLP.
Is the most accurate ai the best teammate? optimizing ai for teamwork. G Bansal, B Nushi, E Kamar, E Horvitz, D S Weld, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceBansal, G., Nushi, B., Kamar, E., Horvitz, E., Weld, D.S., 2021a. Is the most accurate ai the best teammate? optimizing ai for teamwork, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 11405-11414.
Beyond accuracy: The role of mental models in human-ai team performance. G Bansal, B Nushi, E Kamar, W S Lasecki, D S Weld, E Horvitz, Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. the AAAI Conference on Human Computation and CrowdsourcingBansal, G., Nushi, B., Kamar, E., Lasecki, W.S., Weld, D.S., Horvitz, E., 2019. Beyond accuracy: The role of mental models in human-ai team performance, in: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, pp. 2-11.
Does the whole exceed its parts? the effect of ai explanations on complementary team performance. G Bansal, T Wu, J Zhou, R Fok, B Nushi, E Kamar, M T Ribeiro, D Weld, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. the 2021 CHI Conference on Human Factors in Computing SystemsBansal, G., Wu, T., Zhou, J., Fok, R., Nushi, B., Kamar, E., Ribeiro, M.T., Weld, D., 2021b. Does the whole exceed its parts? the effect of ai explanations on complementary team performance, in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-16.
Fbmultilingmisinfo: Challenging large-scale multilingual benchmark for misinformation detection. G Barnabò, F Siciliano, C Castillo, S Leonardi, P Nakov, G Da San Martino, F Silvestri, 2022 International Joint Conference on Neural Networks (IJCNN). IEEEBarnabò, G., Siciliano, F., Castillo, C., Leonardi, S., Nakov, P., Da San Martino, G., Silvestri, F., 2022. Fbmultilingmisinfo: Challenging large-scale multilingual benchmark for misinformation detection, in: 2022 International Joint Conference on Neural Networks (IJCNN), IEEE. pp. 1-8.
Overview of checkthat! 2020: Automatic identification and verification of claims in social media. A Barrón-Cedeño, T Elsayed, P Nakov, G Da San Martino, M Hasanain, R Suwaileh, F Haouari, N Babulkov, B Hamdan, A Nikolov, International Conference of the Cross-Language Evaluation Forum for European Languages. SpringerBarrón-Cedeño, A., Elsayed, T., Nakov, P., Da San Martino, G., Hasanain, M., Suwaileh, R., Haouari, F., Babulkov, N., Hamdan, B., Nikolov, A., et al., 2020. Overview of checkthat! 2020: Automatic identification and verification of claims in social media, in: International Conference of the Cross-Language Evaluation Forum for European Languages, Springer. pp. 215-236.
Examining the digital toolsets of journalists reporting on disinformation. A Beers, Melinda Haughey, M Arif, A Starbird, K , Proceedings of Computation + Journalism 2020 (C+J '20). Computation + Journalism 2020 (C+J '20)New York, NY, USAACM5Beers, A., Haughey, Melinda, M., Arif, A., Starbird, K., 2020. Examining the digital toolsets of journalists reporting on disinformation, in: Proceedings of Computation + Journalism 2020 (C+J '20). ACM, New York, NY, USA,., p. 5. URL: https://cpb-us-w2.wpmucdn.com/ express.northeastern.edu/dist/d/53/files/2020/02/CJ_2020_paper_50.pdf.
Bendersky, M., Metzler, D., Croft, W.B., 2012. Effective query formulation with multiple information sources, in: Proceedings of the fifth ACM international conference on Web search and data mining, pp. 443-452.
Bhuiyan, M.M., Zhang, A.X., Sehat, C.M., Mitra, T., 2020. Investigating differences in crowdsourced news credibility assessment: Raters, tasks, and expert criteria. Proceedings of the ACM on Human-Computer Interaction 4, 1-26.
Bibal, A., Cardon, R., Alfter, D., Wilkens, R., Wang, X., François, T., Watrin, P., 2022. Is attention explanation? An introduction to the debate, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Dublin, Ireland. pp. 3889-3900. URL: https://aclanthology.org/2022.acl-long.269, doi:10.18653/v1/2022.acl-long.269.
Borel, B., 2016. The Chicago guide to fact-checking. University of Chicago Press.
Bouziane, M., Perrin, H., Cluzeau, A., Mardas, J., Sadeq, A., 2020. Team buster.ai at checkthat! 2020: Insights and recommendations to improve fact-checking, in: CLEF (Working Notes).
Brand, E., Roitero, K., Soprano, M., Rahimi, A., Demartini, G., 2018. A neural model to jointly predict and explain truthfulness of statements. ACM Journal of Data and Information Quality (JDIQ).
Cai, C.J., Winter, S., Steiner, D., Wilcox, L., Terry, M., 2019. "Hello AI": Uncovering the onboarding needs of medical practitioners for human-ai collaborative decision-making. Proceedings of the ACM on Human-Computer Interaction 3, 1-24.
Chen, E., Jain, A., 2013. Improving Twitter search with real-time human computation. Engineering Blog 8, 2013.
Chen, J., Sriram, A., Choi, E., Durrett, G., 2022. Generating Literal and Implied Subquestions to Fact-check Complex Claims. URL: http://arxiv.org/abs/2205.06938, arXiv:2205.06938.
Chen, S., Khashabi, D., Yin, W., Callison-Burch, C., Roth, D., 2019a. Seeing things from a different angle: Discovering diverse perspectives about claims, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota. pp. 542-557. URL: https://aclanthology.org/N19-1053, doi:10.18653/v1/N19-1053.
Chen, W., Wang, H., Chen, J., Zhang, Y., Wang, H., Li, S., Zhou, X., Wang, W.Y., 2019b. Tabfact: A large-scale dataset for table-based fact verification, in: International Conference on Learning Representations.
Cheng, H.F., Wang, R., Zhang, Z., O'Connell, F., Gray, T., Harper, F.M., Zhu, H., 2019. Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12.
Cimolino, G., Graham, T.N., 2022. Two heads are better than one: A dimension space for unifying human and artificial intelligence in shared control, in: CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA. URL: https://doi.org/10.1145/3491102.3517610, doi:10.1145/3491102.3517610.
Cinelli, M., Pelicon, A., Mozetič, I., Quattrociocchi, W., Novak, P.K., Zollo, F., 2021a. Dynamics of online hate and misinformation. Scientific Reports 11, 1-12.
Cinelli, M., Pelicon, A., Mozetič, I., Quattrociocchi, W., Novak, P.K., Zollo, F., 2021b. Online hate: Behavioural dynamics and relationship with misinformation. arXiv preprint arXiv:2105.14005.
Clarke, C.L., Rizvi, S., Smucker, M.D., Maistro, M., Zuccon, G., 2020a. Overview of the TREC 2020 health misinformation track, in: TREC.
Clarke, C.L., Rizvi, S., Smucker, M.D., Maistro, M., Zuccon, G., 2020b. Overview of the TREC 2020 health misinformation track, in: TREC.
Da San Martino, G., Cresci, S., Barrón-Cedeño, A., Yu, S., Pietro, R.D., Nakov, P., 2020. A survey on computational propaganda detection, in: IJCAI.
Dagan, I., Dolan, B., Magnini, B., Roth, D., 2010. Recognizing textual entailment: Rational, evaluation and approaches-erratum. Natural Language Engineering 16, 105-105.
Das, A., Gupta, C., Kovatchev, V., Lease, M., Li, J.J., 2022. Prototex: Explaining model decisions with prototype tensors, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2986-2997.
Demartini, G., 2015. Hybrid human-machine information systems: Challenges and opportunities. Computer Networks 90, 5-13.
Demartini, G., Difallah, D.E., Cudré-Mauroux, P., 2012. Zencrowd: Leveraging probabilistic reasoning and crowdsourcing techniques for large-scale entity linking, in: Proceedings of the 21st international conference on World Wide Web, pp. 469-478.
Demartini, G., Mizzaro, S., Spina, D., 2020. Human-in-the-loop artificial intelligence for fighting online misinformation: Challenges and opportunities. The Bulletin of the Technical Committee on Data Engineering 43.
Demartini, G., Trushkowsky, B., Kraska, T., Franklin, M.J., Berkeley, U., 2013. Crowdq: Crowdsourced query understanding, in: CIDR.
Dhole, K.D., Gangal, V., Gehrmann, S., Gupta, A., Li, Z., Mahamood, S., Mahendiran, A., Mille, S., Srivastava, A., Tan, S., et al., 2021. Nlaugmenter: A framework for task-sensitive natural language augmentation. arXiv preprint arXiv:2112.02721.
Diggelmann, T., Boyd-Graber, J., Bulian, J., Ciaramita, M., Leippold, M., 2020. Climate-fever: A dataset for verification of real-world climate claims.
Dong, E., Du, H., Gardner, L., 2020. An interactive web-based dashboard to track covid-19 in real time. The Lancet Infectious Diseases 20, 533-534.
Efron, B., Tibshirani, R., 1985. The bootstrap method for assessing statistical accuracy. Behaviormetrika 12, 1-35.
Ekstrand, M.D., Das, A., Burke, R., Diaz, F., et al., 2022. Fairness in information access systems. Foundations and Trends® in Information Retrieval 16, 1-177.
Elsayed, T., Nakov, P., Barrón-Cedeño, A., Hasanain, M., Suwaileh, R., Da San Martino, G., Atanasova, P., 2019. Overview of the CLEF-2019 checkthat! lab: Automatic identification and verification of claims, in: International Conference of the Cross-Language Evaluation Forum for European Languages, Springer. pp. 301-321.
Enayet, O., El-Beltagy, S.R., 2017. NileTMRG at SemEval-2017 task 8: Determining rumour and veracity support for rumours on Twitter, in: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Association for Computational Linguistics, Vancouver, Canada. pp. 470-474. URL: https://aclanthology.org/S17-2082, doi:10.18653/v1/S17-2082.
Fan, A., Piktus, A., Petroni, F., Wenzek, G., Saeidi, M., Vlachos, A., Bordes, A., Riedel, S., 2020. Generating fact checking briefs, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online. pp. 7147-7161. URL: https://aclanthology.org/2020.emnlp-main.580, doi:10.18653/v1/2020.emnlp-main.580.
Farinneya, P., Abdollah Pour, M.M., Hamidian, S., Diab, M., 2021. Active learning for rumor identification on social media, in: Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic. pp. 4556-4565. URL: https://aclanthology.org/2021.findings-emnlp.387, doi:10.18653/v1/2021.findings-emnlp.387.
Ferreira, W., Vlachos, A., 2016. Emergent: A novel data-set for stance classification, in: NAACL.
Gad-Elrab, M.H., Stepanova, D., Urbani, J., Weikum, G., 2019. Exfakt: A framework for explaining facts over knowledge graphs and text, in: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 87-95.
Gold, D., Kovatchev, V., Zesch, T., 2019. Annotating and analyzing the interactions between meaning relations, in: Proceedings of the 13th Linguistic Annotation Workshop, pp. 26-36.
Gorrell, G., Kochkina, E., Liakata, M., Aker, A., Zubiaga, A., Bontcheva, K., Derczynski, L., 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours, in: Proceedings of the 13th International Workshop on Semantic Evaluation, pp. 845-854.
Graves, L., 2018a. Understanding the promise and limits of automated fact-checking.
Graves, L., 2017. Anatomy of a fact check: Objective practice and the contested epistemology of fact checking. Communication, Culture & Critique 10, 518-537.
Graves, L., 2018b. Boundaries not drawn: Mapping the institutional roots of the global fact-checking movement. Journalism Studies 19, 613-631.
Graves, L., Amazeen, M.A., 2019. Fact-checking as idea and practice in journalism, in: Oxford Research Encyclopedia of Communication.
Gruppi, M., Horne, B.D., Adalı, S., 2021. Nela-gt-2020: A large multi-labelled news dataset for the study of misinformation in news articles. arXiv preprint arXiv:2102.04567.
Guo, H., Cao, J., Zhang, Y., Guo, J., Li, J., 2018. Rumor detection with hierarchical social attention network, in: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 943-951.
Guo, Z., Schlichtkrull, M., Vlachos, A., 2022. A Survey on Automated Fact-Checking. Transactions of the Association for Computational Linguistics 10, 178-206. URL: https://doi.org/10.1162/tacl_a_00454, doi:10.1162/tacl_a_00454.
Gupta, A., Srikumar, V., 2021. X-fact: A new benchmark dataset for multilingual fact checking, in: ACL/IJCNLP.
Gupta, V., Mehta, M., Nokhiz, P., Srikumar, V., 2020. INFOTABS: Inference on tables as semi-structured data, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online. pp. 2309-2324. URL: https://aclanthology.org/2020.acl-main.210, doi:10.18653/v1/2020.acl-main.210.
Hanselowski, A., Avinesh, P.V.S., Schiller, B., Caspelherr, F., Chaudhuri, D., Meyer, C.M., Gurevych, I., 2018. A retrospective analysis of the fake news challenge stance-detection task, in: COLING.
Hanselowski, A., Stab, C., Schulz, C., Li, Z., Gurevych, I., 2019. A richly annotated corpus for different tasks in automated fact-checking, in: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 493-503.
Hardalov, M., Arora, A., Nakov, P., Augenstein, I., 2021. A survey on stance detection for mis- and disinformation identification. ArXiv abs/2103.00242.
Hartvigsen, T., Gabriel, S., Palangi, H., Sap, M., Ray, D., Kamar, E., 2022. ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Dublin, Ireland. pp. 3309-3326. URL: https://aclanthology.org/2022.acl-long.234, doi:10.18653/v1/2022.acl-long.234.
Hasanain, M., Elsayed, T., 2020. bigir at checkthat! 2020: Multilingual bert for ranking arabic tweets by check-worthiness, in: CLEF (Working Notes).
Hasanain, M., Elsayed, T., 2021. Studying effectiveness of Web search for fact checking. Journal of the Association for Information Science and Technology. URL: https://onlinelibrary.wiley.com/doi/full/10.1002/asi.24577, doi:10.1002/asi.24577.
Hasanain, M., Elsayed, T., 2022. Studying effectiveness of web search for fact checking. Journal of the Association for Information Science and Technology 73, 738-751.
Hase, P., Bansal, M., 2020. Evaluating explainable ai: Which algorithmic explanations help users predict model behavior?, in: ACL.
Hassan, N., Arslan, F., Li, C., Tremayne, M., 2017a. Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster, in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1803-1812.
Hassan, N., Li, C., Tremayne, M., 2015. Detecting check-worthy factual claims in presidential debates, in: Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 1835-1838.
Hassan, N., Zhang, G., Arslan, F., Caraballo, J., Jimenez, D., Gawsane, S., Hasan, S., Joseph, M., Kulkarni, A., Nayak, A.K., et al., 2017b. Claimbuster: The first-ever end-to-end fact-checking system. Proceedings of the VLDB Endowment 10, 1945-1948.
Horne, B.D., Khedr, S., Adali, S., 2018. Sampling the news producers: A large news and feature data set for the study of the complex media landscape, in: Twelfth International AAAI Conference on Web and Social Media.
Hsu, C.C., Tan, C., 2021. Decision-focused summarization, in: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 117-132.
Jacovi, A., Goldberg, Y., 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online. pp. 4198-4205. URL: https://aclanthology.org/2020.acl-main.386, doi:10.18653/v1/2020.acl-main.386.
Jacovi, A., Goldberg, Y., 2021. Aligning Faithful Interpretations with their Social Attribution. Transactions of the Association for Computational Linguistics 9, 294-310. URL: https://doi.org/10.1162/tacl_a_00367, doi:10.1162/tacl_a_00367.
Jain, K., Garg, A., Jain, S., 2021. Reconstructing diffusion model for virality detection in news spread networks, in: Research Anthology on Fake News, Political Warfare, and Combatting the Spread of Misinformation. IGI Global, pp. 98-111.
Jain, S., Wallace, B.C., 2019. Attention is not explanation, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3543-3556.
Jiang, Y., Bordia, S., Zhong, Z., Dognin, C., Singh, M., Bansal, M., 2020. Hover: A dataset for many-hop fact extraction and claim verification, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pp. 3441-3460.
Joachims, T., Radlinski, F., 2007. Search engines that learn from implicit feedback. Computer 40, 34-40.
Jones, M.O., 2019. The gulf information war | Propaganda, fake news, and fake trends: The weaponization of Twitter bots in the gulf crisis. International Journal of Communication 13, 27.
Juneja, P., Mitra, T., 2022. Human and technological infrastructures of fact-checking. arXiv preprint arXiv:2205.10894.
Karagiannis, G., Saeed, M., Papotti, P., Trummer, I., 2020. Scrutinizer: A mixed-initiative approach to large-scale, data-driven claim verification. Proc. VLDB Endow. 13, 2508-2521. URL: https://doi.org/10.14778/3407790.3407841, doi:10.14778/3407790.3407841.
Kaufman, R.A., Haupt, M.R., Dow, S.P., 2022. Who's in the crowd matters: Cognitive factors and beliefs predict misinformation assessment accuracy. Proceedings of the ACM on Human-Computer Interaction 6, 1-18.
Kazemi, A., Gaffney, D., Garimella, K., Hale, S.A., 2021a. Claim matching beyond English to scale global fact-checking, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), pp. 4504-4517. URL: http://arxiv.org/abs/2106.00853, doi:10.18653/v1/2021.acl-long.347.
Kazemi, A., Garimella, K., Shahi, G.K., Gaffney, D., Hale, S.A., 2021b. Tiplines to combat misinformation on encrypted platforms: A case study of the 2019 Indian election on WhatsApp. ArXiv abs/2106.04726.
Kiela, D., Firooz, H., Mohan, A., Goswami, V., Singh, A., Ringshia, P., Testuggine, D., 2020. The hateful memes challenge: Detecting hate speech in multimodal memes, in: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (Eds.), Advances in Neural Information Processing Systems, Curran Associates, Inc. pp. 2611-2624. URL: https://proceedings.neurips.cc/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf.
Kim, J., Choi, K.S., 2020. Unsupervised fact checking by counter-weighted positive and negative evidential paths in a knowledge graph, in: Proceedings of the 28th International Conference on Computational Linguistics, pp. 1677-1686.
Kochkina, E., Liakata, M., Augenstein, I., 2017. Turing at SemEval-2017 task 8: Sequential approach to rumour stance classification with branch-LSTM, in: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Association for Computational Linguistics, Vancouver, Canada. pp. 475-480. URL: https://aclanthology.org/S17-2083, doi:10.18653/v1/S17-2083.
Konstantinovskiy, L., Price, O., Babakar, M., Zubiaga, A., 2021. Toward automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection. Digital Threats: Research and Practice 2, 1-16.
Kotonya, N., Spooner, T., Magazzeni, D., Toni, F., 2021. Graph reasoning with context-aware linearization for interpretable fact extraction and verification, in: Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Dominican Republic. pp. 21-30. URL: https://aclanthology.org/2021.fever-1.3, doi:10.18653/v1/2021.fever-1.3.
Kotonya, N., Toni, F., 2020a. Explainable automated fact-checking: A survey, in: COLING.
Kotonya, N., Toni, F., 2020b. Explainable automated fact-checking for public health claims, in: EMNLP.
Kovatchev, V., Chatterjee, T., Govindarajan, V.S., Chen, J., Choi, E., Chronis, G., Das, A., Erk, K., Lease, M., Li, J.J., et al., 2022. longhorns at dadc 2022: How many linguists does it take to fool a question answering model? A systematic approach to adversarial attacks, in: Proceedings of the First Workshop on Dynamic Adversarial Data Collection, pp. 41-52.
Kovatchev, V., Smith, P., Lee, M., Devine, R., 2021. Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1196-1206.
Kovatchev, V., Smith, P., Lee, M., Traynor, I.G., Aguilera, I.L., Devine, R., 2020. "What is on your mind?" Automated scoring of mindreading in childhood and early adolescence, in: Proceedings of the 28th International Conference on Computational Linguistics, pp. 6217-6228.
Kutlu, M., McDonnell, T., Elsayed, T., Lease, M., 2020. Annotator rationales for labeling tasks in crowdsourcing. Journal of Artificial Intelligence Research 69, 143-189.
La Barbera, D., Roitero, K., Demartini, G., Mizzaro, S., Spina, D., 2020. Crowdsourcing truthfulness: The impact of judgment scale and assessor bias. Advances in Information Retrieval 12036, 207.
Lai, V., Chen, C., Liao, Q.V., Smith-Renner, A., Tan, C., 2021. Towards a science of human-ai decision making: A survey of empirical studies. arXiv preprint arXiv:2112.11471.
Lawrence, J., Reed, C., 2020. Argument mining: A survey. Computational Linguistics 45, 765-818.
Lease, M., 2018. Fact checking and information retrieval, in: DESIRES, pp. 97-98.
Lease, M., 2020. Designing human-ai partnerships to combat misinformation.
LeBeau, C., 2017. Entitled to the facts: A fact-checking role for librarians. Reference and User Services Quarterly 57, 76-78.
Lee, J., Moray, N., 1992. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35, 1243-1270.
Lee, J.D., See, K.A., 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 50-80.
Lee, N., Bang, Y., Madotto, A., Fung, P., 2021. Towards few-shot fact-checking via perplexity, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Online. pp. 1971-1981. URL: https://aclanthology.org/2021.naacl-main.158, doi:10.18653/v1/2021.naacl-main.158.
Lee, N., Li, B.Z., Wang, S., Yih, W.t., Ma, H., Khabsa, M., 2020. Language models as fact checkers?, in: Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Online. pp. 36-41. URL: https://aclanthology.org/2020.fever-1.5, doi:10.18653/v1/2020.fever-1.5.
Leskovec, J., Backstrom, L., Kleinberg, J., 2009. Meme-tracking and the dynamics of the news cycle, in: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 497-506.
Lewandowsky, S., Ecker, U.K., Seifert, C.M., Schwarz, N., Cook, J., 2012. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13, 106-131.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L., 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880.
Li, T., Fang, L., Lou, J.G., Li, Z., Zhang, D., 2021. AnaSearch: Extract, retrieve and visualize structured results from unstructured text for analytical queries, in: WSDM 2021 - Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pp. 906-909. URL: https://doi.org/10.1145/3437963.3441694, doi:10.1145/3437963.3441694.
Li, Y., Gao, J., Meng, C., Li, Q., Su, L., Zhao, B., Fan, W., Han, J., 2016. A survey on truth discovery. ACM SIGKDD Explorations Newsletter 17, 1-16.
Lillie, A.E., Middelboe, E.R., Derczynski, L., 2019. Joint rumour stance and veracity prediction, in: Proceedings of the 22nd Nordic Conference on Computational Linguistics, pp. 208-221.
Liu, A., Swayamdipta, S., Smith, N.A., Choi, Y., 2022. Wanli: Worker and ai collaboration for natural language inference dataset creation.
Lu, Y.J., Li, C.T., 2020. Gcan: Graph-aware co-attention networks for explainable fake news detection on social media, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 505-514.
Ma, J., Gao, W., Joty, S., Wong, K.F., 2019. Sentence-level evidence embedding for claim verification with hierarchical attention networks, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy. pp. 2561-2571. URL: https://aclanthology.org/P19-1244, doi:10.18653/v1/P19-1244.
Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B.J., Wong, K.F., Cha, M., 2016. Detecting rumors from microblogs with recurrent neural networks, in: IJCAI.
Ma, J., Gao, W., Wong, K.F., 2018. Rumor detection on Twitter with tree-structured recursive neural networks, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Melbourne, Australia. pp. 1980-1989. URL: https://aclanthology.org/P18-1184, doi:10.18653/v1/P18-1184.
Marcus, J., 1992. Mesoamerican writing systems: Propaganda, myth, and history in four ancient civilizations. Princeton University Press, Princeton.
Martinez-Rico, J., Martinez-Romo, J., Araujo, L., 2021. Nlp&ir@uned at checkthat! 2021: Check-worthiness estimation and fake news detection using transformer models, in: Faggioli et al. [33].
Micallef, N., Armacost, V., Memon, N., Patil, S., 2022. True or False: Studying the Work Practices of Professional Fact-Checkers. Proceedings of the ACM on Human-Computer Interaction 6, 1-44. URL: https://doi.org/10.1145/3512974, doi:10.1145/3512974.
Mihalcea, R., Strapparava, C., 2009. The lie detector: Explorations in the automatic recognition of deceptive language, in: Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, Association for Computational Linguistics, Suntec, Singapore. pp. 309-312. URL: https://aclanthology.org/P09-2078.
Mihaylova, T., Nakov, P., Màrquez, L., Barrón-Cedeño, A., Mohtarami, M., Karadzhov, G., Glass, J., 2018. Fact checking in community forums, in: Thirty-Second AAAI Conference on Artificial Intelligence.
Miranda, S., Nogueira, D., Mendes, A., Vlachos, A., Secker, A., Garrett, R., Mitchel, J., Marinho, Z., 2019. Automated fact checking in the news room, in: The World Wide Web Conference, pp. 3579-3583.
Mohseni, S., Yang, F., Pentyala, S., Du, M., Liu, Y., Lupfer, N., Hu, X., Ji, S., Ragan, E., 2021. Machine learning explanations to prevent overtrust in fake news detection, in: Proceedings of the International AAAI Conference on Web and Social Media, pp. 421-431.
Molnar, C., 2020. Interpretable machine learning. Lulu.com.
Monti, F., Frasca, F., Eynard, D., Mannion, D., Bronstein, M.M., 2019. Fake news detection on social media using geometric deep learning. arXiv preprint arXiv:1902.06673.
Nakamura, K., Levy, S., Wang, W.Y., 2020. Fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection, in: Proceedings of the 12th Language Resources and Evaluation Conference, European Language Resources Association, Marseille, France. pp. 6149-6157. URL: https://aclanthology.org/2020.lrec-1.755.
Nakashole, N., Mitchell, T., 2014. Language-aware truth assessment of fact candidates, in: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1009-1019.
Nakov, P., Corney, D., Hasanain, M., Alam, F., Elsayed, T., Barrón-Cedeño, A., Papotti, P., Shaar, S., Da San Martino, G., 2021a. Automated fact-checking for assisting human fact-checkers, in: IJCAI.
Nakov, P., Da San Martino, G., Barrón-Cedeño, A., Zaghouani, W., Míguez, R., Alam, F., Caselli, T., Kutlu, M., Strub, J.M., Mandl, T., 2022. Checkthat! lab on fighting the covid-19 infodemic and fake news detection (proposal for a clef-2022 lab).
Nakov, P., Da San Martino, G., Elsayed, T., Barrón-Cedeño, A., Míguez, R., Shaar, S., Alam, F., Haouari, F., Hasanain, M., Babulkov, N., et al., 2021b. The clef-2021 checkthat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news, in: ECIR (2).
Neely-Sardon, A., Tignor, M., 2018. Focus on the facts: A news and information literacy instructional program. The Reference Librarian 59, 108-121.
Neumann, T., De-Arteaga, M., Fazelpour, S., 2022. Justice in misinformation detection systems: An analysis of algorithms, stakeholders, and potential harms, in: 2022 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, New York, NY, USA. pp. 1504-1515. URL: https://doi.org/10.1145/3531146.3533205, doi:10.1145/3531146.3533205.
Nguyen, A.T., Kharosekar, A., Krishnan, S., Krishnan, S., Tate, E., Wallace, B.C., Lease, M., 2018a. Believe it or not: Designing a human-ai partnership for mixed-initiative fact-checking, in: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, pp. 189-199.
Nguyen, A.T., Kharosekar, A., Lease, M., Wallace, B.C., 2018b. An interpretable joint graphical model for fact-checking from crowds, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pp. 1511-1518.
Nguyen, T.T., Weidlich, M., Yin, H., Zheng, B., Nguyen, Q.H., Nguyen, Q.V.H., 2020. Factcatch: Incremental pay-as-you-go fact checking with minimal user effort, in: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2165-2168.
Nie, Y., Wang, S., Bansal, M., 2019. Revealing the importance of semantic retrieval for machine reading at scale, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, Hong Kong, China. pp. 2553-2566. URL: https://aclanthology.org/D19-1258, doi:10.18653/v1/D19-1258.
Niewiński, P., Pszona, M., Janicka, M., 2019. Gem: Generative enhanced model for adversarial attacks, in: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pp. 20-26.
Nørregaard, J., Horne, B.D., Adalı, S., 2019. Nela-gt-2018: A large multi-labelled news dataset for the study of misinformation in news articles, in: Proceedings of the International AAAI Conference on Web and Social Media, pp. 630-638.
Oshikawa, R., Qian, J., Wang, W.Y., 2020. A survey on natural language processing for fake news detection, in: LREC.
Popat, K., Mukherjee, S., Yates, A., Weikum, G., 2018. Declare: Debunking fake news and false claims using evidence-aware deep learning, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 22-32.
Potthast, M., Kiesel, J., et al., 2018. A stylometric inquiry into hyperpartisan and fake news, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL, Stroudsburg, PA. pp. 231-240.
Clickbait detection. M Potthast, S Köpsel, B Stein, M Hagen, European Conference on Information Retrieval. SpringerPotthast, M., Köpsel, S., Stein, B., Hagen, M., 2016. Clickbait detection, in: European Conference on Information Retrieval, Springer. pp. 810-817.
Scientific claim verification with VerT5erini. R Pradeep, X Ma, R Nogueira, J Lin, Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis. the 12th International Workshop on Health Text Mining and Information AnalysisAssociation for Computational LinguisticsPradeep, R., Ma, X., Nogueira, R., Lin, J., 2021. Scientific claim verification with VerT5erini, in: Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, Association for Computational Linguistics, online. pp. 94-103. URL: https: //aclanthology.org/2021.louhi-1.11.
Rumor has it: Identifying misinformation in microblogs. V Qazvinian, E Rosengren, D Radev, Q Mei, Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. the 2011 Conference on Empirical Methods in Natural Language ProcessingQazvinian, V., Rosengren, E., Radev, D., Mei, Q., 2011. Rumor has it: Identifying misinformation in microblogs, in: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pp. 1589-1599.
Combining human and machine confidence in truthfulness assessment. Y Qu, D L Barbera, K Roitero, S Mizzaro, D Spina, G Demartini, ACM Journal of Data and Information Quality. JDIQQu, Y., Barbera, D.L., Roitero, K., Mizzaro, S., Spina, D., Demartini, G., 2021a. Combining human and machine confidence in truthfulness assessment. ACM Journal of Data and Information Quality (JDIQ) .
Human-in-the-loop systems for truthfulness: A study of human and machine confidence. Y Qu, K Roitero, S Mizzaro, D Spina, G Demartini, Qu, Y., Roitero, K., Mizzaro, S., Spina, D., Demartini, G., 2021b. Human-in-the-loop systems for truthfulness: A study of human and machine confidence .
Language models are unsupervised multitask learners. A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever, OpenAI blog 1, 9Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al., 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 9.
Rashkin, H., Choi, E., Jang, J.Y., Volkova, S., Choi, Y., 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2931-2937.
Ribeiro, M.T., Singh, S., Guestrin, C., 2018. Semantically equivalent adversarial rules for debugging NLP models, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 856-865.
Ribeiro, M.T., Wu, T., Guestrin, C., Singh, S., 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4902-4912.
Roitero, K., Soprano, M., Fan, S., Spina, D., Mizzaro, S., Demartini, G., 2020a. Can the crowd identify misinformation objectively? The effects of judgment scale and assessor's background, in: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 439-448.
Roitero, K., Soprano, M., Portelli, B., Spina, D., Della Mea, V., Serra, G., Mizzaro, S., Demartini, G., 2020b. The COVID-19 infodemic: Can the crowd judge recent misinformation objectively?, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 1305-1314.
Sarasua, C., Simperl, E., Noy, N.F., 2012. CrowdMap: Crowdsourcing ontology alignment with microtasks, in: International Semantic Web Conference, Springer, pp. 525-541.
Schuster, T., Fisch, A., Barzilay, R., 2021. Get your vitamin C! Robust fact verification with contrastive evidence, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Online, pp. 624-643. URL: https://aclanthology.org/2021.naacl-main.52, doi:10.18653/v1/2021.naacl-main.52.
Schuster, T., Schuster, R., Shah, D.J., Barzilay, R., 2020. The limitations of stylometry for detecting machine-generated fake news. Computational Linguistics 46, 499-510.
Serrano, S., Smith, N.A., 2019. Is attention interpretable?, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2931-2951.
Settles, B., 2009. Active learning literature survey.
Shaar, S., Alam, F., Da San Martino, G., Nakov, P., 2021a. Assisting the human fact-checkers: Detecting all previously fact-checked claims in a document. arXiv preprint arXiv:2109.07410.
Shaar, S., Hasanain, M., Hamdan, B., Ali, Z.S., Haouari, F., Nikolov, A., Kutlu, M., Kartal, Y.S., Alam, F., Da San Martino, G., Barrón-Cedeño, A., Míguez, R., Beltrán, J., Elsayed, T., Nakov, P., 2021b. Overview of the CLEF-2021 CheckThat! lab task 1 on check-worthiness estimation in tweets and political debates, in: CLEF.
Shaar, S., Martino, G.D.S., Babulkov, N., Nakov, P., 2020. That is a known lie: Detecting previously fact-checked claims, in: ACL.
Shabani, S., Charlesworth, Z., Sokhn, M., Schuldt, H., 2021. SAMS: Human-in-the-loop approach to combat the sharing of digital misinformation, in: CEUR Workshop Proc.
Shao, C., Ciampaglia, G.L., Flammini, A., Menczer, F., 2016. Hoaxy: A platform for tracking online misinformation, in: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 745-750.
Shi, B., Weninger, T., 2016. Discriminative predicate path mining for fact checking in knowledge graphs. Knowledge-Based Systems 104, 123-133.
Shi, L., Bhattacharya, N., Das, A., Lease, M., Gwizdka, J., 2022. The effects of interactive AI design on user behavior: An eye-tracking study of fact-checking COVID-19 claims, in: Proceedings of the 7th ACM SIGIR Conference on Human Information, Interaction and Retrieval (CHIIR). URL: https://utexas.box.com/v/shi-chiir2022.
Shu, K., Cui, L., Wang, S., Lee, D., Liu, H., 2019. dEFEND: Explainable fake news detection, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 395-405.
Shu, K., Mahudeswaran, D., Wang, S., Lee, D., Liu, H., 2020. FakeNewsNet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media. Big Data 8(3), 171-188.
Shu, K., Sliva, A., Wang, S., Tang, J., Liu, H., 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter 19, 22-36.
Singh, P., Das, A., Li, J.J., Lease, M., 2021. The case for claim difficulty assessment in automatic fact checking. arXiv preprint arXiv:2109.09689.
Smeros, P., Castillo, C., Aberer, K., 2021. SciClops: Detecting and contextualizing scientific claims for assisting manual fact-checking, in: Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM).
Sokol, K., Flach, P., 2019. Desiderata for interpretability: Explaining decision tree predictions with counterfactuals, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 10035-10036.
Soprano, M., Roitero, K., La Barbera, D., Ceolin, D., Spina, D., Mizzaro, S., Demartini, G., 2021. The many dimensions of truthfulness: Crowdsourcing misinformation assessments on a multidimensional scale. Information Processing & Management 58, 102710.
The Poynter Institute, 2021. Global Fact 8 Pre-recorded Segment 4. URL: https://www.youtube.com/watch?v=gOhPKDaeQxI&t=770s.
Thorne, J., Vlachos, A., 2019. Adversarial attacks against fact extraction and verification. arXiv preprint arXiv:1903.05543.
Thorne, J., Vlachos, A., Christodoulopoulos, C., Mittal, A., 2018. FEVER: A large-scale dataset for fact extraction and verification, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 809-819.
Thorne, J., Vlachos, A., Cocarascu, O., Christodoulopoulos, C., Mittal, A., 2019. The FEVER2.0 shared task, in: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pp. 1-6.
Thornhill, C., Meeus, Q., Peperkamp, J., Berendt, B., 2019. A digital nudge to counter confirmation bias. Frontiers in Big Data 2, 11.
Tschiatschek, S., Singla, A., Gomez Rodriguez, M., Merchant, A., Krause, A., 2018. Fake news detection in social networks via crowd signals, in: Companion Proceedings of the The Web Conference 2018, pp. 517-524.
Uscinski, J.E., 2015. The epistemology of fact checking (is still naïve): Rejoinder to Amazeen. Critical Review 27, 243-252.
Vaish, R., Davis, J., Bernstein, M., 2015. Crowdsourcing the research process. Collective Intelligence 3.
Vaish, R., Gaikwad, S.N.S., Kovacs, G., Veit, A., Krishna, R., Arrieta Ibarra, I., Simoiu, C., Wilber, M., Belongie, S., Goel, S., et al., 2017. Crowd research: Open and scalable university laboratories, in: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 829-843.
Vaughan, J.W., Wallach, H., 2020. A human-centered agenda for intelligible machine learning. Machines We Trust: Getting Along with Artificial Intelligence.
Veale, M., Van Kleek, M., Binns, R., 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14.
Vicario, M.D., Quattrociocchi, W., Scala, A., Zollo, F., 2019. Polarization and fake news: Early warning of potential misinformation targets. ACM Transactions on the Web (TWEB) 13, 1-22.
Vlachos, A., Riedel, S., 2014. Fact checking: Task definition and dataset construction, in: Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pp. 18-22.
Vlachos, A., Riedel, S., 2015. Identification and verification of simple claims about statistical properties, in: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Lisbon, Portugal, pp. 2596-2601. URL: https://aclanthology.org/D15-1312, doi:10.18653/v1/D15-1312.
Vo, N., Lee, K., 2018. The rise of guardians: Fact-checking URL recommendation to combat fake news, in: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 275-284.
Wadden, D., Lin, S., Lo, K., Wang, L.L., van Zuylen, M., Cohan, A., Hajishirzi, H., 2020. Fact or fiction: Verifying scientific claims, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 7534-7550.
Wang, W.Y., 2017. "Liar, liar pants on fire": A new benchmark dataset for fake news detection, in: ACL.
Wiegreffe, S., Marasovic, A., 2021. Teach me to explain: A review of datasets for explainable natural language processing, in: Vanschoren, J., Yeung, S. (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks. URL: https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/698d51a19d8a121ce581499d7b701668-Paper-round1.pdf.
Wiegreffe, S., Pinter, Y., 2019. Attention is not not explanation, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 11-20.
Williams, E., Rodrigues, P., Novak, V., 2020. Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformer-based models. arXiv preprint arXiv:2009.02431.
Yang, F., Pentyala, S.K., Mohseni, S., Du, M., Yuan, H., Linder, R., Ragan, E.D., Ji, S., Hu, X., 2019. XFake: Explainable fake news detector with visualizations, in: The World Wide Web Conference, pp. 3600-3604.
Zaidan, O., Eisner, J., Piatko, C., 2007. Using "annotator rationales" to improve machine learning for text categorization, in: Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pp. 260-267.
Zanzotto, F.M., 2019. Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research 64, 243-252.
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., Choi, Y., 2020. Defending against neural fake news. NeurIPS.
Zeng, X., Abumansour, A.S., Zubiaga, A., 2021. Automated fact-checking: A survey. Language and Linguistics Compass 15, e12438.
Zhang, X., Cao, J., Li, X., Sheng, Q., Zhong, L., Shu, K., 2021a. Mining dual emotion for fake news detection, in: Proceedings of the Web Conference 2021, pp. 3465-3476.
Zhang, Y., Lease, M., Wallace, B., 2017. Active discriminative text representation learning, in: Proceedings of the AAAI Conference on Artificial Intelligence.
Zhang, Z., Rudra, K., Anand, A., 2021b. FaxPlainAC: A fact-checking tool based on explainable models with human correction in the loop, in: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 4823-4827.
Zhou, X., Jain, A., Phoha, V.V., Zafarani, R., 2020. Fake news early detection: A theory-driven model. Digital Threats: Research and Practice 1, 1-25.
Zhou, X., Zafarani, R., 2020. A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Computing Surveys (CSUR) 53, 1-40.
Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., Procter, R., 2018. Detection and resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR) 51, 1-36.
Zubiaga, A., Liakata, M., Procter, R., Wong Sak Hoi, G., Tolmie, P., 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE 11, e0150989.
| [] |
[
"Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference",
"Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference"
] | [
"Timo Schick schickt@cis.lmu.deinquiries@cislmu.org \nCenter for Information and Language Processing\nLMU Munich\nGermany\n\nSulzer GmbH\nMunichGermany\n",
"Hinrich Schütze \nCenter for Information and Language Processing\nLMU Munich\nGermany\n"
] | [
"Center for Information and Language Processing\nLMU Munich\nGermany",
"Sulzer GmbH\nMunichGermany",
"Center for Information and Language Processing\nLMU Munich\nGermany"
] | [
"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics"
] | Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in lowresource settings by a large margin. 1 | 10.18653/v1/2021.eacl-main.20 | [
"https://www.aclweb.org/anthology/2021.eacl-main.20.pdf"
] | 210,838,924 | 2001.07676 | e6fa5deb0bc9ac7f6c90a41451d1edfd29a4f971 |
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
April 19 -23, 2021
Timo Schick schickt@cis.lmu.de inquiries@cislmu.org
Center for Information and Language Processing
LMU Munich
Germany
Sulzer GmbH
MunichGermany
Hinrich Schütze
Center for Information and Language Processing
LMU Munich
Germany
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics
the 16th Conference of the European Chapter of the Association for Computational Linguistics, April 19-23, 2021, page 255
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in lowresource settings by a large margin. 1
Introduction
Learning from examples is the predominant approach for many NLP tasks: A model is trained on a set of labeled examples from which it then generalizes to unseen data. Due to the vast number of languages, domains and tasks and the cost of annotating data, it is common in real-world uses of NLP to have only a small number of labeled examples, making few-shot learning a highly important research area. Unfortunately, applying standard supervised learning to small training sets often performs poorly; many problems are difficult to grasp from just looking at a few examples. For instance, assume we are given the following pieces of text:
• T 1 : This was the best pizza I've ever had.
• T 2 : You can get better sushi for half the price.
• T 3 : Pizza was average. Not worth the price.
1 Our implementation is publicly available at https://github.com/timoschick/pet.
Furthermore, imagine we are told that the labels of T 1 and T 2 are l′ and l′′, respectively, and we are asked to infer the correct label for T 3 . Based only on these examples, this is impossible because plausible justifications can be found for both l′ and l′′. However, if we know that the underlying task is to identify whether the text says anything about prices, we can easily assign l′′ to T 3 . This illustrates that solving a task from only a few examples becomes much easier when we also have a task description, i.e., a textual explanation that helps us understand what the task is about.
With the rise of pretrained language models (PLMs) such as GPT (Radford et al., 2018), BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), the idea of providing task descriptions has become feasible for neural architectures: We can simply append such descriptions in natural language to an input and let the PLM predict continuations that solve the task (Radford et al., 2019;Puri and Catanzaro, 2019). So far, this idea has mostly been considered in zero-shot scenarios where no training data is available at all.
In this work, we show that providing task descriptions can successfully be combined with standard supervised learning in few-shot settings: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that uses natural language patterns to reformulate input examples into cloze-style phrases. As illustrated in Figure 1, PET works in three steps: First, for each pattern a separate PLM is finetuned on a small training set T . The ensemble of all models is then used to annotate a large unlabeled dataset D with soft labels. Finally, a standard classifier is trained on the soft-labeled dataset. We also devise iPET, an iterative variant of PET in which this process is repeated with increasing training set sizes.
On a diverse set of tasks in multiple languages, we show that given a small to medium number of labeled examples, PET and iPET substantially outperform unsupervised approaches, supervised training and strong semi-supervised baselines.
Related Work
Radford et al. (2019) provide hints in the form of natural language patterns for zero-shot learning of challenging tasks such as reading comprehension and question answering (QA). This idea has been applied to unsupervised text classification (Puri and Catanzaro, 2019), commonsense knowledge mining (Davison et al., 2019) and argumentative relation classification (Opitz, 2019). Srivastava et al. (2018) use task descriptions for zero-shot classification but require a semantic parser. For relation extraction, Bouraoui et al. (2020) automatically identify patterns that express given relations. McCann et al. (2018) rephrase several tasks as QA problems. Raffel et al. (2020) frame various problems as language modeling tasks, but their patterns only loosely resemble natural language and are unsuitable for few-shot learning. 2 Another recent line of work uses cloze-style phrases to probe the knowledge that PLMs acquire during pretraining; this includes probing for factual and commonsense knowledge (Trinh and Le, 2018; Petroni et al., 2019; Wang et al., 2019; Sakaguchi et al., 2020), linguistic capabilities (Ettinger, 2020; Kassner and Schütze, 2020), understanding of rare words (Schick and Schütze, 2020), and ability to perform symbolic reasoning (Talmor et al., 2019). Jiang et al. (2020) consider the problem of finding the best pattern to express a given task.
Other approaches for few-shot learning in NLP include exploiting examples from related tasks (Yu et al., 2018;Gu et al., 2018;Dou et al., 2019;Qian and Yu, 2019;Yin et al., 2019) and using data augmentation (Xie et al., 2020;Chen et al., 2020); the latter commonly relies on back-translation (Sennrich et al., 2016), requiring large amounts of parallel data. Approaches using textual class descriptors typically assume that abundant examples are available for a subset of classes (e.g., Romera-Paredes and Torr, 2015;Veeranna et al., 2016;Ye et al., 2020). In contrast, our approach requires no additional labeled data and provides an intuitive interface to leverage task-specific human knowledge.
The idea behind iPET -training multiple generations of models on data labeled by previous generations -bears resemblance to self-training and bootstrapping approaches for word sense disambiguation (Yarowsky, 1995), relation extraction (Brin, 1999;Agichtein and Gravano, 2000;Batista et al., 2015), parsing (McClosky et al., 2006;Reichart and Rappoport, 2007;Huang andHarper, 2009), machine translation (Hoang et al., 2018), and sequence generation (He et al., 2020).
Pattern-Exploiting Training
Let M be a masked language model with vocabulary V and mask token ∈ V , and let L be a set of labels for our target classification task A. We write an input for task A as a sequence of phrases x = (s 1 , . . . , s k ) with s i ∈ V * ; for example, k = 2 if A is textual inference (two input sentences). We define a pattern to be a function P that takes x as input and outputs a phrase or sentence P (x) ∈ V * that contains exactly one mask token, i.e., its output can be viewed as a cloze question. Furthermore, we define a verbalizer as an injective function v : L → V that maps each label to a word from M 's vocabulary. We refer to (P, v) as a pattern-verbalizer pair (PVP).
Using a PVP (P, v) enables us to solve task A as follows: Given an input x, we apply P to obtain an input representation P(x), which is then processed by M to determine the label y ∈ L for which v(y) is the most likely substitute for the mask. For example, consider the task of identifying whether two sentences a and b contradict each other (label y0) or agree with each other (y1). For this task, we may choose the pattern P(a, b) = "a? ___, b" combined with a verbalizer v that maps y0 to "Yes" and y1 to "No". Given an example input pair x = (Mia likes pie, Mia hates pie), the task now changes from having to assign a label without inherent meaning to answering whether the most likely choice for the masked position in P(x) = "Mia likes pie? ___, Mia hates pie." is "Yes" or "No".
PVP Training and Inference
Let p = (P, v) be a PVP. We assume access to a small training set T and a (typically much larger) set of unlabeled examples D. For each sequence z ∈ V * that contains exactly one mask token and w ∈ V , we denote with M (w | z) the unnormalized score that the language model assigns to w at the masked position. Given some input x, we define the score for label l ∈ L as
s_p(l \mid x) = M(v(l) \mid P(x))
and obtain a probability distribution over labels using softmax:
q_p(l \mid x) = \frac{e^{s_p(l \mid x)}}{\sum_{l' \in L} e^{s_p(l' \mid x)}}
We use the cross-entropy between q p (l | x) and the true (one-hot) distribution of training example (x, l) -summed over all (x, l) ∈ T -as loss for finetuning M for p.
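The scoring and normalization step can be sketched with the Hugging Face transformers library as follows; this is an illustrative example rather than the reference implementation, and the helper name is ours:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")
model.eval()

def label_distribution(pattern_text, verbalizer):
    """Compute q_p(l | x): a softmax over the verbalizer words at the mask position."""
    inputs = tokenizer(pattern_text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos.item()]  # scores over the whole vocabulary
    # s_p(l | x): unnormalized score of each label's verbalization v(l)
    scores = torch.stack([
        logits[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]]
        for word in verbalizer.values()
    ])
    probs = torch.softmax(scores, dim=0)  # normalize over the labels only
    return dict(zip(verbalizer.keys(), probs.tolist()))

print(label_distribution(f"Mia likes pie? {tokenizer.mask_token}, Mia hates pie.", {0: "Yes", 1: "No"}))
```

During finetuning, the cross-entropy between this distribution and the gold label would be minimized, as described above.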
Auxiliary Language Modeling
In our application scenario, only a few training examples are available and catastrophic forgetting can occur. As a PLM finetuned for some PVP is still a language model at its core, we address this by using language modeling as auxiliary task. With L CE denoting cross-entropy loss and L MLM language modeling loss, we compute the final loss as
L = (1 − α) · L_{CE} + α · L_{MLM}
This idea was recently applied by Chronopoulou et al. (2019) in a data-rich scenario. As L MLM is typically much larger than L CE , in preliminary experiments, we found a small value of α = 10 −4 to consistently give good results, so we use it in all our experiments. To obtain sentences for language modeling, we use the unlabeled set D. However, we do not train directly on each x ∈ D, but rather on P (x), where we never ask the language model to predict anything for the masked slot.
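As a sketch, the combined objective can be computed as a weighted sum of the two losses once they are available per batch; the tensor shapes and toy values below are illustrative only:

```python
import torch
import torch.nn.functional as F

ALPHA = 1e-4  # weight of the auxiliary language modeling loss, as chosen above

def pet_loss(label_logits, gold_labels, mlm_logits, mlm_labels):
    """L = (1 - alpha) * L_CE + alpha * L_MLM.

    label_logits: scores over task labels for the labeled examples
    mlm_logits:   vocabulary logits for the unlabeled P(x) examples
    mlm_labels:   masked-token targets, with ignored positions set to -100
    """
    loss_ce = F.cross_entropy(label_logits, gold_labels)
    loss_mlm = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1), ignore_index=-100
    )
    return (1 - ALPHA) * loss_ce + ALPHA * loss_mlm

# toy example
label_logits = torch.randn(4, 3)         # 4 labeled examples, 3 labels
gold = torch.tensor([0, 2, 1, 0])
mlm_logits = torch.randn(4, 16, 50265)   # 4 unlabeled examples, 16 tokens, vocabulary size
mlm_labels = torch.full((4, 16), -100)
mlm_labels[:, 3] = 42                    # pretend one position per example is masked
print(pet_loss(label_logits, gold, mlm_logits, mlm_labels))
```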
Combining PVPs
A key challenge for our approach is that in the absence of a large development set, it is hard to identify which PVPs perform well. To address this, we use a strategy similar to knowledge distillation (Hinton et al., 2015). First, we define a set P of PVPs that intuitively make sense for a given task A. We then use these PVPs as follows:
(1) We finetune a separate language model M p for each p ∈ P as described in Section 3.1.
As T is small, this finetuning is cheap even for a large number of PVPs.
(2) We use the ensemble M = {M p | p ∈ P} of finetuned models to annotate examples from D. We first combine the unnormalized class scores for each example x ∈ D as
s_M(l \mid x) = \frac{1}{Z} \sum_{p \in P} w(p) \cdot s_p(l \mid x)
where Z = \sum_{p \in P} w(p) and the w(p) are weighting terms for the PVPs. We experiment with two different realizations of this weighting term: either we simply set w(p) = 1 for all p or we set w(p) to be the accuracy obtained using p on the training set before training. We refer to these two variants as uniform and weighted. Jiang et al. (2020) use a similar idea in a zero-shot setting.
We transform the above scores into a probability distribution q using softmax. Following Hinton et al. (2015), we use a temperature of T = 2 to obtain a suitably soft distribution. All pairs (x, q) are collected in a (soft-labeled) training set T C .
(3) We finetune a PLM C with a standard sequence classification head on T C .
The finetuned model C then serves as our classifier for A. All steps described above are depicted in Figure 2; an example is shown in Figure 1.
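Step (2) above, i.e. combining the per-PVP scores with the weighting terms and a softmax temperature into soft labels, can be sketched as follows (illustrative names; not the reference implementation):

```python
import numpy as np

def soft_label(scores_per_pvp, weights, temperature=2.0):
    """Combine unnormalized per-PVP label scores s_p(l | x) into one soft label q.

    scores_per_pvp: array of shape (num_pvps, num_labels)
    weights:        w(p) per PVP -- all ones ("uniform") or training-set accuracies ("weighted")
    """
    scores = np.asarray(scores_per_pvp, dtype=float)
    w = np.asarray(weights, dtype=float)
    combined = (w[:, None] * scores).sum(axis=0) / w.sum()   # s_M(l | x)
    z = combined / temperature
    z -= z.max()                                             # numerical stability
    q = np.exp(z) / np.exp(z).sum()
    return q

# two PVPs, three labels; the second PVP is trusted more ("weighted" variant)
print(soft_label([[2.0, 0.5, 0.1], [1.5, 1.0, 0.2]], weights=[0.4, 0.8]))
```

The resulting soft labels for all x in D form the distillation set T_C on which the final classifier is trained.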
Iterative PET (iPET)
Distilling the knowledge of all individual models into a single classifier C means they cannot learn from each other. As some patterns perform (possibly much) worse than others, the training set T C for our final model may therefore contain many mislabeled examples.
To compensate for this shortcoming, we devise iPET, an iterative variant of PET. The core idea of iPET is to train several generations of models on datasets of increasing size. To this end, we first enlarge the original dataset T by labeling selected examples from D using a random subset of trained PET models ( Figure 2a). We then train a new generation of PET models on the enlarged dataset (b); this process is repeated several times (c).
More formally, let M 0 = {M 0 1 , . . . , M 0 n } be the initial set of PET models finetuned on T , where each M 0 i is trained for some PVP p i . We train k generations of models M 1 , . . . , M k where M j = {M j 1 , . . . , M j n } and each M j i is trained for p i on its own training set T j i . In each iteration, we multiply the training set size by a fixed constant d ∈ N while maintaining the label ratio of the original dataset. That is, with c 0 (l) denoting the number of examples with label l in T , each T j i contains c j (l) = d · c j−1 (l) examples with label l. This is achieved by generating each T j i as follows:
1. We obtain N ⊂ M j−1 \ {M j−1 i } by randomly choosing λ · (n − 1) models from the previous generation with λ ∈ (0, 1] being a hyperparameter.
2. Using this subset, we create a labeled dataset
T_N = \{(x, \arg\max_{l \in L} s_N(l \mid x)) \mid x \in D\}. For each l ∈ L, we obtain T_N(l) ⊂ T_N by randomly choosing c_j(l) − c_0(l) examples with label l from T_N.
To avoid training future generations on mislabeled data, we prefer examples for which the ensemble of models is confident in its prediction. The underlying intuition is that even without calibration, examples for which labels are predicted with high confidence are typically more likely to be classified correctly (Guo et al., 2017). Therefore, when drawing from T N , we set the probability of each (x, y) proportional to s N (l | x).
3. We define T_i^j = T ∪ \bigcup_{l \in L} T_N(l). As can easily be verified, this dataset contains c_j(l) examples for each l ∈ L.
After training k generations of PET models, we use M k to create T C and train C as in basic PET.
With minor adjustments, iPET can even be used in a zero-shot setting. To this end, we define M 0 to be the set of untrained models and c 1 (l) = 10/|L| for all l ∈ L so that M 1 is trained on 10 examples evenly distributed across all labels. As T N may not contain enough examples for some label l, we create all T N (l) by sampling from the 100 examples x ∈ D for which s N (l | x) is the highest, even if l = arg max l∈L s N (l | x). For each subsequent generation, we proceed exactly as in basic iPET.
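The dataset-growing step of iPET can be sketched as follows; this is an illustrative implementation that assumes non-negative ensemble scores (e.g., averaged probabilities) and that the scores s_N have already been computed for every unlabeled example:

```python
import numpy as np

def next_generation_trainset(base_trainset, unlabeled_scores, extra_per_label, rng=None):
    """Sketch of building T_i^j: the original T plus c_j(l) - c_0(l) new examples per label.

    base_trainset:    list of (x, label) pairs (the original T)
    unlabeled_scores: list of (x, {label: s_N(label | x)}) for every x in D,
                      scored by the randomly chosen ensemble N
    extra_per_label:  dict label -> number of additional examples to draw
    """
    rng = np.random.default_rng() if rng is None else rng

    # label each unlabeled example with the ensemble's argmax prediction
    pool = {}
    for x, scores in unlabeled_scores:
        label = max(scores, key=scores.get)
        pool.setdefault(label, []).append((x, scores[label]))

    new_examples = []
    for label, k in extra_per_label.items():
        candidates = pool.get(label, [])
        if not candidates:
            continue
        conf = np.array([c for _, c in candidates], dtype=float)
        probs = conf / conf.sum()  # draw proportionally to s_N(l | x); assumes non-negative scores
        idx = rng.choice(len(candidates), size=min(k, len(candidates)), replace=False, p=probs)
        new_examples.extend((candidates[i][0], label) for i in idx)

    return base_trainset + new_examples
```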
Experiments
We evaluate PET on four English datasets: Yelp Reviews, AG's News, Yahoo Questions (Zhang et al., 2015) and MNLI (Williams et al., 2018). Additionally, we use x-stance (Vamvas and Sennrich, 2020) to investigate how well PET works for other languages. For all experiments on English, we use RoBERTa large (Liu et al., 2019) as language model; for x-stance, we use XLM-R (Conneau et al., 2020). We investigate the performance of PET and all baselines for different training set sizes; each model is trained three times using different seeds and average results are reported.
As we consider a few-shot setting, we assume no access to a large development set on which hyperparameters could be optimized. Our choice of hyperparameters is thus based on choices made in previous work and practical considerations. We use a learning rate of 1 · 10 −5 , a batch size of 16 and a maximum sequence length of 256. Unless otherwise specified, we always use the weighted variant of PET with auxiliary language modeling. For iPET, we set λ = 0.25 and d = 5; that is, we select 25% of all models to label examples for the next generation and quintuple the number of training examples in each iteration. We train new generations until each model was trained on at least 1000 examples, i.e., we set k = log d (1000/|T |) . As we always repeat training three times, the ensemble M (or M 0 ) for n PVPs contains 3n models. Further hyperparameters and detailed explanations for all our choices are given in Appendix B.
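For example, the number of generations implied by this schedule can be computed with a small helper (illustrative; it assumes the ceiling implied by the formula above):

```python
import math

def num_ipet_generations(train_size, d=5, target=1000):
    """k = ceil(log_d(target / |T|)): generations until each model has seen about `target` examples."""
    return math.ceil(math.log(target / train_size, d))

for t in (10, 50, 100, 1000):
    print(t, num_ipet_generations(t))  # |T| = 10 -> 3 generations, 50 -> 2, 100 -> 2, 1000 -> 0
```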
Patterns
We now describe the patterns and verbalizers used for all tasks. We use two vertical bars ( || ) to mark boundaries between text segments. 3
Yelp For the Yelp Reviews Full Star dataset (Zhang et al., 2015), the task is to estimate the rating that a customer gave to a restaurant on a 1- to 5-star scale based on their review's text. We define the following patterns for an input text a:
P1(a) = It was ___. a    P2(a) = Just ___! a    P3(a) = a. All in all, it was ___.    P4(a) = a In summary, the restaurant is ___.
3 The way different segments are handled depends on the model being used; they may e.g. be assigned different embeddings (Devlin et al., 2019) or separated by special tokens (Liu et al., 2019; Yang et al., 2019). For example, "a || b" is given to BERT as the input "[CLS] a [SEP] b [SEP]".
We define a single verbalizer v for all patterns as
v(1) = terrible, v(2) = bad, v(3) = okay, v(4) = good, v(5) = great
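Written out in code, the Yelp patterns and verbalizer could look as follows (an illustrative sketch mirroring the definitions above; MASK stands for the model's mask token):

```python
MASK = "<mask>"  # RoBERTa's mask token; model-dependent

YELP_PATTERNS = [
    lambda a: f"It was {MASK}. {a}",
    lambda a: f"Just {MASK}! {a}",
    lambda a: f"{a}. All in all, it was {MASK}.",
    lambda a: f"{a} In summary, the restaurant is {MASK}.",
]

YELP_VERBALIZER = {1: "terrible", 2: "bad", 3: "okay", 4: "good", 5: "great"}

review = "Best pizza ever!"
for pattern in YELP_PATTERNS:
    print(pattern(review))
```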
AG's News AG's News is a news classification dataset, where given a headline a and text body b, news have to be classified as belonging to one of the categories World (1), Sports (2), Business (3) or Science/Tech (4). For x = (a, b), we define the following patterns:
P1(x) = ___: a b    P2(x) = a (___) b    P3(x) = ___ - a b    P4(x) = a b (___)    P5(x) = ___ News: a b    P6(x) = [ Category: ___ ] a b
We use a verbalizer that maps 1-4 to "World", "Sports", "Business" and "Tech", respectively.
Yahoo Yahoo Questions (Zhang et al., 2015) is a text classification dataset. Given a question a and an answer b, one of ten possible categories has to be assigned. We use the same patterns as for AG's News, but we replace the word "News" in P 5 with the word "Question". We define a verbalizer that maps categories 1-10 to "Society", "Science", "Health", "Education", "Computer", "Sports", "Business", "Entertainment", "Relationship" and "Politics".
MNLI The MNLI dataset (Williams et al., 2018) consists of text pairs x = (a, b). The task is to find out whether a implies b (0), a and b contradict each other (1) or neither (2). We define
P 1 (x) = "a"? , "b" P 2 (x) = a? , b
and consider two different verbalizers v 1 and v 2 :
v 1 (0) = Wrong v 1 (1) = Right v 1 (2) = Maybe v 2 (0) = No v 2 (1) = Yes v 2 (2) = Maybe
Combining the two patterns with the two verbalizers results in a total of 4 PVPs.
X-Stance The x-stance dataset (Vamvas and Sennrich, 2020) is a multilingual stance detection dataset with German, French and Italian examples.
Each example x = (a, b) consists of a question a concerning some political issue and a comment b; the task is to identify whether the writer of b supports the subject of the question (0) or not (1). We use two simple patterns
P 1 (x) = "a" . "b" P 2 (x) = a . b
and define an English verbalizer v En mapping 0 to "Yes" and 1 to "No" as well as a French (German) verbalizer v Fr (v De ), replacing "Yes" and "No" with "Oui" and "Non" ("Ja" and "Nein"). We do not define an Italian verbalizer because x-stance does not contain any Italian training examples.
Results
English Datasets Table 1 shows results for English text classification and language understanding tasks; we report mean accuracy and standard deviation for three training runs. We also compare against the semi-supervised baselines UDA and MixText (see Appendix B.4): Table 2 shows that PET and iPET substantially outperform both methods across all tasks, clearly demonstrating the benefit of incorporating human knowledge in the form of PVPs.
X-Stance
We evaluate PET on x-stance to investigate (i) whether it works for languages other than English and (ii) whether it also brings improvements when training sets have medium size. In contrast to Vamvas and Sennrich (2020), we do not perform any hyperparameter optimization on dev and use a shorter maximum sequence length (256 vs 512) to speed up training and evaluation. To investigate whether PET brings benefits even when numerous examples are available, we consider training set sizes of 1000, 2000, and 4000; for each of these configurations, we separately finetune French and German models to allow for a more straightforward downsampling of the training data. Additionally, we train models on the entire French (|T Fr | = 11 790) and German (|T De | = 33 850) training sets. In this case we do not have any additional unlabeled data, so we simply set D = T . For the French models, we use v En and v Fr as verbalizers and for German v En and v De (Section 4.1). Finally, we also investigate the performance of a model trained jointly on French and German data (|T Fr + T De | = 45 640) using v En , v Fr and v De .
Table 4: Minimum (min) and maximum (max) accuracy of models based on individual PVPs as well as PET with and without knowledge distillation (|T| = 10).
Results are shown in Table 3; following Vamvas and Sennrich (2020), we report the macro-average of the F1 scores for labels 0 and 1, averaged over three runs. For Italian (column "It"), we report the average zero-shot cross-lingual performance of German and French models as there are no Italian training examples. Our results show that PET brings huge improvements across all languages even when training on much more than a thousand examples; it also considerably improves zero-shot cross-lingual performance.
Analysis
Combining PVPs We first investigate whether PET is able to cope with situations were some PVPs perform much worse than others. For |T | = 10, Table 4 compares the performance of PET to that of the best and worst performing patterns after finetuning; we also include results obtained using the ensemble of PET models corresponding to individual PVPs without knowledge distillation. Even after finetuning, the gap between the best and worst pattern is large, especially for Yelp. However, PET is not only able to compensate for this, but even improves accuracies over using only the bestperforming pattern across all tasks. Distillation brings consistent improvements over the ensemble; additionally, it significantly reduces the size of the final classifier. We find no clear difference between the uniform and weighted variants of PET.
Auxiliary Language Modeling
We analyze the influence of the auxiliary language modeling task on PET's performance. Figure 3 shows performance improvements from adding the language modeling task for four training set sizes. We see that the auxiliary task is extremely valuable when training on just 10 examples. With more data, it becomes less important, sometimes even leading to worse performance. Only for MNLI, we find language modeling to consistently help.
Iterative PET To check whether iPET is able to improve models over multiple generations, Figure 4 shows the average performance of all generations of models in a zero-shot setting. Each additional iteration does indeed further improve the ensemble's performance. We did not investigate whether continuing this process for even more iterations gives further improvements. Another natural question is whether similar results can be obtained with fewer iterations by increasing the training set size more aggressively. To answer this question, we skip generations 2 and 3 for AG's News and Yahoo and for both tasks directly let ensemble M^1 annotate 10 · 5^4 examples for M^4. As indicated in Figure 4 through dashed lines, this clearly leads to worse performance, highlighting the importance of only gradually increasing the training set size. We surmise that this is the case because annotating too many examples too early leads to a large percentage of mislabeled training examples.
In-Domain Pretraining Unlike our supervised baseline, PET makes use of the additional unlabeled dataset D. Thus, at least some of PET's performance gains over the supervised baseline may arise from this additional in-domain data.
To test this hypothesis, we simply further pretrain RoBERTa on in-domain data, a common technique for improving text classification accuracy (e.g., Howard and Ruder, 2018;Sun et al., 2019). As language model pretraining is expensive in terms of GPU usage, we do so only for the Yelp dataset. Figure 5 shows results of supervised learning and PET both with and without this indomain pretraining. While pretraining does indeed improve accuracy for supervised training, the supervised model still clearly performs worse than PET, showing that the success of our method is not simply due to the usage of additional unlabeled data. Interestingly, in-domain pretraining is also helpful for PET, indicating that PET leverages unlabeled data in a way that is clearly different from standard masked language model pretraining.
Conclusion
We have shown that providing task descriptions to pretrained language models can be combined with standard supervised training. Our proposed method, PET, consists of defining pairs of cloze question patterns and verbalizers that help leverage the knowledge contained within pretrained language models for downstream tasks. We finetune models for all pattern-verbalizer pairs and use them to create large annotated datasets on which standard classifiers can be trained. When the initial amount of training data is limited, PET gives large improvements over standard supervised training and strong semi-supervised approaches.
A Implementation
Our implementation of PET and iPET is based on the Transformers library (Wolf et al., 2020) and PyTorch (Paszke et al., 2017).
B Training Details
Except for the in-domain pretraining experiment described in Section 5, all of our experiments were conducted using a single GPU with 11GB RAM (NVIDIA GeForce GTX 1080 Ti).
B.1 Hyperparameter Choices
Relevant training hyperparameters for both individual PET models and the final classifier C as well as our supervised baseline are listed in Table 5. All hyperparameters were selected based on the following considerations and experiments:
Batch size / maximum length Both batch size and maximum sequence length (or block size) are chosen so that one batch fits into 11GB of GPU memory. As Devlin et al. (2019) and Liu et al. (2019) use larger batch sizes of 16-32, we accumulate gradients for 4 steps to obtain an effective batch size of 16.
Learning rate We found a learning rate of 5e−5 (as used by Devlin et al. (2019)) to often result in unstable training for regular supervised learning with no accuracy improvements on the training set. We therefore use a lower learning rate of 1e−5, similar to Liu et al. (2019). Experiments with various learning rates can be found in Appendix D.
Training steps As the number of training epochs recommended by Liu et al. (2019) in a data-rich scenario is in the range 2-10, we perform supervised training for 250 training steps, corresponding to 4 epochs when training on 1000 examples. For individual PET models, we subdivide each batch into one labeled example from T to compute L CE and three unlabeled examples from D to compute L MLM . Accordingly, we multiply the number of total training steps by 4 (i.e., 1000), so that the number of times each labeled example is seen remains constant (16 · 250 = 4 · 1000). For the final PET classifier, we train for 5000 steps due to the increased training set size (depending on the task, the unlabeled set D contains at least 20 000 examples). Deviating from the above, we always perform training for 3 epochs on x-stance to match the setup of Vamvas and Sennrich (2020) more closely. The effect of varying the number of training steps is further investigated in Appendix D.
Temperature We choose a temperature of 2 when training the final classifier following Hinton et al. (2015).
Auxiliary language modeling To find a suitable value of α for combining language modeling loss and cross-entropy loss, we first observed that in the early stages of training, the former is a few orders of magnitude higher than the latter for all tasks considered. We thus selected a range {1e−3, 1e−4, 1e−5} of reasonable choices for α and performed preliminary experiments on Yelp with 100 training examples to find the best value among these candidates. To this end, we split the training examples into a training set and a dev set using both a 90/10 split and a 50/50 split and took the value of α that maximizes average dev set accuracy. We adopt this value for all other tasks and training set sizes without further optimization.
Models per ensemble As we always train three models per pattern, for both iPET and training the final classifier C, the ensemble M (or M 0 ) for n PVPs contains 3n models. This ensures consistency as randomly choosing any of the three models for each PVP would result in high variance. In preliminary experiments, we found this to have only little impact on the final model's performance.
iPET dataset size For iPET, we quintuple the number of training examples after each iteration (d = 5) so that only a small number of generations is required to reach a sufficient amount of labeled data. We did not choose a higher value because we presume that this may cause training sets for early generations to contain a prohibitively large amount of mislabeled data.
iPET dataset creation We create training sets for the next generation in iPET using 25% of the models in the current generation (λ = 0.25) because we want the training sets for all models to be diverse while at the same time, a single model should not have too much influence.
Others For all other hyperparameters listed in Table 5, we took the default settings of the Transformers library (Wolf et al., 2020).
B.2 Number of parameters
As PET does not require any additional learnable parameters, the number of parameters for both PET and iPET is identical to the number of parameters in the underlying language model: 355M for RoBERTa (large) and 270M for XLM-R (base). For MixText, we use the original implementation 5 and the default set of hyperparameters. Specifically, each batch consists of 4 labeled and 8 unlabeled examples, we use layers 7, 9 and 12 for mixing, we set T = 5, α = 16, and use a learning rate of 5 · 10 −6 for RoBERTa and 5 · 10 −4 for the final classification layer. We optimize the number of training steps for each task and dataset size in the range {1000, 2000, 3000, 4000, 5000}.
B.3 Average runtime
For UDA, we use a PyTorch-based reimplementation 6 . We use the same batch size as for MixText and the hyperparameter values recommended by Xie et al. (2020); we use an exponential schedule for training signal annealing and a learning rate of 2 · 10 −5 . We optimize the number of training steps for each task and dataset size in the range {500, 1000, 1500, . . . , 10000}.
B.5 In-Domain Pretraining
For in-domain pretraining experiments described in Section 5, we use the language model finetuning script of the Transformers library (Wolf et al., 2020); all hyperparameters are listed in the last column of Table 5. Pretraining was performed on a total of 3 NVIDIA GeForce GTX 1080 Ti GPUs. 5 https://github.com/GT-SALT/MixText 6 https://github.com/SanghunYun/UDA_pytorch
C Dataset Details
For each task and number of examples t, we create the training set T by collecting the first t/|L| examples per label from the original training set, where |L| is the number of labels for the task. Similarly, we construct the set D of unlabeled examples by selecting 10 000 examples per label and removing all labels. For evaluation, we use the official test set for all tasks except MNLI, for which we report results on the dev set; this is due to the limit of 2 submissions per 14 hours for the official MNLI test set. An overview of the number of test examples and links to downloadable versions of all used datasets can be found in Table 6.
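The construction of T and D can be sketched as follows. This is an illustrative reimplementation of the procedure just described, not the authors' code; the function and variable names are ours, and `examples` is assumed to be the original training set, in its original order, as (text, label) pairs.

```python
# Illustrative sketch of building the labeled set T and unlabeled set D as
# described above: the first t/|L| examples per label form T, and 10,000
# examples per label (with labels removed) form D.
from collections import defaultdict

def build_t_and_d(examples, labels, t, unlabeled_per_label=10_000):
    per_label = t // len(labels)          # t/|L| labeled examples per label
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append(text)
    T = [(x, l) for l in labels for x in by_label[l][:per_label]]
    D = [x for l in labels for x in by_label[l][:unlabeled_per_label]]  # labels dropped
    return T, D
```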
Preprocessing In some of the datasets used, newlines are indicated through the character sequence "\n". As the vocabularies of RoBERTa and XLM-R do not feature a newline, we replace this sequence with a single space. We do not perform any other preprocessing, except shortening all examples to the maximum sequence length of 256 tokens. This is done using the longest first strategy implemented in the Transformers library. For PET, all input sequences are truncated before applying patterns.
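A rough sketch of this preprocessing is given below; the longest-first truncation only approximates the strategy implemented in the Transformers library, and the tokenizer is left abstract.

```python
# Rough sketch of the preprocessing described above: literal "\n" sequences
# become a single space, and example pairs are trimmed to 256 tokens by
# repeatedly shortening whichever sequence is currently longer.
def preprocess_pair(text_a, text_b, tokenize, max_len=256):
    text_a = text_a.replace("\\n", " ")
    text_b = text_b.replace("\\n", " ") if text_b else ""
    toks_a, toks_b = tokenize(text_a), tokenize(text_b) if text_b else []
    while len(toks_a) + len(toks_b) > max_len:
        longer = toks_a if len(toks_a) >= len(toks_b) else toks_b
        longer.pop()                       # "longest first" truncation
    return toks_a, toks_b
```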
Evaluation metrics For Yelp, AG's News, Yahoo and MNLI, we use accuracy. For x-stance, we report macro-average of F1 scores using the evaluation script of Vamvas and Sennrich (2020).
D Hyperparameter Importance
To analyze the importance of hyperparameter choices for PET's performance gains over supervised learning, we look at the influence of both the learning rate (LR) and the number of training steps on their test set accuracies.
We try values of {1e−5, 2e−5, 5e−5} for the learning rate and {50, 100, 250, 500, 1000} for the number of training steps. As this results in 30 different configurations for just one task and training set size, we only perform this analysis on Yelp with 100 examples, for which results can be seen in Figure 6. For supervised learning, the configuration used throughout the paper (LR = 1e−5, 250 steps) turns out to perform best, whereas for PET, training for fewer steps consistently performs even better. Importantly, PET clearly outperforms regular supervised training regardless of the chosen learning rate and number of training steps.
Figure 1: PET for sentiment classification. (1) A number of patterns encoding some form of task description are created to convert training examples to cloze questions; for each pattern, a pretrained language model is finetuned. (2) The ensemble of trained models annotates unlabeled data. (3) A classifier is trained on the resulting soft-labeled dataset.

Figure 2: Schematic representation of PET (1-3) and iPET (a-c). (1) The initial training set is used to finetune an ensemble of PLMs. (a) For each model, a random subset of other models generates a new training set by labeling examples from D. (b) A new set of PET models is trained using the larger, model-specific datasets. (c) The previous two steps are repeated k times, each time increasing the size of the generated training sets by a factor of d.

Figure 3: Accuracy improvements for PET due to adding L MLM during training.

Figure 4: Average accuracy for each generation of models with iPET in a zero-shot setting. Accuracy on AG's News and Yahoo when skipping generation 2 and 3 is indicated through dashed lines.

Figure 5: Accuracy of supervised learning (sup.) and PET both with and without pretraining (PT) on Yelp.

Figure 6: Performance of supervised learning and PET (weighted, without auxiliary language modeling) for various learning rates and training steps on Yelp with 100 training examples.
Table 1: Average accuracy and standard deviation for RoBERTa (large) on Yelp, AG's News, Yahoo and MNLI (m: matched / mm: mismatched) for five training set sizes |T|.

Table 2: Comparison of PET with two state-of-the-art semi-supervised methods using RoBERTa (base).

As we increase the training set size, the performance gains of PET and iPET become smaller, but for both 50 and 100 examples, PET continues to considerably outperform standard supervised training (L8 vs L7, L11 vs L10), with iPET (L9, L12) still giving consistent improvements. For |T| = 1000, PET has no advantage on AG's but still improves accuracy for all other tasks (L14 vs L13). 4

Comparison with SotA We compare PET to UDA (Xie et al., 2020) and MixText (Chen et al., 2020), two state-of-the-art methods for semi-supervised learning in NLP that rely on data augmentation. Whereas PET requires that a task can be expressed using patterns and that such patterns be found, UDA and MixText both use backtranslation (Sennrich et al., 2016) and thus require thousands of labeled examples for training a machine translation model. We use RoBERTa (base) for our comparison as MixText is specifically tailored towards a 12-layer Transformer (Vaswani et al., 2017). Both Xie et al. (2020) and Chen et al. (2020) use large development sets to optimize the number of training steps. We instead try several values for both approaches directly on the test set and only report the best results obtained. Despite this, …

Table 3: Results on x-stance intra-target for XLM-R (base) trained on subsets of T_De and T_Fr and for joint training on all data (T_De + T_Fr). (*): Best results for mBERT reported in Vamvas and Sennrich (2020).
David S. Batista, Bruno Martins, and Mário J. Silva. 2015. Semi-supervised bootstrapping of relationship extractors with distributional semantics. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 499-504, Lisbon, Portugal. Association for Computational Linguistics.

Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases, pages 172-183, Berlin, Heidelberg. Springer Berlin Heidelberg.
Zied Bouraoui, Jose Camacho-Collados, and Steven
Schockaert. 2020. Inducing relational knowledge
from BERT. In Proceedings of the Thirty-Fourth
AAAI Conference on Artificial Intelligence.
Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mix-
Text: Linguistically-informed interpolation of hid-
den space for semi-supervised text classification. In
Proceedings of the 58th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 2147-
2157, Online. Association for Computational Lin-
guistics.
Alexandra Chronopoulou, Christos Baziotis, and
Alexandros Potamianos. 2019. An embarrassingly
simple approach for transfer learning from pre-
trained language models. In Proceedings of the
2019 Conference of the North American Chapter of
the Association for Computational Linguistics: Hu-
man Language Technologies, Volume 1 (Long and
Short Papers), pages 2089-2095, Minneapolis, Min-
nesota. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal,
Vishrav Chaudhary, Guillaume Wenzek, Francisco
Guzmán, Edouard Grave, Myle Ott, Luke Zettle-
moyer, and Veselin Stoyanov. 2020. Unsupervised
cross-lingual representation learning at scale. In
Proceedings of the 58th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 8440-
8451, Online. Association for Computational Lin-
guistics.
Joe Davison, Joshua Feldman, and Alexander Rush.
2019. Commonsense knowledge mining from pre-
trained models. In Proceedings of the 2019 Con-
ference on Empirical Methods in Natural Language
Processing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 1173-1178, Hong Kong, China. As-
sociation for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
standing. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4171-4186, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos.
2019. Investigating meta-learning algorithms for
low-resource natural language understanding tasks.
In Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 1192-
1197, Hong Kong, China. Association for Computa-
tional Linguistics.
Allyson Ettinger. 2020. What BERT is not: Lessons
from a new suite of psycholinguistic diagnostics for
language models. Transactions of the Association
for Computational Linguistics, 8:34-48.
Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li,
and Kyunghyun Cho. 2018. Meta-learning for low-
resource neural machine translation. In Proceed-
ings of the 2018 Conference on Empirical Methods
in Natural Language Processing, pages 3622-3631,
Brussels, Belgium. Association for Computational
Linguistics.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q.
Weinberger. 2017. On calibration of modern neu-
ral networks. In Proceedings of the 34th Interna-
tional Conference on Machine Learning -Volume 70,
ICML'17, page 1321-1330. JMLR.org.
Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio
Ranzato. 2020. Revisiting self-training for neural
sequence generation. In International Conference
on Learning Representations.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015.
Distilling the knowledge in a neural network. Com-
puting Research Repository, arXiv:1503.02531.
Vu Cong Duy Hoang, Philipp Koehn, Gholamreza
Haffari, and Trevor Cohn. 2018. Iterative back-
translation for neural machine translation. In Pro-
ceedings of the 2nd Workshop on Neural Machine
Translation and Generation, pages 18-24, Mel-
bourne, Australia. Association for Computational
Linguistics.
Jeremy Howard and Sebastian Ruder. 2018. Universal
language model fine-tuning for text classification. In
Proceedings of the 56th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics.
Roi Reichart and Ari Rappoport. 2007. Self-training
for enhancement and domain adaptation of statisti-
cal parsers trained on small datasets. In Proceed-
ings of the 45th Annual Meeting of the Association of
Computational Linguistics, pages 616-623, Prague,
Czech Republic. Association for Computational Lin-
guistics.
Bernardino Romera-Paredes and Philip Torr. 2015. An
embarrassingly simple approach to zero-shot learn-
ing. In International Conference on Machine Learn-
ing, pages 2152-2161.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavat-
ula, and Yejin Choi. 2020. WinoGrande: An adver-
sarial winograd schema challenge at scale. In Pro-
ceedings of the Thirty-Fourth AAAI Conference on
Artificial Intelligence.
Timo Schick and Hinrich Schütze. 2020. Rare words:
A major problem for contextualized embeddings and
how to fix it by attentive mimicking. In Proceedings
of the Thirty-Fourth AAAI Conference on Artificial
Intelligence.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Improving neural machine translation mod-
els with monolingual data. In Proceedings of the
54th Annual Meeting of the Association for Compu-
tational Linguistics (Volume 1: Long Papers), pages
86-96, Berlin, Germany. Association for Computa-
tional Linguistics.
Shashank Srivastava, Igor Labutov, and Tom Mitchell.
2018. Zero-shot learning of classifiers from natu-
ral language quantification. In Proceedings of the
56th Annual Meeting of the Association for Com-
putational Linguistics (Volume 1: Long Papers),
pages 306-316, Melbourne, Australia. Association
for Computational Linguistics.
Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang.
2019. How to fine-tune BERT for text classification?
In Chinese Computational Linguistics, pages 194-
206, Cham. Springer International Publishing.
Training a single PET classifier for 250 steps on one GPU took approximately 30 minutes; training for 1000 steps with auxiliary language modeling took 60 minutes. Depending on the task, labeling examples from D took 15-30 minutes per model. Training the final classifier C for 5000 steps on the soft-labeled dataset T C took 2 hours on average.

B.4 Comparison with SotA

For comparing PET to UDA (Xie et al., 2020) and MixText (Chen et al., 2020), we reduce the number of unlabeled examples by half to speed up the required backtranslation step. We use the backtranslation script provided by Chen et al. (2020) with their recommended hyperparameter values and use both Russian and German as intermediate languages.
Table 5: Hyperparameters for training individual PET models without auxiliary language modeling (PET−LM) and with language modeling (PET), the final PET classifier (C), regular supervised training (sup.) and in-domain pretraining (In-Dom. PT). Whenever different values are used for the English datasets (En) and x-stance (Xs), both values are given separated by a slash. (*): PET-specific hyperparameters.

Table 6: Download links and number of test examples for all datasets.
AG's News: http://goo.gl/JyCnZq (7600 test examples)
MNLI (m / mm): https://cims.nyu.edu/~sbowman/multinli/ (10000 / 10000 test examples)
X-Stance (De / Fr / It): https://github.com/ZurichNLP/xstance (3479 / 1284 / 1173 test examples)
Yahoo! Answers: http://goo.gl/JyCnZq (60000 test examples)
Yelp Review Full: http://goo.gl/JyCnZq
For example, they convert inputs (a, b) for recognizing textual entailment (RTE) to "rte sentence1: a sentence2: b", and the PLM is asked to predict strings like "not entailment".
One of the three supervised MNLI runs for |T | = 1000 underfitted the training data and performed extremely poorly. This run is excluded in the reported score (73.1/74.8).
We tried values of k and imax in {250, 500, 1000} and {5, 10, 20}, respectively, but found the resulting verbalizers to be almost identical.
Acknowledgments
This work was funded by the European Research Council (ERC #740516). We would like to thank the anonymous reviewers for their helpful comments.

Given a set of patterns P1, . . . , Pn, manually finding a verbalization v(l) for each l ∈ L that represents the meaning of l well and corresponds to a single token in V can be difficult. We therefore devise automatic verbalizer search (AVS), a procedure that automatically finds suitable verbalizers given a training set T and a language model M.

Assuming we already have a PVP p = (P, v), we can easily check whether some token t ∈ V is a good verbalization of l ∈ L: the probability M assigns to t given P(x) should be high only for those examples (x, y) ∈ T where y = l. We thus define the score of t for l given p based on the training examples with label l. While this allows us to easily compute the best verbalization for l, it requires us to already know verbalizations v(l') for all other labels l'. AVS solves this problem as follows: We first assign random verbalizations to all labels and then repeatedly recompute the best verbalization for each label. As we do not want the resulting verbalizer to depend strongly on the initial random assignment, we simply consider multiple such assignments. Specifically, we define an initial probability distribution ρ0 where for all t ∈ V, l ∈ L, ρ0(t | l) = 1/|V| is the probability of choosing t as verbalization for l. For each l ∈ L, we then sample k verbalizers v1, . . . , vk using ρ0 to compute the score s^k_l(t) for all t ∈ V. 7 These scores enable us to define a probability distribution ρ1 that more closely reflects a word's suitability as a verbalizer for a given label: ρ1(t | l) = max(s^k_l(t), ε) / Z, where Z = Σ_{t'∈V} max(s^k_l(t'), ε) and ε ≥ 0 ensures that ρ1 is a proper probability distribution. We repeat this process to obtain a sequence of probability distributions ρ1, . . . , ρ_imax. Finally, we choose the m ∈ N most likely tokens according to ρ_imax(t | l) as verbalizers for each l. During training and inference, we compute the unnormalized score s_p(y | x) for each label by averaging over its m verbalizers.

7 Note that the score s^k_l(t) jointly considers all patterns; in preliminary experiments, we found this to result in more robust verbalizers.

We analyze the performance of AVS for all tasks with |T| = 50 training examples and set k = 250, ε = 10−3, imax = 5 and m = 10. 8 To speed up the search, we additionally restrict our search space to tokens t ∈ V that contain at least two alphabetic characters. Of these tokens, we only keep the 10 000 most frequent ones in D.

Results are shown in Table 7. As can be seen, carefully handcrafted verbalizers perform much better than AVS; however, PET with AVS still considerably outperforms regular supervised training while eliminating the challenge of manually finding suitable verbalizers. Table 8 shows the most probable verbalizers found using AVS for the Yelp dataset. While most verbalizers for this dataset intuitively make sense, we found AVS to struggle with finding good verbalizers for three out of ten labels in the Yahoo dataset and for all MNLI labels.
Snowball: Extracting relations from large plain-text collections. Eugene Agichtein, Luis Gravano, 10.1145/336597.336644Proceedings of the Fifth ACM Conference on Digital Libraries, DL '00. the Fifth ACM Conference on Digital Libraries, DL '00New York, NY, USAAssociation for Computing MachineryEugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Dig- ital Libraries, DL '00, page 85-94, New York, NY, USA. Association for Computing Machinery.
Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant, arXiv:1912.13283oLMpics -on what language model pre-training captures. Computing Research Repository. Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics -on what lan- guage model pre-training captures. Computing Re- search Repository, arXiv:1912.13283.
A simple method for commonsense reasoning. H Trieu, Quoc V Trinh, Le, arXiv:1806.02847Computing Research Repository. Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. Computing Research Repository, arXiv:1806.02847.
X-stance: A multilingual multi-target dataset for stance detection. Jannis Vamvas, Rico Sennrich, arXiv:2003.08385Computing Research Repository. Jannis Vamvas and Rico Sennrich. 2020. X-stance: A multilingual multi-target dataset for stance detection. Computing Research Repository, arXiv:2003.08385.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems. Curran Associates, Inc30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.
Using semantic similarity for multi-label zero-shot classification of text documents. Jinseok Sappadla Prateek Veeranna, Eneldo Nam, Johannes Loza Mencıa, Fürnkranz, Proceeding of European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. eeding of European Symposium on Artificial Neural Networks, Computational Intelligence and Machine LearningBruges, BelgiumElsevierSappadla Prateek Veeranna, Jinseok Nam, Eneldo Loza Mencıa, and Johannes Fürnkranz. 2016. Using se- mantic similarity for multi-label zero-shot classifica- tion of text documents. In Proceeding of European Symposium on Artificial Neural Networks, Compu- tational Intelligence and Machine Learning. Bruges, Belgium: Elsevier, pages 423-428.
Does it make sense? And why? A pilot study for sense making and explanation. Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, Tian Gao, 10.18653/v1/P19-1393Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsCunxiang Wang, Shuailong Liang, Yue Zhang, Xiao- nan Li, and Tian Gao. 2019. Does it make sense? And why? A pilot study for sense making and ex- planation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4020-4026, Florence, Italy. Association for Computational Linguistics.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Long PapersAdina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Drame, 10.18653/v1/2020.emnlp-demos.6Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsQuentin Lhoest, and Alexander RushOnline. Association for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.
Unsupervised data augmentation for consistency training. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, V Quoc, Le, Advances in Neural Information Processing Systems. Curran Associates, Inc33Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Lu- ong, and Quoc V. Le. 2020. Unsupervised data aug- mentation for consistency training. In Advances in Neural Information Processing Systems, volume 33. Curran Associates, Inc.
Xlnet: Generalized autoregressive pretraining for language understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, R Russ, Quoc V Salakhutdinov, Le, Advances in Neural Information Processing Systems. Curran Associates, Inc32Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.
Unsupervised word sense disambiguation rivaling supervised methods. David Yarowsky, 10.3115/981658.98168433rd Annual Meeting of the Association for Computational Linguistics. Cambridge, Massachusetts, USAAssociation for Computational LinguisticsDavid Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computa- tional Linguistics, pages 189-196, Cambridge, Mas- sachusetts, USA. Association for Computational Linguistics.
Zero-shot text classification via reinforced self-training. Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, Suhang Zheng, Feng Wang, Jun Zhang, Huajun Chen, 10.18653/v1/2020.acl-main.272Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsZhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot text clas- sification via reinforced self-training. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014-3024, Online. Association for Computational Linguistics.
Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. Wenpeng Yin, Jamaal Hay, Dan Roth, 10.18653/v1/D19-1404Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsWenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China. Association for Computational Linguistics.
Diverse few-shot text classification with multiple metrics. Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, Bowen Zhou, 10.18653/v1/N18-1109Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLong Papers; New Orleans, LouisianaAssociation for Computational Linguistics1Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text clas- sification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1206-1215, New Orleans, Louisiana. Association for Computational Linguistics.
Character-level convolutional networks for text classification. Xiang Zhang, Junbo Zhao, Yann Lecun, Advances in Neural Information Processing Systems. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. GarnettCurran Associates, Inc28Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649-657. Curran Associates, Inc.
| [
"https://github.com/GT-SALT/MixText",
"https://github.com/SanghunYun/UDA_",
"https://github.com/ZurichNLP/xstance"
] |
[
"Guided Generation of Cause and Effect",
"Guided Generation of Cause and Effect"
] | [
"Zhongyang Li \nHarbin Institute of Technology\nChina\n\nJohns Hopkins University\nUSA\n",
"Xiao Ding \nHarbin Institute of Technology\nChina\n",
"Ting Liu tliu@ir.hit.edu.cn \nHarbin Institute of Technology\nChina\n",
"J Edward Hu \nJohns Hopkins University\nUSA\n",
"Benjamin Van Durme vandurme@jhu.edu \nJohns Hopkins University\nUSA\n"
] | [
"Harbin Institute of Technology\nChina",
"Johns Hopkins University\nUSA",
"Harbin Institute of Technology\nChina",
"Harbin Institute of Technology\nChina",
"Johns Hopkins University\nUSA",
"Johns Hopkins University\nUSA"
] | [] | We present a conditional text generation framework that posits sentential expressions of possible causes and effects. This framework depends on two novel resources we develop in the course of this work: a very large-scale collection of English sentences expressing causal patterns (CausalBank); and a refinement over previous work on constructing large lexical causal knowledge graphs (Cause Effect Graph). Further, we extend prior work in lexically-constrained decoding to support disjunctive positive constraints. Human assessment confirms that our approach gives high-quality and diverse outputs. Finally, we use CausalBank to perform continued training of an encoder supporting a recent state-of-the-art model for causal reasoning, leading to a 3-point improvement on the COPA challenge set, with no change in model architecture. * Performed while the first author was visiting Johns Hopkins University. | 10.24963/ijcai.2020/502 | [
"https://arxiv.org/pdf/2107.09846v1.pdf"
] | 220,485,081 | 2107.09846 | 4520bf71096790a09db064df0391d32e5479b3b5 |
Guided Generation of Cause and Effect
Zhongyang Li
Harbin Institute of Technology
China
Johns Hopkins University
USA
Xiao Ding
Harbin Institute of Technology
China
Ting Liu tliu@ir.hit.edu.cn
Harbin Institute of Technology
China
J Edward Hu
Johns Hopkins University
USA
Benjamin Van Durme vandurme@jhu.edu
Johns Hopkins University
USA
Guided Generation of Cause and Effect
We present a conditional text generation framework that posits sentential expressions of possible causes and effects. This framework depends on two novel resources we develop in the course of this work: a very large-scale collection of English sentences expressing causal patterns (CausalBank); and a refinement over previous work on constructing large lexical causal knowledge graphs (Cause Effect Graph). Further, we extend prior work in lexically-constrained decoding to support disjunctive positive constraints. Human assessment confirms that our approach gives high-quality and diverse outputs. Finally, we use CausalBank to perform continued training of an encoder supporting a recent state-of-the-art model for causal reasoning, leading to a 3-point improvement on the COPA challenge set, with no change in model architecture. * Performed while the first author was visiting Johns Hopkins University.
Introduction
Causal knowledge acquisition is crucial for various Artificial Intelligence tasks, such as causal event graph construction, reading comprehension and future event prediction. We propose an approach for acquiring causal knowledge through generating multiple plausible causes (reasons, explanations) and effects (results, consequences) for a provided input sentence. As exemplified in Figure 1, we develop two conditional decoders, one per causal direction. To train such models we mine a large-scale corpus of causal expressions from open domain web text, at a scale greatly surpassing prior work. Our goal is to generate multiple distinct possible causes and effects, where each generated sentence is not intended to be a paraphrase of other candidates. To support this output diversity when conditioned on a single shared input sentence, we turn to lexically-constrained decoding [Post and Vilar, 2018; Hu et al., 2019a], which allows for efficiently forcing a model to produce output containing one or more provided phrases. Our constraints are derived from a resource we construct for this work, replicating a prior effort in lexicalized causal knowledge graph construction [Luo et al., 2016]. This graph captures causal relations as a mapping across lexical types, lemma-to-lemma, but our goal is to generate naturalistic sentences with appropriately inflected morphology: we therefore develop an approach for disjunctive positive lexical constraints, where a decoder's output must contain one of a set of provided words or phrases. In our case, these are morphological variants of the same base lemma, but our approach should benefit other applications of lexically-constrained decoding.

Figure 1: Possible causes and effects generated by our model, conditioned on the input sentence "babies cry". Tokens in blue are constraint keywords derived from our Cause Effect Graph, which are forced to be included in the outputs by constrained decoding. Generated causes include "because they are hungry", "because they are lonely", "because they are in pain", "because they want to be loved", and "because they want to go home"; generated effects include "will lead to sleep problems", "can lead to depression", "can lead to a bad marriage", "can lead to bad food habits", and "result in tears to the eyes".
While there is recent work in generating story endings conditioned on a context [Guan et al., 2019;Wang and Wan, 2019;Luo et al., 2019], such work does not require generated sentences to be strictly causes or effects. The ability to propose explanations for an input sentence by generating multiple causes and effects complements this emerging line of research. To our knowledge, this is the first work to consider open-ended generation of causal sentences at a large scale.
We evaluate through carefully designed human evaluation by comparing outputs from various baselines and our proposed model, finding that our model's outputs are preferred. We further demonstrate the usefulness of our new resource by taking a recent state-of-the-art causal reasoning system and boosting its results on the COPA test set by 3 points, relying only on continued training of the model's encoder. Our models and resources are made publicly available. 1 In this paper, we make the following contributions: • proposing the task of open causal generation: producing possible causes and effects for any free-form textual event;
• construction of a causal corpus (CausalBank) containing 314 million CE (cause-effect) pairs;
• an extension to lexically-constrained decoding that supports disjunctive positive constraints (DPC);
• human and automatic evaluations illustrating our method can generate high-quality and diverse causes and effects.
Approach
As shown in Figure 2, our proposed approach for open-ended causal generation includes a data collection module (Section 2.1), a Cause Effect Graph (Section 2.2), and two DPC (disjunctive positive constraint) decoding based Transformer encoder-decoder models (Section 2.3).
CausalBank: A Sentential Causal Corpus
Existing causal corpora were not built to support our goal for open-ended causal generation given any free-form textual input: as in neural machine translation (NMT), we need a large training set with millions of examples. Thus we harvest a large causal dataset from the preprocessed large-scale English Common Crawl corpus (5.14 TB) [Buck et al., 2014]. The key guidelines of our dataset are as follows: 1) The causal relation is explicitly expressed in text with a causal pattern e.g. 'because'; 2) The 'cause' and 'effect' arguments must both appear in the same sentence; 3) The 'cause' and 'effect' arguments can be of any length of contiguous text without overlaps between them; 4) Negative causal relations are filtered. We do not rely on a supervised text extractor to pick out specific sub-spans of a sentence that represent a cause-effect pairing between propositions. 2 We instead curate a series of patterns from previous studies [Mirza et al., 2014; Luo et al., 2016; Girju, 2003]. These patterns can be classified into two categories, according to how they are mostly used in language to convey a causal relation: 1. EPC (effect-pattern-cause) category: I am very sad BECAUSE I lost my phone; 2. CPE (cause-pattern-effect) category: The earthquake RESULTED IN many deaths. For EPC patterns, we simply take the text on the left of the pattern as effect, and take the text on the right of the pattern as cause. The case is reversed for CPE category patterns. These patterns (shown in Table 1) were applied to the Common Crawl corpus, followed by post-filtering: duplicate removal; filtering explicitly negated relations and verbs in passive voice; and restricting the cause and effect to each contain at least two tokens. This results in our CausalBank corpus, denoted here as B, with 133 M EPC + 181 M CPE = 314 M (c, e) pairs in total (c refers to cause and e refers to effect). We manually evaluated 1,000 randomly sampled sentences from the corpus and found that 95% conveyed a meaningful causal relation.

Cause Effect Graph: A Lexical Causal KB

We replicate a prior approach to lexicalized causal knowledge graph construction [Luo et al., 2016], mining lexical causal evidence from the same Common Crawl corpus [Buck et al., 2014]. Given a sentence such as "The storm caused a tremendous amount of damage on the landing beaches.", this approach will harvest the lexical pairs (storm, tremendous), (storm, amount), (storm, damage), (storm, landing), and (storm, beach) as causal evidence. Stop words are removed and only pairs involving nouns, verbs, adjectives and adverbs are retained. The extracted lexical pairs form a directed network of posited causal relations, where nodes in the network are lemmatized terms, and a directed edge between two terms indicates a causal relation, weighted by cooccurrence frequency. For comparison, Figure 3 gives a similar illustration as Figure 1 in Luo et al. [2016]. We refer to our artifact as a Cause Effect Graph (CEG).
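As a concrete illustration of the harvesting step, the sketch below mines frequency-weighted lemma pairs from a single cause-pattern-effect sentence. It is a simplification, not the pipeline used to build the full CEG: only one hard-coded pattern is shown, and spaCy is assumed here for lemmatization and POS filtering.

```python
# Simplified sketch of harvesting lemma-level (cause, effect) evidence from
# a sentence containing an explicit causal pattern, following the behavior
# described above (stop words removed; only N/V/Adj/Adv lemmas kept).
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def content_lemmas(text):
    return [t.lemma_.lower() for t in nlp(text)
            if t.pos_ in CONTENT_POS and not t.is_stop]

def harvest(sentence, pattern=" caused "):
    """Split a cause-pattern-effect sentence and count directed lemma pairs."""
    edges = Counter()
    if pattern not in sentence:
        return edges
    cause_side, effect_side = sentence.split(pattern, 1)
    for c in content_lemmas(cause_side):
        for e in content_lemmas(effect_side):
            edges[(c, e)] += 1      # frequency-weighted edge cause -> effect
    return edges

print(harvest("The storm caused a tremendous amount of damage on the landing beaches."))
```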
Guided Generation
We use Sockeye [Hieber et al., 2017] to train Transformer-based [Vaswani et al., 2017] conditional generation models, one for causes, one for effects. Sockeye supports decoding via N-best (each step greedily chooses the top best N words in beam search based on the generated tokens) and random sampling (each step randomly sampling N words from the softmax distribution based on the generated tokens). The training data (CausalBank) is processed through Byte Pair Encoding [Sennrich et al., 2016] to reduce vocabulary size.
Disjunctive Positive Constraints Decoding
Unlike in NMT, our intended outputs for a given input are diverse in meaning: we wish to generate multiple semantically distinct possible causes or effects. We induce diversity through hard lexical requirements during decoding, using causal keywords from our CEG as positive constraints on the output. A positive constraint forces the decoder to produce a sequence of tokens that contain the constrained sequence, which is achieved through a constrained beam search proposed by Post and Vilar [2018] and made efficient by Hu et al.
[2019a]. Unfortunately, those prior works are restricted to conjunctive positive constraints: all items provided to the decoder must be present in the output. This is problematic in our case: our CEG maps lemmas to lemmas, and thus lemmas will form our constraints, but at generation time we do not require specific morphological inflections of our constrained terms. We wish not to constrain the decoder to a particular lemma, but to allow it to choose the best morphological form as appropriate in its context. For example, when generating a cause for "I brought an umbrella" with rain as the cause keyword, some valid cause sentences, e.g., "It rained" or "It was a rainy day.", would not be permitted based on prior work. One may circumvent this limitation by enumerating all morphological variants of a term, then apply each in turn as a positive constraint in distinct decoding passes. However, this approach does not scale, as its run-time grows exponentially in the number of initial constraints, each with multiple morphological variants.
Here we propose a solution of disjunctive positive constraint decoding, where each constraint is represented by a set of token sequences, and the decoder needs to include only one sequence from each set of constraints in the final output. We modify the algorithm from Hu et al.
[2019a] to allow the decoder to explore the disjunctively constrained space in a single forward sequence, without significant computational overhead.
Algorithm 1: Decoding with Disjunctive Positive Constraints. We consider the generation of one sentence with a beam size of 1 for simplicity. Note that while a beam size of 1 reduces the constrained beam search, the handling of DPC is not affected.

input: a set of disjunctive constraint sets D = {D_0, D_1, ..., D_n}; each s ∈ D_i is a token sequence s = (w_s(0), w_s(1), ..., w_s(k_s)), where w_s(j) is the j-th token in s, one of the sequences of the disjunctive constraint set D_i
output: a token sequence y = (y_0, y_1, ..., y_T)
  trie ← BuildTrie({s : s ∈ D_0 ∪ ... ∪ D_n})
  i ← 0
  while y_{i−1} != EOS and i < T do
    y_i ← Search((y_0, ..., y_{i−1}), trie)
    if y_i finishes a sequence s ∈ D_j then
      for s' in D_j do
        trie.Prune(s')
      end for
      Remove D_j from D
    end if
    i ← i + 1
  end while
  return (y_0, y_1, ..., y_i)

In that work, constraints are represented in a trie, where each constraint is represented by a path from the root to a leaf. One or more state pointers are used to track how many tokens have been generated for each constraint, and tokens that induce more progress are prioritized in a modified beam search proposed by Post and Vilar [2018]. When a constraint is satisfied, the algorithm prunes the path representing that constraint. The distinguishing property of a disjunctive constraint is that once a sequence in a disjunctive set is satisfied, others in the set are also removed and no longer constraints.
For decoding with disjunctive constraints, we represent all constrained sequences, whether they are from the same disjunctive set or not, on a single trie. When a sequence is generated, we prune all sequences in the set as opposed to just the generated sequence. This modification gives us an efficient algorithm for applying disjunctive constraints, as illustrated in Algorithm 1 and Figure 4. While here we use morphological variants in our disjunctive set, our algorithm is broadly applicable for constraining on a set of synonyms or different subword segmentations of the same sequence.
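To make the pruning behavior concrete, the following is a small illustrative sketch of tracking disjunctive constraints with a shared trie under greedy decoding. It is not the Sockeye implementation used in this work; the class and method names are invented for illustration.

```python
# Minimal sketch of disjunctive-positive-constraint bookkeeping: all
# constraint sequences share one trie, and completing any sequence of a
# disjunctive set removes the whole set from the remaining constraints.
class DisjunctiveConstraintState:
    def __init__(self, disjunctive_sets):
        # disjunctive_sets: list of lists of token sequences.
        self.root = {}                                  # shared trie
        self.active_sets = set(range(len(disjunctive_sets)))
        for set_idx, sequences in enumerate(disjunctive_sets):
            for seq in sequences:
                node = self.root
                for tok in seq:
                    node = node.setdefault(tok, {})
                node.setdefault("$sets", set()).add(set_idx)
        self.ptr = self.root                            # progress pointer

    def advance(self, token):
        """Advance with a generated token; prune a whole set when satisfied."""
        node = self.ptr.get(token) or self.root.get(token)
        if node is None:
            self.ptr = self.root
            return
        finished = node.get("$sets", set()) & self.active_sets
        if finished:
            self.active_sets -= finished                # drop satisfied sets
            self.ptr = self.root
        else:
            self.ptr = node

    def unmet(self):
        return len(self.active_sets)

if __name__ == "__main__":
    # One disjunctive set: morphological variants of "rain".
    state = DisjunctiveConstraintState([[["rain"], ["rained"], ["rainy"]]])
    for tok in ["it", "rained", "all", "day"]:
        state.advance(tok)
    print(state.unmet())   # 0: "rained" satisfied the whole set
```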
Table 2: Dev-set results: perplexity (Per), word accuracy (Acc (%)).

Outputs Reranking While DPC decoding supports an arbitrary number of disjunctive constraints in one beam search process, in practice only a few preferred constraints under the model will dominate any N-best output. To encourage diversity we first select a set of candidate constraint tokens from CEG, generate outputs per constraint, then merge and rerank the results. For example, if generating causes for the input sentence "babies cry", we lemmatize each word in the sentence (baby and cry). These terms map to a set of lemmas via CEG, each associated with an observed frequency; we take the K most frequent (highest weighted) such candidates C = {c_1, c_2, ..., c_K}. For each token c in C, such as 'love', we get the set of its morphological variants, e.g. {'love', 'loves', 'loved', 'loving'}, via the python package patterns.en, and pass it as a DPC, keeping the top n outputs. In total we derive K * n (K = 300 and n = 5) sentences via beam search decodings. These sentences are ranked by their associated negative log-likelihood scores, and we return the top N.
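The constrain-then-rerank loop can be sketched as below. `decode_with_dpc` stands in for a constrained beam-search decoder (e.g. the DPC-enabled model above) and `variants` for a lemma-to-morphological-variants lookup; both are placeholders, not parts of the released system.

```python
# Illustrative sketch of generating diverse causes: decode once per CEG
# constraint (as a disjunctive set of morphological variants), then merge
# all hypotheses and rerank by negative log-likelihood.
def generate_causes(input_sentence, constraint_lemmas, decode_with_dpc,
                    variants, n_per_constraint=5, top_n=10):
    candidates = []
    for lemma in constraint_lemmas:                # e.g. top-K lemmas from CEG
        disjunctive_set = variants.get(lemma, [lemma])
        # Each call returns up to n hypotheses as (sentence, negative log-likelihood).
        candidates.extend(decode_with_dpc(input_sentence, disjunctive_set,
                                          n_best=n_per_constraint))
    candidates.sort(key=lambda pair: pair[1])      # lower NLL = more likely
    return [sentence for sentence, _ in candidates[:top_n]]
```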
CausalBERT
Previous studies [Phang et al., 2018; Li et al., 2019b] have shown that applying intermediate auxiliary task training to an encoder such as BERT can improve performance on a target task. We designed an intermediate task for BERT using CausalBank B, employing margin loss [Li et al., 2019a; Li et al., 2018a] in the objective function: L(Θ) = Σ_{(c,e)∈B} max(0, γ − f(c, e) + f(c', e')) + (λ/2)||Θ||², where f(c, e) is the score of the true CE pair given by the BERT model, f(c', e') is the score of a corrupted CE pair obtained by replacing c or e with a randomly sampled negative cause or effect from other examples in B, γ > 0 is the margin loss function parameter, which is set to 0.3, Θ is the set of BERT model parameters, and λ is the parameter for L2 regularization, which is set to 0.00001.
By training BERT with this intermediate supervised task, we expect the model to acquire enhanced knowledge about the meaning of a causal relation, and can have better performance on downstream causal inference tasks.
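A minimal sketch of this intermediate task is given below, assuming the Hugging Face Transformers library; it is not the authors' released code, the in-batch negative sampling is a simplification, and weight decay stands in for the explicit L2 term. The margin (0.3) and regularization strength (1e-5) follow the text.

```python
# Sketch of the CausalBERT intermediate task: score (cause, effect) pairs
# with BERT and train with a margin loss against corrupted pairs.
import torch
from torch import nn
from transformers import BertModel, BertTokenizerFast

class CausalPairScorer(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.score = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, enc):
        pooled = self.bert(**enc).pooler_output     # [CLS]-based representation
        return self.score(pooled).squeeze(-1)        # one score per pair

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = CausalPairScorer()
opt = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=1e-5)
margin = 0.3

def step(causes, effects):
    """One training step on true (cause, effect) pairs; negatives pair each
    cause with a rotated (mismatched) effect from the same batch."""
    pos = tok(causes, effects, padding=True, truncation=True, return_tensors="pt")
    neg = tok(causes, effects[1:] + effects[:1],
              padding=True, truncation=True, return_tensors="pt")
    loss = torch.relu(margin - model(pos) + model(neg)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(step(["babies are hungry", "it rained"], ["babies cry", "the road is wet"]))
```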
Evaluation
We evaluate our proposed causal generation approach by both human and automatic metrics, and evaluate CausalBank by applying CausalBERT to COPA, which requires the model to choose the correct cause or effect from two candidates.
Model Selection
We first experiment on a small subset of our CausalBank corpus (CB 10M) - 10 million CE pairs from the causal pattern 'because' - considering different NMT encoder and decoder architectures (LSTM, CNN, Conv-Transformer [Gehring et al., 2017], and Transformer). 4 For the cause generation model, the effect e is used as the source and the cause c as the target, which is reversed in training the effect model. Perplexity (Per) and word accuracy (Acc) are used to evaluate the model's performance. We find that Transformer constantly achieves the best performance (Table 2).
Then we train two versions of Transformer on the whole CausalBank corpus (CB all). The small model's encoder and decoder both have 6 layers, with a hidden size and embedding size of 512. The big model's encoder and decoder have 12 layers and 4 layers, with a hidden size and embedding size of 768, leading to 134M parameters in total. The vocabulary size is 15,000. The training is stopped when the validation loss stagnates for 20,000 batches. For the cause generation model, e and c from only the EPC category (c, e) pairs are used as the source and target. For the effect generation model, c and e from only the CPE category (c, e) pairs are used as the source and target. This setting always generates the right part of the sentence conditioned on the left part, which we find to give more reasonable outputs than the above architecture exploration experiments. The bottom of Table 2 shows the large Transformer model constantly achieves the best performance on the development set, which contains 5,000 CE pairs.
Evaluating Generation
We evaluate the large Transformer model via human assessment, on two kinds of test sets. The first kind of test sets (TrainSub) contains 100 randomly sampled input examples from the model's training data. The second kind of test sets (COPA Dev) contains 100 randomly sampled examples from the development set of the COPA [Roemmele et al., 2011] dataset, which are manually created gold sentences and never seen during the model's training stage.
Google T5-11B [Raffel et al., 2019] 94.8 2019] which can generate continuations conditioned on a start sentence (e.g. "babies cry because"), random sampling based decoding, N-best decoding, DPC decoding with constraint tokens from CEG (CN-cons), and DPC decoding with gold answer as constraint tokens (Gold-cons). Four graduate students from the NLP field were used in annotation. Each was asked to give a score from {0, 1, 2} for the generated {input, cause/effect} pair, where the guidelines are (take cause generation for example): if the generated answer does not make sense or can never be a reasonable cause, reason or explanation for the input event, give a score of 0;
if the generated answer has grammatical errors but can be a reasonable cause, reason or explanation for the input event under some rare conditions (or beyond commonsense), give a score of 1; if the generated answer is a fluent sentence and can be a reasonable cause, reason or explanation with high probability, give a score of 2. Each pair was labeled by two annotators, and we average the judgments over two annotators per pair. The cohen's kappa score is 0.53. Table 3 shows the human evaluation results. Three metrics are adopted: Precision at 1 P@1 (an average score of 1.5 or above is seen as a valid causal answer); P@3; and average human score for each evaluated pair (H). For the TrainSub test set, the KNN method shows strong performance, especially for P@1 and the human scores. However, KNN performs worse for P@3, due to the absence of many possible answers for the same input. Meanwhile, our two versions of DPC decoding strategies (CN-cons, Gold-Cons) also show relatively better performance compared to other generation methods (GPT-2, Random and N-best decoding). KNN performs poorly on the COPA dev set, because most of the inputs never appear in the training data. However, CN-Cons and Gold-Cons can still achieve good performance.
Lexical Diversity We used a modified BLEU score to evaluate lexical diversity (Div in Table 3), where a lower score means a greater lexical diversity. Specifically, we calculate the associated BLEU-1 score between the gold answers and the generated top 3 outputs without brevity penalty. This modification ensures that we don't reward shorter outputs. In most cases, CN-Cons gets the lowest Div scores, showing that our DPC decoding and constraint tokens from CEG together allow us to explore more of the causes and effects space, and generate more diverse outputs. Also we find that all of these BLEU scores are very low, compared with the BLEU scores in previous text generation studies [Hu et al., 2019b; Vaswani et al., 2017]. This is because our generation task is open-ended (as illustrated in Figure 1).

Evaluating CausalBank

Table 4 shows our CausalBERT results on COPA test. Compared with prior strong knowledge-driven baseline methods, a BERT-base model trained with a margin-based loss [Li et al., 2019a] achieved good performance. Following the experimental settings of Li et al. [2019a], when training the BERT-base model with additional CE pairs from CausalBank, we get an improvement of 3.2%, from 75.4% to 78.6%, showing that our corpus successfully augments BERT-base to make it better for causal inference, which is a sign the corpus contains useful causal knowledge. We find that the number of CE pairs in the intermediate task matters: performance first improves and then decreases with more training data added. 5 We get the best performance of 78.6% with 40K training CE pairs. Though our result still has a big gap from the current SOTA performance on COPA (94.8% from the largest Google T5-11B model), the intent of our experiment is just to illustrate that the only difference was in altering the pre-training with CausalBank. One could possibly get a SOTA model based on our corpus and the Google T5 model, if publicly available.
Related Work
Conditional Text Generation Such efforts cover a large body of work, including machine translation, response generation and paraphrase generation. Most related is conditional story generation [Guan et al., 2019;Wang and Wan, 2019;Luo et al., 2019;Li et al., 2018b], which aims to generate story continuations based on a given context. These works do not require generated sentences to be strictly causes or effects.
For causal generation, Rashkin et al. [2018] aimed to generate the likely intents and reactions of the event's participants, given a short free-form textual event. Sap et al.
[2019] trained a multi-task model for fine-grained kinds of If-Then commonsense reasoning. However, the causal semantics considered in their work are restricted to a narrow space, and their models are trained on no more than one million examples. Further, their resource was based-on crowdsourcing, which carries risks of human bias [Rudinger et al., 2017;Poliak et al., 2018]. We harvest a significantly larger, open coverage causal corpus, 6 related in approach to DisSent [Nie et al., 2019] but larger, focused on causality, and aimed primarily at generation rather than sentence representation learning.
Of various efforts in guided generation [Ammanabrolu et al., 2019; Tang et al., 2019; Clark et al., 2018; Hu et al., 2019b], lexically-constrained decoding [Hokamp and Liu, 2017] is a modification of beam search originating in neural machine translation which allows the user to specify tokens that must (or must not) appear in the decoder's output. Post and Vilar [2018] proposed a variant of lexically-constrained decoding that reduced complexity from linear to constant-time, which was made more efficient by Hu et al. [2019a]. We introduce an extension to lexically-constrained decoding that supports disjunctive positive constraints for multiple optional constraint keywords.

Table 5 (excerpt; sentential causal resources, # CE pairs): TCR [Ning et al., 2018] 172; SemEval-2007 Task 4 [Girju et al., 2007] 220; Causal-TimeBank [Mirza et al., 2014] 318; CaTeRS [Mostafazadeh et al., 2016] 488; EventCausalityData [Do et al., 2011] 580; RED [O'Gorman et al., 2016] 1,147; SemEval-2010 Task 8 [Hendrickx et al., 2009] 1,331; BECauSE 2.0 [Dunietz et al., 2017b] 1,803; EventStoryLine [Caselli and Vossen, 2017] 5,519; PDTB 2.0 [Prasad et al., 2008] 8,…
Sentential Causal Resources Existing causal corpora differ in their annotation guidelines and how they are constructed: (1) whether they consider only explicit or also implicit causal relations; (2) whether they consider only intra-sentence relations or if relations can cross sentences; (3) whether the annotation unit is word level or sentence level; and (4) whether the corpus is constructed automatically or by human effort. Ours is concerned with explicit only relations, within a single sentence, relating one part of a sentence to another, and employs constructed patterns but not sentence-level human annotation.
Already mentioned are recent crowdsourcing efforts [Rashkin et al., 2018; Sap et al., 2019]. More related are PDTB [Prasad et al., 2008] and BECauSE [Dunietz et al., 2017b], but where our resource goal is a much larger corpus, for the purpose of training a neural text generation model. Most related would be the extractive approach of DisSent [Nie et al., 2019], but where we focus specifically on causality, and derive a much larger corpus. [Bethard and Martin, 2008] tagged a small corpus of event pairs conjoined with "and" as causal or not causal. CaTeRS [Mostafazadeh et al., 2016] included causal relations from a commonsense reasoning standpoint. Richer Event Description [O'Gorman et al., 2016] integrates real-world temporal and causal relations between events into a unified framework. Table 5 contrasts the size of the causal portion of prior resources with our own.

Lexical Causal Resources Lexical semantic resources may encode causal properties on verbs (e.g., [Schuler, 2005; Bonial et al., 2014]) and prepositions (e.g., [Schneider et al., 2015]). Force dynamics theory [Talmy, 1988] from cognitive psychology posits three primary kinds of causal semantics [Wolff, 2007] - CAUSE, ENABLE and PREVENT - which were lexicalized as causal verbs [Wolff and Song, 2003]. The annotation scheme of Dunietz et al.
[2017b] distinguishes three types of causal semantics: CONSEQUENCE, MOTIVATION, and PURPOSE. In PDTB 2.0 [Prasad et al., 2008], "CONTINGENCY" has two subtypes ("Cause" and "Condition"). FrameNet [Baker, 2014] represents causal relations through a variety of unrelated frames (e.g., CAUSATION and THWARTING) and frame roles (e.g., PURPOSE and EXPLANATION). These efforts motivate our own causal patterns, categorized into: CAUSE (e.g. cause, result in, lead to), EXPLANATION (e.g. because, due to), CONDITION (e.g. if-then, as long as), PURPOSE (e.g. in order to, for the purpose of), and PREVENTION (e.g. stop/prevent-from).

Causal Knowledge Acquisition Causal knowledge acquisition [Radinsky et al., 2012; Radinsky and Horvitz, 2013] is crucial for many AI systems, and it is often acquired via text. Hashimoto et al. [2014] and Kruengkrai et al. [2017] applied supervised learning techniques using a benchmark training data with over 100K human-annotated CE pairs. Dasgupta et al. [2018] explored general causal extraction using 5,000 labelled sentences. Do et al. [2011] is an example of a minimally supervised approach. Recent studies [Dunietz et al., 2017a; Dunietz et al., 2018] explored new supervised approaches on the BECauSE 2.0 [Dunietz et al., 2017b] corpus. Church and Hanks [1990] proposed the use of pointwise mutual information (PMI) for mining patterns via text cooccurrence. Many works have followed this strategy, e.g. [Chambers and Jurafsky, 2008; Riaz and Girju, 2010; Gordon et al., 2011; Do et al., 2011; Luo et al., 2016]. Others have mined causal relations via discourse patterns in the form of 'A led to B', 'if A then B', etc., e.g., [Khoo et al., 2000; Girju, 2003; Zhao et al., 2017]. See Asghar [2016] for a review. Such efforts relate most closely to our Cause Effect Graph component, rather than our overall framework. Our concern is the generation of diverse potential causes and effects as natural language statements.
Conclusion
We investigate open causal generation for free-form textual input, and build a large sentential causal corpus which we used to train a generative model. We introduced a novel extension to lexically-constrained decoding that supports disjunctive positive constraints, where generated output is forced to contain one of a set of candidates. Automatic and human evaluations show that our method can generate high-quality and diverse causes and effects for new inputs.
Figure 2: Our approach for generating plausible causes and effects.

Figure 3: Cause Effect Graph: a lexical causal knowledge base.

Figure 4: Trie states in positive constraint and disjunctive positive constraint, after generating the token 'rained' in beam search.
BERT using CausalBank B, employing margin loss [Li et al., 2019a; Li et al., 2018a] in the objective function: (Θ) = ( , ) ∈B (max(0, − ( , ) + ( , ))) + 2 ||Θ|| 2 , where ( , ) is the score of true CE pair given by BERT model, ( , ) is the score of corrupted CE pair by replacing or with randomly sampled negative cause or effect from other examples in B.
'because' -considering different NMT encoder and decoder architectures (LSTM, CNN, Conv-Transformer [Gehring et
(TrainSub) contains 100 randomly sampled input examples from the model's training data. The second kind of test sets (COPA Dev) contains 100 randomly sampled examples from the development set of COPA [Roemmele et al., 2011] dataset, which are manually created gold sentences and never seen during the model's training stage.
Table 3: Human evaluation results of cause and effect generation.
Table 4: Results on COPA-Test, contrasting prior results to a model by Li et al. built atop BERT-base. This model is improved by 3 points through adoption of CausalBERT.
Table 5: Contrasting size with example prior works: only the causal portions of these corpora are listed. The top are sentential causal corpora, while the bottom are graph-structured causal knowledge bases.
In contrast to PDTB [Prasad et al., 2008] and BECauSE [Dunietz et al., 2017b], our resource goal is a much larger corpus, for the purpose of training a neural text generation model. Most related would be the extractive approach of DisSent [Nie et al., 2019], but we focus specifically on causality and derive a much larger corpus. Bethard and Martin [2008] tagged a small corpus of event pairs conjoined with "and" as causal or not causal. CaTeRS [Mostafazadeh et al., 2016] included causal relations from a commonsense reasoning standpoint. Richer Event Description [O'Gorman et al., 2016] integrates event coreference with temporal, causal, and bridging annotation.
We found poor annotator agreement on span boundaries in an initial investigation on crowdsourcing data for such a system; we intend to return to this in future work, investigating improvements to our results via trained extraction models for corpus pre-processing.
89.1M in contrast to 13.3M, with relations with a frequency of 5 or lower removed.
Each model's encoder and decoder use the same architecture, e.g. both are 6-layer LSTMs, with a hidden size and embedding size of 512. All models are trained for 10 epochs. The vocabulary size is 10,000.
This was not observed in related studies [Phang et al., 2018; Li et al., 2019b], where all training examples from the Multi-NLI dataset were used as an intermediate task. Similar behavior was observed in NMT in continued training for domain adaptation [Thompson et al., 2019]. We believe ours to be a similar setting, where the "in-domain" causal data overwhelms the benefits of pretraining; adapting strategies from Thompson et al. is an avenue for future work.
While we avoid the pitfalls of elicitation, we acknowledge that, like any corpus-extracted resource, ours may suffer from reporting bias [Gordon and Van Durme, 2013]: some types of causes or effects may be known to humans but rarely if ever explicitly stated.
Acknowledgements
References
[Mirza et al., 2014] P. Mirza, R. Sprugnoli, et al. Annotating causality in the TempEval-3 corpus. In CAtoCL, 2014.
[Mostafazadeh et al., 2016] N. Mostafazadeh, A. Grealish, et al. CaTeRS: Causal and temporal relation scheme for semantic annotation of event structures. In Events, 2016.
[Nie et al., 2019] A. Nie, E. Bennett, and N. Goodman. DisSent: Learning sentence representations from explicit discourse relations. In ACL, 2019.
[Ning et al., 2018] Q. Ning, Z. Feng, et al. Joint reasoning for temporal and causal relations. In ACL, 2018.
[O'Gorman et al., 2016] T. O'Gorman, K. Wright-Bettner, and M. Palmer. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In CNS, 2016.
[Phang et al., 2018] J. Phang, T. Févry, and S. R. Bowman. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv:1811.01088, 2018.
[Poliak et al., 2018] A. Poliak, J. Naradowsky, A. Haldar, R. Rudinger, and B. Van Durme. Hypothesis only baselines in natural language inference. In LCS, 2018.
[Post and Vilar, 2018] M. Post and D. Vilar. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In NAACL, 2018.
[Prasad et al., 2008] R. Prasad, N. Dinesh, et al. The Penn Discourse Treebank 2.0. In LREC, 2008.
[Radford et al., 2019] A. Radford, J. Wu, et al. Language models are unsupervised multitask learners. 2019.
[Radinsky and Horvitz, 2013] K. Radinsky and E. Horvitz. Mining the web to predict future events. In WSDM, 2013.
[Radinsky et al., 2012] K. Radinsky, S. Davidovich, and S. Markovitch. Learning causality for news events prediction. In WWW, 2012.
[Raffel et al., 2019] C. Raffel, N. Shazeer, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683, 2019.
[Rashkin et al., 2018] H. Rashkin, M. Sap, E. Allaway, N. A. Smith, and Y. Choi. Event2Mind: Commonsense inference on events, intents, and reactions. In ACL, 2018.
[Riaz and Girju, 2010] M. Riaz and R. Girju. Another look at causality: Discovering scenario-specific contingency relationships with no supervision. In Semantic Comp., 2010.
[Roemmele et al., 2011] M. Roemmele, C. A. Bejan, and A. S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI, 2011.
[Rudinger et al., 2017] R. Rudinger, C. May, and B. Van Durme. Social bias in elicited natural language inferences. In Ethics in NLP, 2017.
[Sap et al., 2019] M. Sap, R. Le Bras, et al. ATOMIC: An atlas of machine commonsense for if-then reasoning. In AAAI, 2019.
[Sasaki et al., 2017] S. Sasaki, S. Takase, et al. Handling multiword expressions in causality estimation. 2017.
[Schneider et al., 2015] N. Schneider, V. Srikumar, J. D. Hwang, and M. Palmer. A hierarchy with, of, and for preposition supersenses. In Linguistic Annotation, 2015.
[Schuler, 2005] K. K. Schuler. VerbNet: A broad-coverage, comprehensive verb lexicon. 2005.
[Sennrich et al., 2016] R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In ACL, 2016.
[Speer et al., 2017] R. Speer, J. Chin, and C. Havasi. ConceptNet 5.5: An open multilingual graph of general knowledge. In AAAI, 2017.
[Talmy, 1988] L. Talmy. Force dynamics in language and cognition. Cognitive Science, 1988.
[Tang et al., 2019] J. Tang, T. Zhao, et al. Target-guided open-domain conversation. In ACL, 2019.
[Thompson et al., 2019] B. Thompson, J. Gwinnup, et al. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In NAACL, 2019.
[Vaswani et al., 2017] A. Vaswani, N. Shazeer, et al. Attention is all you need. In NIPS, 2017.
[Wang and Wan, 2019] T. Wang and X. Wan. T-CVAE: Transformer-based conditioned variational autoencoder for story completion. In IJCAI, 2019.
[Webber et al., 2019] B. Webber, R. Prasad, A. Lee, and A. Joshi. The PDTB 3.0 annotation manual. 2019.
[Wolff and Song, 2003] P. Wolff and G. Song. Models of causation and the semantics of causal verbs. CP, 2003.
[Wolff, 2007] P. Wolff. Representing causation. Journal of Experimental Psychology: General, 2007.
[Zhang et al., 2019] H. Zhang, X. Liu, et al. ASER: A large-scale eventuality knowledge graph. arXiv, 2019.
[Zhao et al., 2017] S. Zhao, Q. Wang, S. Massung, et al. Constructing and embedding abstract event causality networks from text snippets. In WSDM, 2017.
| [] |
[
"AdapterHub: A Framework for Adapting Transformers",
"AdapterHub: A Framework for Adapting Transformers"
] | [
"Jonas Pfeiffer \nTechnical University of Darmstadt\n\n",
"Andreas Rücklé \nTechnical University of Darmstadt\n\n",
"Clifton Poth \nTechnical University of Darmstadt\n\n",
"Aishwarya Kamath \nNew York University\n\n",
"Ivan Vulić \nUniversity of Cambridge\n5 DeepMind\n",
"Sebastian Ruder ",
"Kyunghyun Cho \nNew York University\n\n\nCIFAR Associate Fellow\n\n",
"Iryna Gurevych \nTechnical University of Darmstadt\n\n"
] | [
"Technical University of Darmstadt\n",
"Technical University of Darmstadt\n",
"Technical University of Darmstadt\n",
"New York University\n",
"University of Cambridge\n5 DeepMind",
"New York University\n",
"CIFAR Associate Fellow\n",
"Technical University of Darmstadt\n"
] | [
"Proceedings of the 2020 EMNLP (Systems Demonstrations)"
] | The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of hundreds of millions, or even billions of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters-small learnt bottleneck layers inserted within each layer of a pretrained model-ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic "stichingin" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible using minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to sharing of task-specific models, particularly in lowresource scenarios. AdapterHub includes all recent adapter architectures and can be found at AdapterHub.ml. | 10.18653/v1/2020.emnlp-demos.7 | [
"https://www.aclweb.org/anthology/2020.emnlp-demos.7.pdf"
] | 220,525,782 | 2007.07779 | fc791c2c4e4bd9576d5e3b72512ff6eabc59b001 |
AdapterHub: A Framework for Adapting Transformers
November 16-20, 2020
Jonas Pfeiffer
Technical University of Darmstadt
Andreas Rücklé
Technical University of Darmstadt
Clifton Poth
Technical University of Darmstadt
Aishwarya Kamath
New York University
Ivan Vulić
University of Cambridge
5 DeepMind
Sebastian Ruder
Kyunghyun Cho
New York University
CIFAR Associate Fellow
Iryna Gurevych
Technical University of Darmstadt
AdapterHub: A Framework for Adapting Transformers
Proceedings of the 2020 EMNLP (Systems Demonstrations)
Proceedings of the 2020 EMNLP (Systems Demonstrations), November 16-20, 2020
The current modus operandi in NLP involves downloading and fine-tuning pre-trained models consisting of hundreds of millions, or even billions of parameters. Storing and sharing such large trained models is expensive, slow, and time-consuming, which impedes progress towards more general and versatile NLP methods that learn from and for many tasks. Adapters-small learnt bottleneck layers inserted within each layer of a pretrained model-ameliorate this issue by avoiding full fine-tuning of the entire model. However, sharing and integrating adapter layers is not straightforward. We propose AdapterHub, a framework that allows dynamic "stichingin" of pre-trained adapters for different tasks and languages. The framework, built on top of the popular HuggingFace Transformers library, enables extremely easy and quick adaptations of state-of-the-art pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages. Downloading, sharing, and training adapters is as seamless as possible using minimal changes to the training scripts and a specialized infrastructure. Our framework enables scalable and easy access to sharing of task-specific models, particularly in lowresource scenarios. AdapterHub includes all recent adapter architectures and can be found at AdapterHub.ml.
Introduction
Recent advances in NLP leverage transformer-based language models (Vaswani et al., 2017), pretrained on large amounts of text data (Devlin et al., 2019; Conneau et al., 2020). These models are fine-tuned on a target task and achieve state-of-the-art (SotA) performance for most natural language understanding tasks. Their performance has been shown to scale with their size (Kaplan et al., 2020) and recent models have reached billions of parameters (Raffel et al., 2019; Brown et al., 2020). While fine-tuning large pre-trained models on target task data can be done fairly efficiently (Howard and Ruder, 2018), training them for multiple tasks and sharing trained models is often prohibitive. This precludes research on more modular architectures (Shazeer et al., 2017), task composition (Andreas et al., 2016), and injecting biases and external information (e.g., world or linguistic knowledge) into large models (Lauscher et al., 2019; Wang et al., 2020).
Adapters (Houlsby et al., 2019) have been introduced as an alternative lightweight fine-tuning strategy that achieves on-par performance to full fine-tuning on most tasks. They consist of a small set of additional newly initialized weights at every layer of the transformer. These weights are then trained during fine-tuning, while the pre-trained parameters of the large model are kept frozen/fixed. This enables efficient parameter sharing between tasks by training many task-specific and language-specific adapters for the same model, which can be exchanged and combined post-hoc. Adapters have recently achieved strong results in multi-task and cross-lingual transfer learning (Pfeiffer et al., 2020a,b).
However, reusing and sharing adapters is not straightforward. Adapters are rarely released individually; their architectures differ in subtle yet important ways, and they are model, task, and language dependent. To mitigate these issues and facilitate transfer learning with adapters in a range of settings, we propose AdapterHub, a framework that enables seamless training and sharing of adapters.
AdapterHub is built on top of the popular transformers framework by HuggingFace 1 (Wolf et al., 2020), which provides access to state-of-the-art pre-trained language models. We enhance transformers with adapter modules that can be combined with existing SotA models with minimal code edits. We additionally provide a website that enables quick and seamless upload, download, and sharing of pre-trained adapters. AdapterHub is available online at: AdapterHub.ml.
AdapterHub for the first time enables NLP researchers and practitioners to easily and efficiently share and obtain access to models that have been trained for particular tasks, domains, and languages. This opens up the possibility of building on and combining information from many more sources than was previously possible, and makes research such as intermediate task training (Pruksachatkun et al., 2020), composing information from many tasks (Pfeiffer et al., 2020a), and training models for very low-resource languages (Pfeiffer et al., 2020b) much more accessible.
Contributions. 1) We propose an easy-to-use and extensible adapter training and sharing framework for transformer-based models such as BERT, RoBERTa, and XLM(-R); 2) we incorporate it into the HuggingFace transformers framework, requiring as little as two additional lines of code to train adapters with existing scripts; 3) our framework automatically extracts the adapter weights, storing them separately to the pre-trained transformer model, requiring as little as 1Mb of storage; 4) we provide an open-source framework and website that allows the community to upload their adapter weights, making them easily accessible with only one additional line of code; 5) we incorporate adapter composition as well as adapter stacking out-of-the-box and pave the way for a wide range of other extensions in the future.
Adapters
While the predominant methodology for transfer learning is to fine-tune all weights of the pre-trained model, adapters have recently been introduced as an alternative approach, with applications in computer vision (Rebuffi et al., 2017) as well as the NLP domain (Houlsby et al., 2019;Bapna and Firat, 2019;Wang et al., 2020;Pfeiffer et al., 2020a,b).
Adapter Architecture
Adapters are neural modules with a small amount of additional newly introduced parameters Φ within a large pre-trained model with parameters Θ. The parameters Φ are learnt on a target task while keeping Θ fixed; Φ thus learn to encode task-specific representations in intermediate layers of the pretrained model. Current work predominantly focuses on training adapters for each task separately (Houlsby et al., 2019;Bapna and Firat, 2019;Pfeiffer et al., 2020a,b), which enables parallel training and subsequent combination of the weights.
In NLP, adapters have been mainly used within deep transformer-based architectures (Vaswani et al., 2017). At each transformer layer l, a set of adapter parameters Φ l is introduced. The placement and architecture of adapter parameters Φ within a pre-trained model is non-trivial and may impact their efficacy: Houlsby et al. (2019) experiment with different adapter architectures, empirically validating that a two-layer feed-forward neural network with a bottleneck works well. While this down-and up-projection has largely been agreed upon, the actual placement of adapters within each transformer block, as well as the introduction of new LayerNorms 2 (Ba et al., 2016) varies in the literature (Houlsby et al., 2019;Bapna and Firat, 2019;Stickland and Murray, 2019;Pfeiffer et al., 2020a). In order to support standard adapter architectures from the literature, as well as to enable easy extensibility, AdapterHub provides a configuration file where the architecture settings can be defined dynamically. We illustrate the different configuration possibilities in Figure 3, and describe them in more detail in §3.
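As an illustration of the bottleneck design described above, the following minimal PyTorch sketch implements a single adapter block with a down-projection, non-linearity, up-projection, and residual connection. The hidden size, bottleneck size, activation, and optional LayerNorm are configurable details that vary between the cited architectures; this is a generic sketch, not the AdapterHub implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """A single adapter block: project down, apply a non-linearity, project up,
    then add the result back to the input via a residual connection."""

    def __init__(self, hidden_size=768, bottleneck_size=48, use_layer_norm=False):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.activation = nn.ReLU()
        self.layer_norm = nn.LayerNorm(hidden_size) if use_layer_norm else None

    def forward(self, hidden_states):
        x = hidden_states
        if self.layer_norm is not None:
            x = self.layer_norm(x)          # optional new LayerNorm before the bottleneck
        x = self.up(self.activation(self.down(x)))
        return hidden_states + x            # residual connection around the adapter
```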
Why Adapters?
Adapters provide numerous benefits over fully fine-tuning a model, such as scalability, modularity, and composition. We now provide a few use-cases for adapters to illustrate their usefulness in practice.
Task-specific Layer-wise Representation Learning. Prior to the introduction of adapters, in order to achieve SotA performance on downstream tasks, the entire pre-trained transformer model needs to be fine-tuned. Adapters have been shown to work on-par with full fine-tuning, by adapting the representations at every layer. We present the results of fully fine-tuning the model compared to two different adapter architectures on the GLUE benchmark (Wang et al., 2018) in Table 1. The adapters of Houlsby et al. (2019, Figure 3c) and Pfeiffer et al. (2020a, Figure 3b) comprise two and one down- and up-projection within each transformer layer, respectively. The former adapter thus has more capacity at the cost of training and inference speed. We find that for all settings, there is no large difference in terms of performance between the model architectures, verifying that training adapters is a suitable and lightweight alternative to full fine-tuning in order to achieve SotA performance on downstream tasks.

Table 1: Mean development scores over 3 runs on GLUE (Wang et al., 2018) leveraging the BERT-Base pre-trained weights. We present the results with full fine-tuning (Full) and with the adapter architectures of Pfeiffer et al. (2020a, Pfeif., Figure 3b) and Houlsby et al. (2019, Houl., Figure 3c), both with bottleneck size 48. We show F1 for MRPC, Spearman rank correlation for STS-B, and accuracy for the rest. RTE is a combination of datasets (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007).

Task | Full | Pfeif. | Houl.
RTE (Wang et al., 2018) | 66.2 | 70.8 | 69.8
MRPC (Dolan and Brockett, 2005) | 90.5 | 89.7 | 91.5
STS-B (Cer et al., 2017) | 88.8 | 89.0 | 89.2
CoLA (Warstadt et al., 2019) | 59.5 | 58.9 | 59.1
SST-2 (Socher et al., 2013) | 92.6 | 92.2 | 92.8
QNLI (Rajpurkar et al., 2016) | 91.3 | 91.3 | 91.2
MNLI (Williams et al., 2018) | 84.1 | 84.1 | 84.1
QQP (Iyer et al., 2017) | 91.4 | 90.5 | 90.8
Small, Scalable, Shareable. Transformer-based models are very deep neural networks with millions or billions of weights and large storage requirements, e.g., around 2.2Gb of compressed storage space is needed for XLM-R Large (Conneau et al., 2020). Fully fine-tuning these models for each task separately requires storing a copy of the fine-tuned model for each task. This impedes both iterating and parallelizing training, particularly in storage-restricted environments. Adapters mitigate this problem. Depending on the model size and the adapter bottleneck size, a single task requires as little as 0.9Mb storage space. We present the storage requirements in Table 2. This highlights that > 99% of the parameters required for each target task are fixed during training and can be shared across all models for inference. For instance, for the popular Bert-Base model with a size of 440Mb, storing 2 fully fine-tuned models amounts to the same storage space required by 125 models with adapters, when using a bottleneck size of 48 and adapters of Pfeiffer et al. (2020a). Moreover, when performing inference on a mobile device, adapters can be leveraged to save a significant amount of storage space, while supporting a large number of target tasks. Additionally, due to the small size of the adapter modules-which in many cases do not exceed the file size of an image-new tasks can be added on-the-fly. Overall, these factors make adapters a much more computationally-and ecologically (Strubell et al., 2019)-viable option compared to updating entire models (Rücklé et al., 2020). Easy access to fine-tuned models may also improve reproducibility as researchers will be able to easily rerun and evaluate trained models of previous work.
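As a rough sanity check on these storage figures, the sketch below counts the parameters added by bottleneck adapters in a BERT-Base-sized model at a compression rate of 64 (i.e., a bottleneck of 768/64 = 12, following the CRate definition in Table 2). The numbers are approximate: LayerNorms, prediction heads, and serialization or compression overhead are ignored.

```python
def adapter_param_count(hidden_size=768, bottleneck=12, num_layers=12,
                        adapters_per_layer=1):
    """Rough number of parameters added by bottleneck adapters.

    Each adapter holds a down-projection (hidden x bottleneck + bias) and an
    up-projection (bottleneck x hidden + bias) per layer.
    """
    per_adapter = (hidden_size * bottleneck + bottleneck
                   + bottleneck * hidden_size + hidden_size)
    return per_adapter * adapters_per_layer * num_layers

params = adapter_param_count()
# Roughly 0.23M parameters, i.e. on the order of 0.9 MB at fp32 before compression.
print(f"{params:,} parameters, ~{params * 4 / 1e6:.2f} MB at fp32 (uncompressed)")
```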
Modularity of Representations. Adapters learn to encode information of a task within designated parameters. Due to the encapsulated placement of adapters, wherein the surrounding parameters are fixed, at each layer an adapter is forced to learn an output representation compatible with the subsequent layer of the transformer model. This setting allows for modularity of components such that adapters can be stacked on top of each other, or replaced dynamically. In a recent example, Pfeiffer et al. (2020b) successfully combine adapters that have been independently trained for specific tasks and languages. This demonstrates that adapters are modular and that output representations of different adapters are compatible. As NLP tasks become more complex and require knowledge that is not directly accessible in a single monolithic pre-trained model , adapters will provide NLP researchers and practitioners with many more sources of relevant information that can be easily combined in an efficient and modular way.
Non-Interfering Composition of Information.
Sharing information across tasks has a longstanding history in machine learning (Ruder, 2017). Multi-task learning (MTL), which shares a set of parameters between tasks, has arguably received the most attention. However, MTL suffers from problems such as catastrophic forgetting where information learned during earlier stages of training is "overwritten" (de Masson d'Autume et al., 2019), catastrophic interference where the performance of a set of tasks deteriorates when adding new tasks (Hashimoto et al., 2017), and intricate task weighting for tasks with different distributions (Sanh et al., 2019). The encapsulation of adapters forces them to learn output representations that are compatible across tasks. When training adapters on different downstream tasks, they store the respective information in their designated parameters. Multiple adapters can then be combined, e.g., with attention (Pfeiffer et al., 2020a). Because the respective adapters are trained separately, the necessity of sampling heuristics due to skewed data set sizes no longer arises. By separating knowledge extraction and composition, adapters mitigate the two most common pitfalls of multi-task learning, catastrophic forgetting and catastrophic interference.
Overcoming these problems together with the availability of readily available trained task-specific adapters enables researchers and practitioners to leverage information from specific tasks, domains, or languages that is often more relevant for a specific application-rather than more general pretrained counterparts. Recent work (Howard and Ruder, 2018;Phang et al., 2018;Pruksachatkun et al., 2020;Gururangan et al., 2020) has shown the benefits of such information, which was previously only available by fully fine-tuning a model on the data of interest prior to task-specific fine-tuning.
AdapterHub
AdapterHub consists of two core components: 1) A library built on top of HuggingFace transformers, and 2) a website that dynamically provides analysis and filtering of pre-trained adapters. AdapterHub provides tools for the entire life-cycle of adapters, illustrated in Figure 1 and discussed in what follows: x introducing new adapter weights Φ into pre-trained transformer weights Θ; y training adapter weights Φ on a downstream task (while keeping Θ frozen); z automatic extraction of the trained adapter weights Φ and open-sourcing the adapters; { automatic visualization of the adapters with configuration filters; | on-the-fly downloading/caching the pre-trained adapter weights Φ and stitching the adapter into the pre-trained transformer model Θ; } performing inference with the trained adapter transformer model.
x
Adapters in Transformer Layers
We minimize the required changes to existing HuggingFace training scripts, resulting in only two additional lines of code. In Figure 2 we present the required code to add adapter weights (line 3) and freeze all the transformer weights Θ (line 4). In this example, the model is prepared to train a task adapter on the binary version of the Stanford Sentiment Treebank (SST; Socher et al., 2013) using the adapter architecture of Pfeiffer et al. (2020a). Similarly, language adapters can be added by setting the type parameter to AdapterType.text_language, and other adapter architectures can be chosen accordingly. While we provide ready-made configuration files for well-known architectures in the current literature, adapters are dynamically configurable, which makes it possible to define a multitude of architectures. We illustrate the configurable components as dashed lines and objects in Figure 3. The configurable components are placements of new weights, residual connections, as well as placements of LayerNorm layers (Ba et al., 2016).
The code changes within the HuggingFace transformers framework are realized through MixIns, which are inherited by the respective transformer classes. This minimizes the amount of code changes of our proposed extensions and encapsulates adapters as designated classes. It further increases readability as adapters are clearly separated from the main transformers code base, which makes it easy to keep both repositories in sync as well as to extend AdapterHub.
1 from transformers import AutoModelForSequenceClassification, AdapterType
2 model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
3 model.add_adapter("sst-2", AdapterType.text_task, config="pfeiffer")
4 model.train_adapter(["sst-2"])
5 # Train model ...
6 model.save_adapter("adapters/text-task/sst-2/", "sst-2")
7 # Push link to zip file to AdapterHub ...
y Training Adapters
Adapters are trained in the same manner as full fine-tuning of the model. The information is passed through the different layers of the transformer where, at every layer, the representations are additionally passed through the adapter parameters on top of the pre-trained weights. However, in contrast to full fine-tuning, the pre-trained weights Θ are fixed and only the adapter weights Φ and the prediction head are trained. Because Θ is fixed, the adapter weights Φ are encapsulated within the transformer weights, forcing them to learn compatible representations across tasks.
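Conceptually, keeping Θ fixed while training Φ amounts to disabling gradients for everything except the adapter and prediction-head parameters. The generic PyTorch sketch below illustrates this idea; in AdapterHub the call model.train_adapter(...) from Figure 2 performs this selection internally, and the simple name-based filter used here is purely an assumption for illustration.

```python
def freeze_all_but_adapters(model, trainable_keywords=("adapter", "head")):
    """Freeze pre-trained weights; leave adapter and prediction-head weights trainable.

    `trainable_keywords` is a naive, illustrative name-based filter; it is not
    how AdapterHub identifies adapter parameters internally.
    """
    for name, param in model.named_parameters():
        param.requires_grad = any(keyword in name for keyword in trainable_keywords)
```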
z Extracting and Open-Sourcing Adapters
When training adapters instead of full fine-tuning, it is no longer necessary to store checkpoints of the entire model. Instead, only the adapter weights Φ, as well as the prediction head, need to be stored, as the base model's weights Θ remain the same. This is integrated automatically as soon as adapters are trained, which significantly reduces the required storage space during training and enables storing a large number of checkpoints simultaneously.
When adapter training has completed, the parameter file together with the corresponding adapter configuration file are zipped and uploaded to a public server. The user then enters the metadata (e.g., URL to weights, user info, description of training procedure, data set used, adapter architecture, GitHub handle, Twitter handle) into a designated YAML file and issues a pull request to the AdapterHub GitHub repository. When all automatic checks pass, the AdapterHub.ml website is automatically regenerated with the newly available adapter, which makes it possible for users to immediately find and use these new weights described by the metadata. We hope that the ease of sharing pre-trained adapters will further facilitate and speed up new developments in transfer learning in NLP.
Figure 4: | After the correct adapter has been identified by the user on the explore page of AdapterHub.ml, they can load and stitch the pre-trained adapter weights Φ into the transformer Θ (line 3).
{ Finding Pre-Trained Adapters
The website AdapterHub.ml provides a dynamic overview of the currently available pre-trained adapters. Due to the large number of tasks in many different languages as well as different transformer models, we provide an intuitively understandable hierarchical structure, as well as search options. This makes it easy for users to find adapters that are suitable for their use-case. Namely, AdapterHub's explore page is structured into three hierarchical levels. At the first level, adapters can be viewed by task or language. The second level allows for a more fine-grained distinction separating adapters into data sets of higher-level NLP tasks following a categorization similar to paperswithcode.com. For languages, the second level distinguishes the adapters by the language they were trained on. The third level separates adapters into individual datasets or domains such as SST for sentiment analysis or Wikipedia for Swahili.
When a specific dataset has been selected, the user can see the available pre-trained adapters for this setting. Adapters depend on the transformer model they were trained on and are otherwise not compatible. 3 The user selects the model architecture and certain hyper-parameters and is shown the compatible adapters. When selecting one of the adapters, the user is provided with additional information about the adapter, which is available in the metadata (see z again for more information).
| Stitching-In Pre-Trained Adapters
Pre-trained adapters can be stitched into the large transformer model as easily as adding randomly initialized weights; this requires a single line of code, see Figure 4, line 3. When selecting an adapter on the website (see { again) the user is provided with sample code, which corresponds to the configuration necessary to include the specific weights.4
3 We plan to look into mapping adapters between different models as future work.
4 When selecting an adapter based on a name, we allow for string matching as long as there is no ambiguity.
} Inference with Adapters
Inference with a pre-trained model that relies on adapters is in line with the standard inference practice based on full fine-tuning. Similar to training adapters, during inference the active adapter name is passed into the model together with the text tokens. At every transformer layer the information is passed through the transformer layers and the corresponding adapter parameters.
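A minimal end-to-end sketch of such an inference call is shown below. Only load_adapter (Figure 4) is taken directly from the framework; the adapter_names keyword, the tokenizer setup, and the argmax-based prediction are illustrative assumptions rather than the definitive AdapterHub interface.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Stitch in a pre-trained task adapter, as in Figure 4.
model.load_adapter("sst-2", config="pfeiffer")
model.eval()

inputs = tokenizer("AdapterHub makes adapters easy to share.", return_tensors="pt")
with torch.no_grad():
    # NOTE: passing the active adapter name via `adapter_names` is an assumption
    # made for illustration; consult the AdapterHub documentation for the exact
    # inference interface of the installed version.
    outputs = model(**inputs, adapter_names=["sst-2"])

# Handle both tuple-style and ModelOutput-style return values.
logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
prediction = logits.argmax(dim=-1)
```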
The adapters can be used for inference in the designated task they were trained on. To this end, we provide an option to upload the prediction heads together with the adapter weights. In addition, they can be used for further research such as transferring the adapter to a new task, stacking multiple adapters, fusing the information from diverse adapters, or enriching AdapterHub with adapters for other modalities, among many other possible modes of usage and future directions.
Conclusion and Future Work
We have introduced AdapterHub, a novel easy-to-use framework that enables simple and effective transfer learning via training and community sharing of adapters. Adapters are small neural modules that can be stitched into large pre-trained transformer models to facilitate, simplify, and speed up transfer learning across a range of languages and tasks. AdapterHub is built on top of the commonly used HuggingFace transformers, and it requires only adding as little as two lines of code to existing training scripts. Using adapters in AdapterHub has numerous benefits such as improved reproducibility, much better efficiency compared to full fine-tuning, easy extensibility to new models and new tasks, and easy access to trained models.
With AdapterHub, we hope to provide a suitable and stable framework for the community to train, search, and use adapters. We plan to continuously improve the framework, extend the composition and modularity possibilities, and support other transformer models, even the ones yet to come.
Acknowledgments
Jonas Pfeiffer is supported by the LOEWE initiative (Hesse, Germany) within the emergenCITY center. Andreas Rücklé is supported by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE, and by the German Research Foundation under grant EC 503/1-1 and GU 798/21-1. Aishwarya Kamath is supported in part by a DeepMind PhD Fellowship. The work of Ivan Vulić is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). Kyunghyun Cho is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure).
We would like to thank Isabel Pfeiffer for the illustrations.
Figure 1: The AdapterHub process graph. Adapters Φ are introduced into a pre-trained transformer Θ (step x) and are trained (y). They can then be extracted and open-sourced (z) and visualized ({). Pre-trained adapters are downloaded on-the-fly (|) and stitched into a model that is used for inference (}).
Figure 2: x Adding new adapter weights Φ to pre-trained RoBERTa-Base weights Θ (line 3), and freezing Θ (line 4). z Extracting and storing the trained adapter weights Φ (line 7).
Figure 3: Dynamic customization possibilities where dashed lines in (a) show the current configuration options. These options include the placements of new weights Φ (including down and up projections as well as new LayerNorms), residual connections, bottleneck sizes, as well as activation functions. All new weights Φ are illustrated within the pink boxes; everything outside belongs to the pre-trained weights Θ. In addition, we provide pre-set configuration files for architectures in the literature. The resulting configurations for the architectures proposed by Pfeiffer et al. (2020a) and Houlsby et al. (2019) are illustrated in (b) and (c), respectively. We also provide a configuration file for the architecture proposed by Bapna and Firat (2019), not shown here.
Table 2: Number of additional parameters and compressed storage space of the adapter of Pfeiffer et al. (2020a) in (Ro)BERT(a)-Base and Large transformer architectures. The adapter of Houlsby et al. (2019) requires roughly twice as much space. CRate refers to the adapter's compression rate: e.g., a rate of 64 means that the adapter's bottleneck layer is 64 times smaller than the underlying model's hidden layer size.
https://github.com/huggingface/transformers
Layer normalization learns to normalize the inputs across the features. This is usually done by introducing a new set of features for mean and variance.
1 from transformers import AutoModelForSequenceClassification, AdapterType
2 model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
3 model.load_adapter("sst-2", config="pfeiffer")
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural networks for question answering. In NAACL HLT 2016, pages 1545-1554.
Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint.
Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In EMNLP-IJCNLP 2019, pages 1538-1548.
Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the PASCAL@ACL 2006.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. arXiv preprint.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of SemEval-2017.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of ACL 2020, pages 8440-8451.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop, MLCW 2005, pages 177-190.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing, IWP@IJCNLP 2005.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the PASCAL@ACL 2007.
Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of ACL 2020, pages 8342-8360.
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of EMNLP 2017, pages 1923-1933.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of ICML 2019, pages 2790-2799.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of ACL 2018, pages 328-339.
Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. First Quora dataset release: Question pairs [online].
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint.
Anne Lauscher, Ivan Vulić, Edoardo Maria Ponti, Anna Korhonen, and Goran Glavaš. 2019. Specializing unsupervised pretraining models for word-level semantic similarity. arXiv preprint.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint.
Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 13122-13131.
Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP@ACL 2019), pages 7-14.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterFusion: Non-destructive task composition for transfer learning. arXiv preprint.
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An adapter-based framework for multi-task cross-lingual transfer. In Proceedings of EMNLP 2020.
Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint.
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of ACL 2020, pages 5231-5247.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP 2016, pages 2383-2392.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 506-516.
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. AdapterDrop: On the efficiency of adapters in transformers. arXiv preprint.
Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint.
Sebastian Ruder, Matthew E. Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Transfer learning in natural language processing. In Proceedings of NAACL-HLT 2019: Tutorials.
Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning embeddings from semantic tasks. In Proceedings of AAAI 2019, pages 6949-6956.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In 5th International Conference on Learning Representations (ICLR 2017).
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631-1642.
Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning. In Proceedings of ICML 2019, pages 5986-5995.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of ACL 2019, pages 3645-3650.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 5998-6008.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of BlackboxNLP@EMNLP 2018, pages 353-355.
. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprintRuize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xu- anjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020. K-adapter: Infusing knowl- edge into pre-trained models with adapters. arXiv preprint.
Neural network acceptability judgments. Alex Warstadt, Amanpreet Singh, Samuel R , Transactions of the Association for Computational Linguistics. 7BowmanAlex Warstadt, Amanpreet Singh, and Samuel R. Bow- man. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel R , 10.18653/v1/n18-1101Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana, USA1Bowman. Long PapersAdina Williams, Nikita Nangia, and Samuel R. Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112-1122.
Anthony Moi an-dArt Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2020. Hugging-Face's Transformers: State-of-the-art Natural Language Processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingEMNLP 2020, Virtual Conference, 2020 Proceedings of EMNLP: Systems DemonstrationsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi an- dArt Pierric Cistac, Tim Rault, Rémi Louf, Mor- gan Funtowicz, and Jamie Brew. 2020. Hugging- Face's Transformers: State-of-the-art Natural Lan- guage Processing. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing, EMNLP 2020, Virtual Conference, 2020 Proceedings of EMNLP: Systems Demonstrations.
| [
"https://github.com/huggingface/transformers"
] |
[
"Structured Reordering for Modeling Latent Alignments in Sequence Transduction",
"Structured Reordering for Modeling Latent Alignments in Sequence Transduction"
] | [
"Bailin Wang bailin.wang@ed.ac.uk \nUniversity of Edinburgh\n\n",
"Mirella Lapata \nUniversity of Edinburgh\n\n",
"Ivan Titov ititov@inf.ed.ac.uk \nUniversity of Edinburgh\n\n\nUniversity of Amsterdam\n\n"
] | [
"University of Edinburgh\n",
"University of Edinburgh\n",
"University of Edinburgh\n",
"University of Amsterdam\n"
] | [] | Despite success in many domains, neural models struggle in settings where train and test examples are drawn from different distributions. In particular, in contrast to humans, conventional sequence-to-sequence (seq2seq) models fail to generalize systematically, i.e., interpret sentences representing novel combinations of concepts (e.g., text segments) seen in training. Traditional grammar formalisms excel in such settings by implicitly encoding alignments between input and output segments, but are hard to scale and maintain. Instead of engineering a grammar, we directly model segment-to-segment alignments as discrete structured latent variables within a neural seq2seq model. To efficiently explore the large space of alignments, we introduce a reorder-first align-later framework whose central component is a neural reordering module producing separable permutations. We present an efficient dynamic programming algorithm performing exact marginal and MAP inference of separable permutations, and, thus, enabling end-to-end differentiable training of our model. The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks (i.e., semantic parsing and machine translation). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). arXiv:2106.03257v3 [cs.CL] 26 Oct 2021program. Current seq2seq models fail in this systematic generalization setting[13,27]. In contrast, traditional grammar formalisms decompose correspondences between utterances and programs into compositional mappings of substructures [50], enabling grammar-based parsers to recombine rules acquired during training, as needed for systematic generalization. Grammars have proven essential in statistical semantic parsing in the pre-neural era[56,62], and have gained renewed interest now as a means of achieving systematic generalization[20,45]. However, grammars are hard to create and maintain (e.g., requiring grammar engineering or grammar induction stages) and do not scale well to NLP problems beyond semantic parsing (e.g., machine translation). In this work, we argue that the key property of grammar-based models, giving rise to their improved ood performance, is that a grammar implicitly encodes alignments between input and output segments. For example, inFigure 1, the expected segment-level alignments are 'the length → len' and 'the longest river → longest(river(all))'. The encoded alignments allow for explicit decomposition of input and output into segments, and consistent mapping between input and output segments. In contrast, decision rules employed by conventional seq2seq models do not exhibit such properties. For example, recent work[16]shows that primitive units such as words are usually inconsistently mapped across different contexts, preventing these models from generalizing primitive units to new contexts. Instead of developing a full-fledged grammar-based method, we directly model segment-level alignments as structured latent variables. The resulting alignment-driven seq2seq model remains end-to-end differentiable, and, in principle, applicable to any sequence transduction problem.1Our code and data are available at https://github.com/berlino/tensor2struct-public.2 | null | [
"https://arxiv.org/pdf/2106.03257v3.pdf"
] | 235,358,760 | 2106.03257 | c0e059c46aea358872b4760aed53c4da3beaaeee |
Structured Reordering for Modeling Latent Alignments in Sequence Transduction
Bailin Wang bailin.wang@ed.ac.uk
University of Edinburgh
Mirella Lapata
University of Edinburgh
Ivan Titov ititov@inf.ed.ac.uk
University of Edinburgh
University of Amsterdam
Structured Reordering for Modeling Latent Alignments in Sequence Transduction
Despite success in many domains, neural models struggle in settings where train and test examples are drawn from different distributions. In particular, in contrast to humans, conventional sequence-to-sequence (seq2seq) models fail to generalize systematically, i.e., interpret sentences representing novel combinations of concepts (e.g., text segments) seen in training. Traditional grammar formalisms excel in such settings by implicitly encoding alignments between input and output segments, but are hard to scale and maintain. Instead of engineering a grammar, we directly model segment-to-segment alignments as discrete structured latent variables within a neural seq2seq model. To efficiently explore the large space of alignments, we introduce a reorder-first align-later framework whose central component is a neural reordering module producing separable permutations. We present an efficient dynamic programming algorithm performing exact marginal and MAP inference of separable permutations, and, thus, enabling end-to-end differentiable training of our model. The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks (i.e., semantic parsing and machine translation). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). arXiv:2106.03257v3 [cs.CL] 26 Oct 2021program. Current seq2seq models fail in this systematic generalization setting[13,27]. In contrast, traditional grammar formalisms decompose correspondences between utterances and programs into compositional mappings of substructures [50], enabling grammar-based parsers to recombine rules acquired during training, as needed for systematic generalization. Grammars have proven essential in statistical semantic parsing in the pre-neural era[56,62], and have gained renewed interest now as a means of achieving systematic generalization[20,45]. However, grammars are hard to create and maintain (e.g., requiring grammar engineering or grammar induction stages) and do not scale well to NLP problems beyond semantic parsing (e.g., machine translation). In this work, we argue that the key property of grammar-based models, giving rise to their improved ood performance, is that a grammar implicitly encodes alignments between input and output segments. For example, inFigure 1, the expected segment-level alignments are 'the length → len' and 'the longest river → longest(river(all))'. The encoded alignments allow for explicit decomposition of input and output into segments, and consistent mapping between input and output segments. In contrast, decision rules employed by conventional seq2seq models do not exhibit such properties. For example, recent work[16]shows that primitive units such as words are usually inconsistently mapped across different contexts, preventing these models from generalizing primitive units to new contexts. Instead of developing a full-fledged grammar-based method, we directly model segment-level alignments as structured latent variables. The resulting alignment-driven seq2seq model remains end-to-end differentiable, and, in principle, applicable to any sequence transduction problem.1Our code and data are available at https://github.com/berlino/tensor2struct-public.2
Introduction
[Figure 1: A semantic parser needs to generalize to test examples which contain segments from multiple training examples. Training examples: "what is the length of the colorado river ?" → len(river(riverid('colorado'))); "what is the longest river ?" → longest(river(all)). Test example: "what is the length of the longest river ?" → len(longest(river(all))).]
Recent advances in deep learning have led to major progress in many domains, with neural models sometimes achieving or even surpassing human performance [53]. However, these methods often struggle in out-of-distribution (ood) settings where train and test examples are drawn from different distributions. In particular, unlike humans, conventional sequence-to-sequence (seq2seq) models, widely used in natural language processing (NLP), fail to generalize systematically [4,30,31], i.e., correctly interpret sentences representing novel combinations of concepts seen in training. Our goal is to provide a mechanism for encouraging systematic generalization in seq2seq models.
To get an intuition about our method, consider the semantic parsing task shown in Figure 1. A learner needs to map a natural language (NL) utterance to a program which can then be executed on a knowledge base. To process the test utterance, the learner needs to first decompose it into two segments previously observed in training (shown in green and blue), and then combine their corresponding program fragments to create a new program.

Modeling segment-level alignments requires simultaneously inducing a segmentation of input and output sequences and discovering correspondences between the input and output segments. While segment-level alignments have been previously incorporated in neural models [54,60], to maintain tractability, these approaches support only monotonic alignments. The monotonicity assumption is reasonable for certain tasks (e.g., summarization), but it is generally overly restrictive (e.g., consider semantic parsing and machine translation). To relax this assumption, we complement monotonic alignments with an extra reordering step. That is, we first permute the source sequence so that segments within the reordered sequence can be aligned monotonically to segments of the target sequence. Coupling latent permutations with monotonic alignments dramatically increases the space of admissible segment alignments.
The space of general permutations is exceedingly large, so, to allow for efficient training, we restrict ourselves to separable permutations [5]. We model separable permutations as hierarchical reordering of segments using permutation trees. This hierarchical way of modeling permutations reflects the hierarchical nature of language and hence is arguably more appropriate than 'flat' alternatives [36]. Interestingly, recent studies [49,51] demonstrated that separable permutations are sufficient for capturing the variability of permutations in linguistic constructions across natural languages, providing further motivation for our modeling choice.
Simply marginalizing over all possible separable permutations remains intractable. Instead, inspired by recent work on modeling latent discrete structures [10,15], we introduce a continuous relaxation of the reordering problem. The key ingredients of the relaxation are two inference strategies: marginal inference, which yields the expected permutation under a distribution; MAP inference, which returns the most probable permutation. In this work, we propose efficient dynamic programming algorithms to perform exact marginal and MAP inference with separable permutations, resulting in effective differentiable neural modules producing relaxed separable permutations. By plugging these modules into an existing module supporting monotonic segment alignments [60], we obtain end-to-end differentiable seq2seq models, supporting non-monotonic segment-level alignments.
In summary, our contributions are:
• A general seq2seq model for NLP tasks that accounts for latent non-monotonic segment-level alignments.
• Novel and efficient algorithms for exact marginal and MAP inference with separable permutations, allowing for end-to-end training using a continuous relaxation.¹
• Experiments on synthetic problems and NLP tasks (semantic parsing and machine translation) showing that modeling segment alignments is beneficial for systematic generalization.
Background and Related Work
Systematic Generalization
Human learners exhibit systematic generalization, which refers to their ability to generalize from training data to novel situations. This is possible due to the compositionality of natural languages: to a large degree, sentences are built using an inventory of primitive concepts and finite structure-building mechanisms [9]. For example, if one understands 'John loves the girl', they should also understand 'The girl loves John' [14]. This is done by 'knowing' the meaning of individual words and the grammatical principle of subject-verb-object composition. As pointed out by Goodwin et al. [16], systematicity entails that primitive units have consistent meaning across different contexts. In contrast, in seq2seq models, the representations of a word are highly influenced by context (see experiments in Lake and Baroni [30]). This is also consistent with the observation that seq2seq models tend to memorize large chunks rather than discover underlying compositional principles [22]. The memorization of large sequences lets the model fit the training distribution but harms out-of-distribution generalization.
Discrete Alignments as Conditional Computation Graphs
Latent discrete structures enable the incorporation of inductive biases into neural models and have been beneficial for a range of problems. For example, input-dependent module layouts [2] or graphs [40] have been explored in visual question answering. There is also a large body of work on inducing task-specific discrete representations (usually trees) for NL sentences [10,18,39,59]. The trees are induced simultaneously with learning a model performing a computation relying on the tree (typically a recursive neural network [47]), while optimizing a task-specific loss. Given the role the structures play in these approaches -i.e., defining the computation flow -we can think of the structures as conditional computation graphs.
In this work, we induce discrete alignments as conditional computation graphs to guide seq2seq models. Given a source sequence x with n tokens and a target sequence y with m tokens, we optimize the following objective:
$$X = \text{Encode}_\theta(x), \qquad \mathcal{L}_{\theta,\phi}(x, y) = -\log \mathbb{E}_{p_\phi(M \mid X)}\, p_\theta(y \mid X, M) \tag{1}$$

where Encode is a function that embeds x into $X \in \mathbb{R}^{n \times h}$ with h being the hidden size, and $M \in \{0, 1\}^{n \times m}$ is the alignment matrix between input and output tokens. In this framework, alignments M are separately predicted by $p_\phi(M \mid X)$ to guide the computation $p_\theta(y \mid X, M)$ that maps x to y.
The parameters of both model components (φ and θ) are disjoint.
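As a schematic of this setup (a minimal sketch with hypothetical module names and interfaces, not the paper's implementation), the two components and their disjoint parameters can be wired as follows:

import torch.nn as nn

class AlignmentDrivenSeq2Seq(nn.Module):
    def __init__(self, encoder, aligner, decoder):
        super().__init__()
        self.encoder = encoder    # produces X of shape (n, h)
        self.aligner = aligner    # p_phi(M | X): parameters phi
        self.decoder = decoder    # p_theta(y | X, M): parameters theta

    def loss(self, x, y):
        X = self.encoder(x)
        # A relaxed (n x m) alignment matrix stands in for the expectation over M.
        M = self.aligner(X, num_target=len(y))
        # Single-sample / relaxed estimate of -log E_{p_phi(M|X)} p_theta(y|X, M).
        return -self.decoder.log_prob(y, X, M)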
Relation to Attention Standard encoder-decoder models [3] rely on continuous attention weights, i.e., $M[:, i] \in \Delta^{n-1}$ (the probability simplex) for each target token $1 \le i \le m$. Discrete versions of attention (aka hard attention) have been studied in previous work [12,58] and show superior performance in certain tasks. In the discrete case, M is a sequence of m categorical random variables. Though discrete, hard attention only considers word-level alignments, i.e., it assumes that each target token is aligned with a single source token. This is a limiting assumption; for example, in traditional statistical machine translation, word-based models (e.g., [7]) are known to achieve dramatically weaker results than phrase-based models (e.g., [29]). In this work, we aim to bring the power of phrase-level (aka segment-level) alignments to neural seq2seq models.²
Latent Segment Alignments via Separable Permutations
Our method integrates a layer of segment-level alignments with a seq2seq model. The architecture of our model is shown in Figure 2. Central to this model is the alignment network, which decomposes the alignment problem into two stages: (i) input reordering and (ii) monotonic alignment between the reordered sequence and the output.

[Figure 2: The architecture of our seq2seq model for semantic parsing. After encoding the input utterance, our model permutes the input representations using our reordering module. Then, the reordered encodings will be used for decoding the output program in a monotonic manner.]

Conceptually, we decompose the alignment matrix from Eq 1 into two parts:

$$M = M_{pe} M_{mo} \tag{2}$$

where $M_{pe} \in \mathbb{R}^{n \times n}$ is a permutation matrix, and $M_{mo} \in \mathbb{R}^{n \times m}$ represents monotonic alignments. With this conceptual decomposition, we can rewrite the objective in Eq 1 as follows:
$$\mathcal{L}_{\theta,\phi,\varphi}(x, y) = -\log \mathbb{E}_{p_\phi(M_{pe} \mid x)}\, \mathbb{E}_{p_\varphi(M_{mo} \mid M_{pe} X)}\, p_\theta(y \mid M_{pe} X, M_{mo}) \tag{3}$$

where $M_{pe} X$ denotes the reordered representation. With a slight abuse of notation, $\phi$ now denotes the parameters of the model generating permutations, and $\varphi$ denotes the parameters used to produce monotonic alignments. Given the permutation matrix $M_{pe}$, the second expectation $\mathbb{E}_{p_\varphi(M_{mo} \mid M_{pe} X)}\, p_\theta(y \mid M_{pe} X, M_{mo})$, which we denote as $p_{\theta,\varphi}(y \mid M_{pe} X)$, can be handled by existing methods, such as SSNT [60] and SWAN [54]. In the rest of the paper, we choose SSNT as the module for handling monotonic alignment.³ We can rewrite the objective we optimize in the following compact form:

$$\mathcal{L}_{\theta,\phi,\varphi}(x, y) = -\log \mathbb{E}_{p_\phi(M_{pe} \mid x)}\, p_{\theta,\varphi}(y \mid M_{pe} X) \tag{4}$$
Structured Latent Reordering by Binary Permutation Trees
Inspired by Steedman [51], we restrict word reorderings to separable permutations. Formally, separable permutations are defined in terms of binary permutation trees (aka separating trees [5]), i.e., if a permutation can be represented by a permutation tree, it is separable. A binary permutation tree over a permutation of a sequence 1 . . . n is a binary tree in which each node represents the ordering of a segment i . . . j; the children exhaustively split their parent into sub-segments i . . . k and k + 1 . . . j. Each node has a binary label that decides whether the segment of the left child precedes that of the right child.
[Figure 3: A binary permutation tree over the sentence 'the girl saw the hedgehog', with anchored non-terminals $X_1^2, X_2^3, X_3^4, X_4^5, X_5^6, X_1^3, X_4^6, X_1^4, X_1^6$. The tree represents the reordered sentence 'saw the girl the hedgehog'; inverted and straight nodes denote whether the segments of the two children are swapped or kept in order, respectively.]
Bracketing transduction grammar [BTG,57], which is proposed in the context of machine translation, is the corresponding context-free grammar to represent binary permutation trees. Specifically, BTG has one non-terminal (X) and three anchored rules:
$$S_{i,j,k}: X_i^k \xrightarrow{\text{Straight}} X_i^j\, X_j^k \qquad\quad I_{i,j,k}: X_i^k \xrightarrow{\text{Inverted}} X_i^j\, X_j^k \qquad\quad T_i: X_i^{i+1} \rightarrow x_i$$
where $X_i^k$ is the anchored non-terminal covering the segment from i to k (excluding k). The first two rules decide whether to keep or invert two segments when constructing a larger segment; the last rule states that every word $x_i$ in an utterance is associated with a non-terminal $X_i^{i+1}$. An example is shown in Figure 3, and a small sketch of how such a tree is interpreted is given below. Through this example, we note that the first two rules only signify which segments to invert; an additional process of interpreting the tree (i.e., performing the actual actions of keeping or inverting segments) is needed to obtain the permuted sequence. This hierarchical approach to generating separable permutations reflects the compositional nature of language, and, thus, appears more appealing than using 'flat' alternatives [11,17,36]. Moreover, with BTGs, we can incorporate segment-level features to model separable permutations, and design tractable algorithms for learning and inference.
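To make the interpretation step concrete, here is a minimal sketch in Python; the nested-tuple encoding of trees is our own illustrative convention, not the paper's data structure.

def interpret(tree):
    # A leaf is a source position; an internal node is (label, left, right)
    # with label 'S' (Straight: keep order) or 'I' (Inverted: swap children).
    if isinstance(tree, int):
        return [tree]
    label, left, right = tree
    l, r = interpret(left), interpret(right)
    return l + r if label == 'S' else r + l

# Figure 3: "the girl saw the hedgehog" -> "saw the girl the hedgehog".
tree = ('S', ('I', ('S', 1, 2), 3), ('S', 4, 5))
print(interpret(tree))  # [3, 1, 2, 4, 5]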
By assigning a score to each anchored rule using segment-level features, we obtain a distribution over all possible derivations, and use it to compute the objective in Eq 4.
$$p_\phi(D \mid x) = \frac{\prod_{R \in D} f_\phi(R)}{Z(x, \phi)}, \qquad \mathcal{L}_{\theta,\phi,\varphi}(x, y) = -\log \mathbb{E}_{p_\phi(D \mid x)}\, p_{\theta,\varphi}(y \mid M^D_{pe} X) \tag{5}$$

where $f_\phi$ is a score function assigning a (non-negative) weight to an anchored rule $R \in \{S, I, T\}$, $Z(x, \phi) = \sum_D \prod_{R \in D} f_\phi(R)$ is the partition function, which can be computed using the inside algorithm, and $M^D_{pe}$ is the permutation matrix corresponding to the derivation D. BTG, along with the weight assigned to each rule, is a weighted context-free grammar (WCFG). In this WCFG, the weight is only normalized at the derivation level. As we will see in Algorithm 1, we are interested in normalizing the weights of production rules and converting the WCFG to an equivalent PCFG following Smith and Johnson [46], so that the probability of a derivation can be computed as follows:

$$p_\phi(D \mid x) = \prod_{R \in D} G_\phi(R) \tag{6}$$
where G φ (R) is the weight of the production rule R under the transformed PCFG. The details of the conversion are provided in the Appendix.
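As a rough illustration of the idea behind this conversion (a sketch following the appendix description of Algorithm 2, not the authors' exact code; the dictionary layout and the assumption that terminal rules have weight 1 are ours):

from collections import defaultdict

def wcfg_to_pcfg(f, n):
    # f[(i, j, k, label)]: non-negative weight of an anchored rule, label in {'S', 'I'}.
    # beta[(i, k)]: total weight of all derivations rooted at the segment [i, k).
    beta = {(i, i + 1): 1.0 for i in range(1, n + 1)}   # assumes terminal rules have weight 1
    G = defaultdict(float)
    for w in range(2, n + 1):                           # segment length
        for i in range(1, n - w + 2):                   # start point
            k = i + w                                   # end point (exclusive)
            beta[(i, k)] = sum(f[(i, j, k, lab)] * beta[(i, j)] * beta[(j, k)]
                               for j in range(i + 1, k) for lab in ('S', 'I'))
            for j in range(i + 1, k):                   # normalize rule weights
                for lab in ('S', 'I'):
                    G[(i, j, k, lab)] = (f[(i, j, k, lab)] * beta[(i, j)] * beta[(j, k)]
                                         / beta[(i, k)])
    return G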
The challenge with optimizing the objective in Eq 5 is that the search space of possible derivations is exponential, making the estimation of the gradients with respect to parameters of the reordering component (φ) non-trivial. We now present two differentiable surrogates we use.
Soft Reordering: Computing Marginal Permutations
Algorithm 1 Dynamic programming for computing marginals and differentiable sampling of a permutation matrix w.r.t. a parameterized grammar
Input: G_φ(R): probability of an anchored rule R; sampling: whether to perform sampling
1: for i := 1 to n do
2:   E_i^{i+1} := 1   ▷ single words are associated with identity permutation matrices
3: end for
4: for w := 2 to n do   ▷ segment length
5:   for i := 1 to n − w + 1 do   ▷ start point
6:     k := i + w   ▷ end point
7:     if sampling then
8:       Ĝ_φ(R) = s_arg max(G_φ(R))
9:     else   ▷ computing marginals
10:      Ĝ_φ(R) = G_φ(R)
11:    end if
12:    for j := i + 1 to k − 1 do
13:      E_i^k += Ĝ_φ(S_{i,j,k}) (E_i^j ⊕ E_j^k)
14:      E_i^k += Ĝ_φ(I_{i,j,k}) (E_i^j ⊖ E_j^k)
15:    end for
16:  end for
17: end for
18: return E_1^{n+1}
The first strategy is to use the deterministic expectation of permutations to softly reorder a sentence, analogous to the way standard attention approximates categorical random variables. Specifically, we use the following approximation:
$$\overline{M}_{pe} = \mathbb{E}_{p_\phi(D \mid x)}\, M^D_{pe}, \qquad \mathcal{L}_{\theta,\phi,\varphi}(x, y) \approx -\log p_{\theta,\varphi}(y \mid \overline{M}_{pe} X)$$

where $\overline{M}_{pe}$ is the marginal permutation matrix, and it can be treated as structured attention [28]. Methods for performing marginal inference for anchored rules, i.e., computing the marginal distribution of production rules, are well-known in NLP [35]. However, we are interested in the marginal permutation matrix (or equivalently the expectation of the matrix components) as the matrix is the data structure that is ultimately used in our model. As a key contribution of this work, we propose an efficient algorithm to exactly compute the marginal permutation matrix using dynamic programming.
In order to compute the marginal permutation matrix we need to marginalize over the exponentially many derivations of each permutation. We propose to map a derivation of BTG into its corresponding permutation matrix in a recursive manner. Specifically, we first associate word i with an identity permutation matrix $M_i^{i+1} = 1$; then we associate Straight and Inverted rules with direct sums ⊕ and skew sums ⊖ of permutation matrices, respectively:

$$A \oplus B = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}, \qquad A \ominus B = \begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix}$$
For example, the permutation matrix of the derivation tree shown in Figure 3 can be obtained by:
$$M_1^6 = \big((M_1^2 \oplus M_2^3) \ominus M_3^4\big) \oplus (M_4^5 \oplus M_5^6) \tag{7}$$
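The two block operations, and Eq 7, can be checked with a small numpy sketch (helper names are ours; under this sketch's indexing, rows are original positions and columns are reordered positions, so the reordered encodings would be M.T @ X, which may be transposed relative to the paper's convention):

import numpy as np

def direct_sum(A, B):   # A ⊕ B: keep the order of the two blocks
    return np.block([[A, np.zeros((A.shape[0], B.shape[1]))],
                     [np.zeros((B.shape[0], A.shape[1])), B]])

def skew_sum(A, B):     # A ⊖ B: swap the order of the two blocks
    return np.block([[np.zeros((A.shape[0], B.shape[1])), A],
                     [B, np.zeros((B.shape[0], A.shape[1]))]])

one = np.ones((1, 1))
# Eq 7 for Figure 3: ((M_1^2 ⊕ M_2^3) ⊖ M_3^4) ⊕ (M_4^5 ⊕ M_5^6)
M = direct_sum(skew_sum(direct_sum(one, one), one), direct_sum(one, one))
print(M.argmax(axis=1) + 1)  # [2 3 1 4 5]: each word's new position; 'saw' moves to the front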
Intuitively, the permutation matrix of long segments can be constructed by composing permutation matrices of short segments. Motivated by this, we propose a dynamic programming algorithm, which takes advantage of the observation that we can reuse the permutation matrices of short segments when computing permutation matrices of long segments, as shown in Algorithm 1. While the above equation is defined over discrete permutation matrices encoding a single derivation, the algorithm applies recursive rules to expected permutation matrices. Central to the algorithm is the following recursion:
$$E_i^k = \sum_{i<j<k} G_\phi(S_{i,j,k})\,(E_i^j \oplus E_j^k) + G_\phi(I_{i,j,k})\,(E_i^j \ominus E_j^k) \tag{8}$$

where $E_i^k$ is the expected permutation matrix for the segment from i to k, and $G_\phi(R)$ is the probability of employing the production rule R, defined in Eq 6. Overall, Algorithm 1 is a bottom-up method that constructs expected permutation matrices incrementally in Steps 13 and 14, while relying on the probability of the associated production rule. We prove the correctness of this algorithm by induction in the Appendix.
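The marginal branch of Algorithm 1 can be sketched as follows (building on the direct_sum/skew_sum helpers above; G is assumed to hold the PCFG rule probabilities keyed by anchored rule, and this is an illustrative re-implementation rather than the released code):

import numpy as np

def expected_permutation(G, n):
    # G[(i, j, k, 'S')] / G[(i, j, k, 'I')]: PCFG probabilities of Straight/Inverted
    # rules anchored on the segment [i, k) split at j (Eq 6).
    E = {(i, i + 1): np.ones((1, 1)) for i in range(1, n + 1)}   # single words
    for w in range(2, n + 1):                  # segment length
        for i in range(1, n - w + 2):          # start point
            k = i + w                          # end point (exclusive)
            acc = np.zeros((w, w))
            for j in range(i + 1, k):          # split point, accumulating Eq 8
                acc += G[(i, j, k, 'S')] * direct_sum(E[(i, j)], E[(j, k)])
                acc += G[(i, j, k, 'I')] * skew_sum(E[(i, j)], E[(j, k)])
            E[(i, k)] = acc
    return E[(1, n + 1)]                       # expected permutation of the whole sentence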
Hard Reordering: Gumbel-Permutation by Differentiable Sampling
During inference, for efficiency, it is convenient to rely on the most probable derivation $D^*$ and its corresponding most probable y:

$$\arg\max_y\; p_{\theta,\varphi}(y \mid M^{D^*}_{pe} X) \tag{9}$$

where $D^* = \arg\max_D p_\phi(D \mid x)$. The use of discrete permutations $M^{D^*}_{pe}$ during inference and soft reorderings during training leads to a training-inference gap which may be problematic. Inspired by the recent Gumbel-Softmax operator [23,34] that relaxes the sampling procedure of a categorical distribution using the Gumbel-Max trick, we propose a differentiable procedure to obtain an approximate sample $M^D_{pe}$ from $p_\phi(D \mid x)$. Concretely, the Gumbel-Softmax operator relaxes the perturb-and-MAP procedure [42], where we add noise to probability logits and then relax the MAP inference (i.e., arg max in the categorical case); we denote this operator as s_arg max. In our structured case, we perturb the logits of the probabilities of production rules $G_\phi(R)$, and relax the structured MAP inference for our problem. Recall that $p_\phi(D \mid x)$ is converted to a PCFG, and MAP inference for a PCFG is algorithmically similar to marginal inference. Intuitively, for each segment, instead of marginalizing over all possible production rules as in marginal inference, we choose the one with the highest probability (i.e., a local MAP inference with categorical random variables) during MAP inference. By relaxing each local MAP inference with Gumbel-Softmax (Step 8 of Algorithm 1), we obtain a differentiable sampling procedure.⁴ We choose Straight-Through Gumbel-Softmax so that the return of Algorithm 1 is a discrete permutation matrix, and in this way we close the training-inference gap faced by soft reordering.
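A rough PyTorch sketch of the relaxed local choice (a simplified stand-in for s_arg max over the rules anchored on one span, not the released implementation):

import torch
import torch.nn.functional as F

def s_arg_max(rule_probs, tau=1.0):
    # rule_probs: probabilities of all rules anchored on one span
    # (every split point j, Straight and Inverted), summing to one.
    logits = torch.log(rule_probs + 1e-20)
    # Straight-Through Gumbel-Softmax: a one-hot sample in the forward pass,
    # with gradients taken through the softmax relaxation.
    return F.gumbel_softmax(logits, tau=tau, hard=True)

Plugging the returned one-hot weights in place of Ĝ_φ in Steps 13-14 composes a discrete permutation matrix in the forward pass while gradients still flow through the relaxation.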
Summary We propose two efficient algorithms for computing marginals and obtaining samples of separable permutations with their distribution parameterized via BTG. In both algorithms, PCFG plays an important role of decomposing a global problem into sub-problems, which explains why we convert p(D|x) into a PCFG in Eq 6. Relying on the proposed algorithms, we present two relaxations of the discrete permutations that let us induce latent reorderings with end-to-end training. We refer to the resulting system as ReMoto, short for a seq2seq model with Reordered-then-Monotone alignments. Soft-ReMoto and Hard-ReMoto denote the versions which use soft marginal permutations and hard Gumbel permutations, respectively.
Segment-Level Alignments
Segments are considered as the basic elements being manipulated in our reordering module. Concretely, permutation matrices are constructed by hierarchically reordering input segments. SSNT, which is the module on top of our reordering module for monotonically generating output, conceptually also considers segments as basic elements. Intuitively, SSNT alternates between consuming an input segment and generating an output segment. Modeling segments provides a strong inductive bias, reflecting the intuition that sequence transduction in NLP can be largely accomplished by manipulations at the level of segments. In contrast, there is no explicit notion of segments in conventional seq2seq methods.
Table 1: Examples of input-output pairs for parsing tasks.
Dataset      Input                                       Output
Arithmetic   ((1 + 9) * ((7 + 8)/4))                     ((19+)((78+)4/)*)
SCAN-SP      jump twice after walk around left thrice    after(twice(jump), thrice(walk(around, left)))
GeoQuery     how many states do not have rivers ?        count(exclude(state(all), loc_1(river(all))))

[Table 2: Accuracy (%) on the arithmetic and SCAN-SP tasks.]
However, different from our reordering module where segments are first-class objects during modeling, the alternating process of SSNT is realized by a series of token-level decisions (e.g., whether to keep consuming the next input token). Thus, properties of segments (e.g., segment-level features) are not fully exploited in SSNT. In this sense, one potential way to further improve ReMoto is to explore better alternatives to SSNT that can treat segments as first-class objects as well. We leave this direction for future work.
Reordering in Previous Work
In traditional statistical machine translation (SMT), reorderings are typically handled by a distortion model [e.g., 1] in a pipeline manner. Neubig et al. [38], Nakagawa [37] and Stanojević and Sima'an [48] also use BTGs for modeling reorderings. Stanojević and Sima'an [48] go beyond binarized grammars, showing how to support 5-ary branching permutation trees. Still, they assume the word alignments have been produced on a preprocessing step, using an alignment tool [41]. Relying on these alignments, they induce reorderings. Inversely, we rely on latent reordering to induce the underlying word and segment alignments.
Reordering modules have been previously used in neural models, and can be assigned to the following two categories. First, reordering components [8,21] were proposed for neural machine translation. However, they are not structured or sufficiently constrained in the sense that they may produce invalid reorderings (e.g., a word is likely to be moved to more than one new position). In contrast, our module is a principled way of dealing with latent reorderings. Second, the generic permutations (i.e., one-to-one matchings or sorting), though having differentiable counterparts [11,17,36], do not suit our needs as they are defined in terms of tokens, rather than segments. For comparison, in our experiments, we design baselines that are based on Gumbel-Sinkhorn Network [36], which is used previously in NLP (e.g., [33]).
Experiments
First, we consider two diagnostic tasks where we can test the neural reordering module on its own. Then we further assess our general seq2seq model ReMoto on two real-world NLP tasks, namely semantic parsing and machine translation.
Diagnostic Tasks
Arithmetic We design a task of converting an arithmetic expression in infix format to the one in postfix format. An example is shown in Table 1. We create a synthetic dataset by sampling data from a PCFG. In order to generalize, a system needs to learn how to manipulate internal sub-structures (i.e., segments) while respecting well-formedness constraints. This task can be solved by the shunting-yard algorithm but we are interested to see if neural networks can solve it and generalize ood by learning from raw infix-postfix pairs. For standard splits (IID), we randomly sample 20k infix-postfix pairs whose nesting depth is set to be between 1 and 6; 10k, 5k, 5k of these pairs are used as train, dev and test sets, respectively. To test systematic generalization, we create a Length split (LEN) where training and dev examples remain the same as IID splits, but test examples have a nesting depth of 7. In this way, we test whether a system can generalize to unseen longer input.
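As an illustration of the target side of this task, a shunting-yard-style conversion is sketched below (our own sketch, not the data-generation code; note that the dataset's outputs additionally keep the brackets, which this sketch omits):

def infix_to_postfix(tokens):
    # Shunting-yard conversion; all binary operators here are left-associative.
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, ops = [], []
    for t in tokens:
        if t == '(':
            ops.append(t)
        elif t == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()                       # discard '('
        elif t in prec:
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[t]:
                out.append(ops.pop())
            ops.append(t)
        else:                               # operand
            out.append(t)
    return out + ops[::-1]

print(infix_to_postfix(list('((1+9)*((7+8)/4))')))
# ['1', '9', '+', '7', '8', '+', '4', '/', '*']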
SCAN-SP
We use the SCAN dataset [30], which consists of simple English commands coupled with sequences of discrete actions. Here we use the semantic parsing version, SCAN-SP [20], where the goal is to predict programs corresponding to the action sequences. An example is shown in Table 1. As our goal in these experiments is to test the reordering component alone, we remove parentheses and commas in programs. For example, the program after (twice (jump), thrice(walk (around, left))) is converted to a sequence: after twice jump thrice walk around left. In this way, the resulting parentheses-free sequence can be viewed as a reordered sequence of the NL utterance 'jump twice after walk around left thrice'. The grammar of the programs is known, so we can reconstruct the original program from the intermediate parentheses-free sequences using the grammar. Apart from the standard split (IID, aka simple split [30]), we create a Length split (LEN) where the training set contains NL utterances with a maximum length of 5, while utterances in the dev and test sets have a minimum length of 6.⁵

Baselines and Results In both diagnostic tasks, we use ReMoto with a trivial monotonic alignment matrix $M_{mo}$ (an identity matrix) in Eq 3. Essentially, ReMoto becomes a sequence tagging model. We consider three baselines: (1) vanilla seq2seq models with Luong attention [32]; (2) an LSTM-based tagging model, which learns the reordering implicitly and can be viewed as a version of ReMoto with trivial $M_{pe}$ and $M_{mo}$; (3) Sinkhorn Attention, which replaces the permutation matrix of Soft-ReMoto in Eq 4 by Gumbel-Sinkhorn networks [36].
We report results averaged over three runs in Table 2. In both datasets, almost all methods achieve perfect accuracy on the IID splits. However, the baseline systems cannot generalize well to the challenging LEN splits. In contrast, our methods, both Soft-ReMoto and Hard-ReMoto, perform very well on the LEN splits, surpassing the best baseline system by large margins (> 40%). The results indicate that ReMoto, particularly its neural reordering module, has the right inductive bias to learn reorderings. We also test a variant of Soft-ReMoto in which the parameters θ and φ share input embeddings. This variant does not generalize well to the LEN split on the arithmetic task, showing that it is beneficial to split the models of 'syntax' (i.e., alignment) and 'semantics', confirming what has been previously observed [18,44].
Semantic Parsing
Our second experiment is on semantic parsing, where ReMoto models the latent alignment between NL utterances and their corresponding programs. We use the GeoQuery dataset [61], which contains 880 utterance-program pairs. The programs are in variable-free form [25]; an example is shown in Table 1.⁶ Similarly to SCAN-SP, we transform the programs into a parentheses-free form which has better structural correspondence with utterances. Again, we can reconstruct the original programs based on the grammar. An example of such a parentheses-free form is shown in Figure 2. Apart from the standard version, we also experiment with the Chinese and German versions of GeoQuery [24,52]. Since different languages exhibit divergent word orders [51], the results in the multilingual setting will tell us if our model can deal with this variability.
In addition to standard IID splits, we create a LEN split where the training examples have parentheses-free programs with a maximum length of 4, while the dev and test examples have programs with a minimum length of 5. We also experiment with the TEMP split [20], where training and test examples have programs with disjoint templates.
Baselines and Results
Apart from conventional seq2seq models, for comparison we also implemented the syntactic attention model [44]. Our model ReMoto is similar in spirit to syntactic attention: 'syntax' in their model (i.e., alignment) and 'semantics' (i.e., producing the representation relying on the alignment) are separately modeled. In contrast to our structured mechanism for modeling alignments, their syntactic attention still relies on the conventional attention mechanism. We also compare with SSNT, which can be viewed as an ablated version of ReMoto obtained by removing our reordering module.

[Table 3: Exact-match accuracy (%) on three splits (IID, TEMP, LEN) of the multilingual GeoQuery dataset (EN, ZH, DE). Numbers underlined are significantly better than others (p-value ≤ 0.05 using the paired permutation test).]
Results are shown in Table 3. For the challenging TEMP and LEN splits, our best-performing model Hard-ReMoto achieves consistently stronger performance than seq2seq, syntactic attention and SSNT. Thus, our model bridges the gap between conventional seq2seq models and specialized state-of-the-art grammar-based models [20,45].⁷
Machine Translation
Our final experiment is on small-scale machine translation tasks, where ReMoto models the latent alignments between parallel sentences from two different languages. To probe systematic generalization, we also create a LEN split for each language pair in addition to the standard IID splits.
English-Japanese We use the small en-ja dataset extracted from the TANKA Corpus. The original split (IID) has 50k/500/500 examples for train/dev/test with lengths 4-16 words.⁸
Baselines and Results
In addition to the conventional seq2seq model, we compare with the original SSNT model, which only accounts for monotonic alignments. We also implemented a variant that combines SSNT with the local reordering module [21] as a baseline, to show the advantage of our structured reordering module.
Results are shown in Table 4. Our model, especially Hard-ReMoto, consistently outperforms the other baselines on both splits. In EN-JA translation, the advantage of our best-performing Hard-ReMoto is slightly more pronounced in the LEN split than in the IID split. In ZH-EN translation, while SSNT and its variant do not outperform seq2seq in the LEN split, ReMoto can still achieve better results than seq2seq. These results show that our model is better than its alternatives at generalizing to longer sentences for machine translation.

[Table 5: Output examples of Chinese semantic parsing and English-Japanese translation. For clarity, the input words are labeled with position indices, and, for semantic parsing, with English translations. A prediction consists of multiple segments, each annotated with a superscript referring to input tokens.
Semantic parsing. prediction: state 4 | next_to_2 9 | longest river 6,7,8 | loc_2 countryid_ENTITY 5,3,2; ground truth: state next_to_2 longest river loc_2 countryid_ENTITY.
Translation. original input: according 1 to 2 the 3 newspaper 4 , 5 there 6 was 7 a 8 big 9 fire 10 last 11 night 12; reordered input: according 1 to 2 the 3 newspaper 4 , 5 night 12 last 11 big 9 fire 10 a 8 there 6 was 7; prediction: 新によれば、 1,2,3,4,5 | 昨夜 12 | 大 11,9 | 火事 10 | があ 8,6 | った 7; ground truth: 新によると昨夜大火事があった.]
Interpretability Latent alignments, apart from promoting systematic generalization, also lead to better interpretability as discrete alignments reveal the internal process for generating output. For example, in Table 5, we show a few examples from our model. Each output segment is associated with an underlying rationale, i.e. a segment of the reordered input.
Conclusion and Future Work
In this work, we propose a new general seq2seq model that accounts for latent segment-level alignments. Central to this model is a novel structured reordering module which is coupled with existing modules to handle non-monotonic segment alignments. We model reorderings as separable permutations and propose an efficient dynamic programming algorithm to perform marginal inference and sampling. It allows latent reorderings to be induced with end-to-end training. Empirical results on both synthetic and real-world datasets show that our model can achieve better systematic generalization than conventional seq2seq models.
The strong inductive bias introduced by modeling alignments in this work could be potentially beneficial in weakly-supervised and low-resource settings, such as weakly-supervised semantic parsing and low-resource machine translation, where conventional seq2seq models usually do not perform well.

[Figure 4: The detailed architecture of our seq2seq model for semantic parsing (view in color). First, the structured reordering module generates a (relaxed) permutation matrix given the input utterance. Then, the encoding module generates the representations of the input utterance based on the reordered embeddings, which are computed based on the original embeddings and the permutation matrix computed in the first step. Finally, the decoding module, namely SSNT, generates the output program monotonically based on the input encodings.]
$$E_i^k = \mathbb{E}_{p(D_i^k)}\big[M(D_i^k)\big] = \sum_{i<j<k} G_\phi(S_{i,j,k})\Big(\mathbb{E}_{p(D_i^j)}\big[M(D_i^j)\big] \oplus \mathbb{E}_{p(D_j^k)}\big[M(D_j^k)\big]\Big) + G_\phi(I_{i,j,k})\Big(\mathbb{E}_{p(D_i^j)}\big[M(D_i^j)\big] \ominus \mathbb{E}_{p(D_j^k)}\big[M(D_j^k)\big]\Big) = \sum_{i<j<k} G_\phi(S_{i,j,k})\,(E_i^j \oplus E_j^k) + G_\phi(I_{i,j,k})\,(E_i^j \ominus E_j^k)$$

where in the second step we consider all the possible expansions of the derivation tree $D_i^k$; in the third step, we obtain the recursion that is used in Steps 12-14 of Algorithm 1 by reusing the marginal permutation matrices of shorter segments.
A.3 Architecture and Hyperparameters
The detailed architecture of ReMoto is shown in Figure 4. In the structured reordering module, we compute the scores for BTG production rules using span embeddings [55] followed by a multi-layer perceptron. Specifically, the score function for each rule has the form $G(R_{i,j,k}) = \mathrm{MLP}(s_{ij}, s_{jk})$, where $s_{ij}$ and $s_{jk}$ are the span embeddings based on [55], and MLP is a multi-layer perceptron that outputs a 2-dimensional vector corresponding to the scores of R = Straight and R = Inverted, respectively. Similar to a conventional LSTM-based encoder-decoder model, the LSTMs used in the structured reordering and encoding modules are bidirectional, whereas the LSTM for decoding (within SSNT) is unidirectional. We implemented all models using PyTorch [43]. The main hyperparameters we tuned are shown in Table 6. The full hyperparameters for each experiment will be released along with the code.
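A rough PyTorch sketch of such a rule scorer (an illustrative module of our own; the span-embedding construction of [55] is abstracted here into a simple difference of boundary states):

import torch
import torch.nn as nn

class RuleScorer(nn.Module):
    # Scores the Straight/Inverted rules anchored on (i, j, k) from two span embeddings.
    def __init__(self, hidden_size):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_size, hidden_size),
                                 nn.ReLU(),
                                 nn.Linear(hidden_size, 2))   # [Straight, Inverted]

    def forward(self, states, i, j, k):
        # states: (n + 1, hidden) boundary representations from a (bidirectional) LSTM;
        # a simple span embedding is the difference of boundary states.
        s_ij = states[j] - states[i]
        s_jk = states[k] - states[j]
        return self.mlp(torch.cat([s_ij, s_jk], dim=-1))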
A.4 Training Strategy
Empirically, we found that during training the structured reordering module tends to converge to a sub-optimal point where it develops a simple reordering strategy and the subsequent modules (i.e., the encoding and decoding module in Figure 4) quickly adapt to naive reorderings. For example, in the EN-JA translation task, the reordering module tends to completely invert the input English translation after training. While this simple strategy proves to be a useful heuristic [26], we would like more accurate reordering to emerge during training. This issue is similar to posterior collapse [6], a common issue in training variational autoencoders.
Inspired by He et al. [19], we speculate that the issue occurs because the optimization of the structured reordering module usually lags far behind the optimization of the subsequent modules during the initial stages of training. We use a simple training strategy to alleviate the issue. Specifically, during the initial M training steps, with a certain probability p, we only update the parameters of the structured reordering module and ignore the gradients of the parameters of the subsequent modules. M and p are treated as hyperparameters. With this strategy, the structured reordering module is updated more often than the subsequent modules, and has a better chance to catch up with their optimization. We find that this simple training strategy usually leads to better segment alignments and better performance.
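The alternating update schedule can be sketched as follows (a simplified illustration with hypothetical names and default values, not the released training loop):

import random

def train_step(step, loss, other_params, opt, M=5000, p=0.5):
    opt.zero_grad()
    loss.backward()
    if step < M and random.random() < p:
        # Early in training, with probability p only the structured reordering
        # module is updated; clearing these gradients makes opt.step() skip the
        # other modules, letting the reordering module catch up.
        for param in other_params:
            param.grad = None
    opt.step()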
We create a LEN split where the English sentences of training examples have a maximum length 12 whereas the English sentences in dev/test have a minimum length 13. The LEN split has 50k/538/538 examples for train/dev/test, respectively.

Chinese-English We extract a subset from the FBIS corpus (LDC2003E14) by filtering English sentences with length 4-30. We randomly shuffle the resulting data to obtain an IID split which has 141k/3k/3k examples for train/dev/test, respectively. In addition, we create a LEN split where English sentences of training examples have a maximum length 29 whereas the English sentences of dev/test examples have a length 30. The LEN split has 140k/4k/4k examples as train/dev/test sets, respectively.

Table 4: BLEU scores on the EN-JA and ZH-EN translation.
                         EN-JA           ZH-EN
Model                    IID    LEN      IID    LEN
Seq2Seq                  35.6   25.3     21.4   18.1
SSNT [60]                36.3   26.5     20.5   17.3
Local Reordering [21]    36.0   27.1     21.8   17.8
Soft-ReMoto              36.6   27.5     22.3   19.2
Hard-ReMoto              37.4   28.7     22.6   19.5
Table 6: Main hyperparameters of ReMoto.
Name                               Range
embedding size                     [128, 256, 512]
number of encoder LSTM layers      [1, 2]
encoder LSTM hidden size           [128, 256, 512]
number of decoder LSTM layers      [1, 2]
decoder LSTM hidden size           [128, 256, 512]
decoder dropout                    [0.1, 0.3, 0.5, 0.7, 0.9]
temperature of Gumbel-softmax      [0.1, 1, 2, 10]
label smoothing                    [0.0, 0.1]
Footnotes
¹ Our code and data are available at https://github.com/berlino/tensor2struct-public.
² One of our models (see Section 3.2) still has a flavor of standard continuous attention in that it approximates discrete alignments with a continuous expectation.
³ In our initial experiments, we found that SWAN works as well as SSNT but is considerably slower.
⁴ If we replace s_arg max with arg max in Step 8 of Algorithm 1, we obtain the algorithm for exact MAP inference.
⁵ Since we use the program form, the original length split [30], which is based on the length of the action sequence, is not very suitable in our experiments.
⁶ We use the variable-free form, as opposed to other alternatives such as lambda calculus, for two reasons: 1) variable-free programs have been commonly used in systematic generalization settings [20,45], probably because it is easier to construct generalization splits in this form; 2) the variable-free form is more suitable for modeling alignments, since variables in programs usually make alignments hard to define.
⁷ NQG [45] achieves 35.0% on the English LEN split, and SBSP [20] (without lexicon) achieves 65.9% on the English TEMP split in execution accuracy. Both models are augmented with pre-trained representations (BERT).
⁸ https://github.com/odashi/small_parallel_enja
Acknowledgements

We thank Miloš Stanojević and Khalil Sima'an for their valuable comments; Lei Yu and Chris Dyer for providing the preprocessed data for machine translation; and the anonymous reviewers for their helpful feedback. We gratefully acknowledge the support of the European Research Council (Titov: ERC StG BroadSem 678254; Lapata: ERC CoG TransModal 681760) and the Dutch National Science Foundation (NWO VIDI 639.022.518).

A Appendix

A.1 WCFG to PCFG Conversion

The algorithm for converting a WCFG to its equivalent PCFG is shown in Algorithm 2. In a bottom-up manner, the algorithm first computes an inner weight β[X_i^k] for each segment, which is the total weight of all derivations with root X_i^k. Then the algorithm normalizes the weight of production rules whose left-hand side is X_i^k using the inner weight. The resulting normalized weight for a production rule, e.g., G_φ(S_{i,j,k}) = f_φ(S_{i,j,k}) β[X_i^j] β[X_j^k] / β[X_i^k], is the conditional probability of applying the rule X_i^k → X_i^j X_j^k (Straight) given the presence of the segment X_i^k. The PCFG is equivalent to the original WCFG in the sense that for each derivation D we have p_φ(D | x) = ∏_{R∈D} G_φ(R) = ∏_{R∈D} f_φ(R) / Z(x, φ). A full proof of this equivalence can be found in Smith and Johnson [46]. The factorization of the derivation-level probability into rule-level probabilities facilitates our design of the dynamic programming algorithm for marginal inference.

[Algorithm 2 (WCFG to PCFG conversion): iterating bottom-up over segment lengths w, start points i and end points k = i + w, the algorithm first accumulates the inner weights β[X_i^k] over split points j (compute inner weight), then normalizes the weights of the rules anchored on X_i^k by β[X_i^k] (normalize weight), and finally returns the normalized rule probabilities G[. . .].]

A.2 Proof of the Dynamic Programming for Marginal Inference

We prove the correctness of the dynamic programming algorithm for computing the marginal permutation matrix of separable permutations by induction as follows.

Proof. As a base case, each word (i.e., a segment of length 1) is associated with an identity permutation matrix 1. Then we assume that the marginal permutation matrix for all segments with length 1 < k − i < n is E_i^k, which is defined as E_{p(D_i^k)}[M(D_i^k)], where D_i^k is the derivation tree of the segment from i to k, and M(D_i^k) is the permutation matrix corresponding to D_i^k. It then follows that the marginal permutation matrix for a segment of length n can be obtained in the same way: expanding E_{p(D_i^k)}[M(D_i^k)] over the possible top-level rules yields exactly the recursion in Eq (8), which is what Steps 12-14 of Algorithm 1 compute.
References

[1] Yaser Al-Onaizan and Kishore Papineni. Distortion models for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 529-536, 2006.
[2] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 39-48, 2016.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[4] Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. Systematic generalization: what is required and can it be learned? In ICLR, 2019.
[5] Prosenjit Bose, Jonathan F Buss, and Anna Lubiw. Pattern matching for permutations. Information Processing Letters, 65(5):277-283, 1998.
[6] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
[7] Peter F Brown, Stephen A Della Pietra, Vincent J Della Pietra, and Robert L Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311, 1993.
[8] Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. Neural machine translation with reordering embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1787-1799, 2019.
[9] Noam Chomsky. Aspects of the theory of syntax. Number 11 in Massachusetts Institute of Technology, Research Laboratory of Electronics, special technical reports. The MIT Press, Cambridge, Massachusetts, 50th anniversary edition, 1965. ISBN 978-0-262-52740-8.
[10] Caio Corro and Ivan Titov. Learning latent trees with stochastic perturbations and differentiable dynamic programming. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5508-5521, Florence, Italy, July 2019. Association for Computational Linguistics.
[11] Marco Cuturi, Olivier Teboul, and Jean-Philippe Vert. Differentiable ranks and sorting using optimal transport. arXiv preprint arXiv:1905.11885, 2019.
[12] Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander M Rush. Latent alignment and variational attention. arXiv preprint arXiv:1807.03756, 2018.
[13] Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. Improving text-to-SQL evaluation methodology. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 351-360, Melbourne, Australia, July 2018. Association for Computational Linguistics.
[14] Jerry A Fodor and Zenon W Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3-71, 1988.
[15] Yao Fu, Chuanqi Tan, Bin Bi, Mosha Chen, Yansong Feng, and Alexander M Rush. Latent template induction with Gumbel-CRFs. arXiv preprint arXiv:2011.14244, 2020.
[16] Emily Goodwin, Koustuv Sinha, and Timothy J O'Donnell. Probing linguistic systematicity. arXiv preprint arXiv:2005.04315, 2020.
[17] Aditya Grover, Eric Wang, Aaron Zweig, and Stefano Ermon. Stochastic optimization of sorting networks via continuous relaxations. arXiv preprint arXiv:1903.08850, 2019.
[18] Serhii Havrylov, Germán Kruszewski, and Armand Joulin. Cooperative learning of disjoint syntax and semantics. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1118-1128, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
[19] Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. Lagging inference networks and posterior collapse in variational autoencoders. arXiv preprint arXiv:1901.05534, 2019.
[20] Jonathan Herzig and Jonathan Berant. Span-based semantic parsing for compositional generalization. arXiv preprint arXiv:2009.06040, 2020.
[21] Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. Towards neural phrase-based machine translation. arXiv preprint arXiv:1706.05565, 2017.
[22] Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. The compositionality of neural networks: integrating symbolism and connectionism. arXiv preprint arXiv:1908.08351, 2019.
[23] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
Semantic parsing with Bayesian tree transducers. Bevan Jones, Mark Johnson, Sharon Goldwater, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. the 50th Annual Meeting of the Association for Computational LinguisticsJeju Island, KoreaAssociation for Computational Linguistics1Long Papers)Bevan Jones, Mark Johnson, and Sharon Goldwater. Semantic parsing with Bayesian tree transducers. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 488-496, Jeju Island, Korea, July 2012. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P12-1051.
Learning to transform natural to formal languages. J Rohit, Yuk Wah Kate, Raymond J Wong, Mooney, AAAI. 5Rohit J Kate, Yuk Wah Wong, and Raymond J Mooney. Learning to transform natural to formal languages. In AAAI, volume 5, pages 1062-1068, 2005.
Syntactic reordering in preprocessing for japanese english translation: Mit system description for ntcir-7 patent translation task. Jason Katz, - Brown, Michael Collins, NTCIR. Jason Katz-Brown and Michael Collins. Syntactic reordering in preprocessing for japanese english translation: Mit system description for ntcir-7 patent translation task. In NTCIR, 2008.
Measuring compositional generalization: A comprehensive method on realistic data. Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, arXiv:1912.09713arXiv preprintDaniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashu- bin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, et al. Measuring compositional generalization: A comprehensive method on realistic data. arXiv preprint arXiv:1912.09713, 2019.
Yoon Kim, Carl Denton, Luong Hoang, Alexander M Rush, arXiv:1702.00887Structured attention networks. arXiv preprintYoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. Structured attention networks. arXiv preprint arXiv:1702.00887, 2017.
Moses: Open source toolkit for statistical machine translation. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, Evan Herbst, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume the Demo and Poster SessionsPrague, Czech RepublicAssociation for Computational LinguisticsPhilipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177- 180, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P07-2045.
Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. Brenden Lake, Marco Baroni, International Conference on Machine Learning. PMLRBrenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning, pages 2873-2882. PMLR, 2018.
Rearranging the familiar: Testing compositional generalization in recurrent networks. Joao Loula, Marco Baroni, M Brenden, Lake, arXiv:1807.07545arXiv preprintJoao Loula, Marco Baroni, and Brenden M Lake. Rearranging the familiar: Testing composi- tional generalization in recurrent networks. arXiv preprint arXiv:1807.07545, 2018.
Effective approaches to attentionbased neural machine translation. Minh-Thang Luong, Hieu Pham, Christopher D Manning, arXiv:1508.04025arXiv preprintMinh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
AMR parsing as graph prediction with latent alignment. Chunchuan Lyu, Ivan Titov, 10.18653/v1/P18-1037Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics1Long Papers)Chunchuan Lyu and Ivan Titov. AMR parsing as graph prediction with latent alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 397-407, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1037. URL https://www.aclweb.org/ anthology/P18-1037.
Andriy Chris J Maddison, Yee Whye Mnih, Teh, arXiv:1611.00712The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprintChris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Foundations of statistical natural language processing. Christopher Manning, Hinrich Schutze, MIT pressChristopher Manning and Hinrich Schutze. Foundations of statistical natural language process- ing. MIT press, 1999.
Learning latent permutations with Gumbel-Sinkhorn networks. Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek, arXiv:1802.08665arXiv preprintGonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. Learning latent permuta- tions with Gumbel-Sinkhorn networks. arXiv preprint arXiv:1802.08665, 2018.
Efficient top-down BTG parsing for machine translation preordering. Tetsuji Nakagawa, 10.3115/v1/P15-1021Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaAssociation for Computational Linguistics1Long Papers)Tetsuji Nakagawa. Efficient top-down BTG parsing for machine translation preordering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 208-218, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1021. URL https://www.aclweb.org/anthology/P15-1021.
Inducing a discriminative parser to optimize machine translation reordering. Graham Neubig, Taro Watanabe, Shinsuke Mori, Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language LearningJeju Island, KoreaAssociation for Computational LinguisticsGraham Neubig, Taro Watanabe, and Shinsuke Mori. Inducing a discriminative parser to opti- mize machine translation reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 843-853, Jeju Island, Korea, July 2012. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/D12-1077.
Sparsemap: Differentiable sparse structured inference. Vlad Niculae, Andre Martins, Mathieu Blondel, Claire Cardie, International Conference on Machine Learning. PMLRVlad Niculae, Andre Martins, Mathieu Blondel, and Claire Cardie. Sparsemap: Differentiable sparse structured inference. In International Conference on Machine Learning, pages 3799- 3808. PMLR, 2018.
Learning conditioned graph structures for interpretable visual question answering. Will Norcliffe-Brown, Efstathios Vafeias, Sarah Parisot, arXiv:1806.07243arXiv preprintWill Norcliffe-Brown, Efstathios Vafeias, and Sarah Parisot. Learning conditioned graph structures for interpretable visual question answering. arXiv preprint arXiv:1806.07243, 2018.
A systematic comparison of various statistical alignment models. Josef Franz, Hermann Och, Ney, 10.1162/089120103321337421Computational Linguistics. 291Franz Josef Och and Hermann Ney. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51, 2003. doi: 10.1162/089120103321337421. URL https://www.aclweb.org/anthology/J03-1002.
Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. George Papandreou, Alan L Yuille, 2011 International Conference on Computer Vision. IEEEGeorge Papandreou and Alan L Yuille. Perturb-and-map random fields: Using discrete optimiza- tion to learn and sample from energy models. In 2011 International Conference on Computer Vision, pages 193-200. IEEE, 2011.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, arXiv:1912.01703An imperative style, high-performance deep learning library. arXiv preprintAdam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703, 2019.
Compositional generalization in a deep seq2seq model by separating syntax and semantics. Jake Russin, Jason Jo, C Randall, Yoshua O'reilly, Bengio, arXiv:1904.09708arXiv preprintJake Russin, Jason Jo, Randall C O'Reilly, and Yoshua Bengio. Compositional generalization in a deep seq2seq model by separating syntax and semantics. arXiv preprint arXiv:1904.09708, 2019.
Compositional generalization and natural language variation: Can a semantic parsing approach handle both?. Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova, arXiv:2010.12725arXiv preprintPeter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? arXiv preprint arXiv:2010.12725, 2020.
Weighted and probabilistic context-free grammars are equally expressive. A Noah, Mark Smith, Johnson, Computational Linguistics. 334Noah A Smith and Mark Johnson. Weighted and probabilistic context-free grammars are equally expressive. Computational Linguistics, 33(4):477-491, 2007.
Semi-supervised recursive autoencoders for predicting sentiment distributions. Richard Socher, Jeffrey Pennington, H Eric, Andrew Y Huang, Christopher D Ng, Manning, Proceedings of the 2011 conference on empirical methods in natural language processing. the 2011 conference on empirical methods in natural language processingRichard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 conference on empirical methods in natural language processing, pages 151-161, 2011.
Reordering grammar induction. Miloš Stanojević, Khalil Sima'an, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingMiloš Stanojević and Khalil Sima'an. Reordering grammar induction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 44-54, 2015.
Formal basis of a language universal. Miloš Stanojević, Mark Steedman, Computational Linguistics. Miloš Stanojević and Mark Steedman. Formal basis of a language universal. Computational Linguistics, pages 1-34, 2018.
The syntactic process. Mark Steedman, MIT press24Cambridge, MAMark Steedman. The syntactic process, volume 24. MIT press Cambridge, MA, 2000.
A formal universal of natural language grammar. Mark Steedman, Language. 963Mark Steedman. A formal universal of natural language grammar. Language, 96(3):618-660, 2020.
Semantic parsing with neural hybrid trees. Raymond Hendy Susanto, Wei Lu, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence31Raymond Hendy Susanto and Wei Lu. Semantic parsing with neural hybrid trees. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
Superglue: A stickier benchmark for general-purpose language understanding systems. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, arXiv:1905.00537arXiv preprintAlex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019.
Sequence modeling via segmentations. Chong Wang, Yining Wang, Po-Sen Huang, Abdelrahman Mohamed, Dengyong Zhou, Li Deng, International Conference on Machine Learning. PMLRChong Wang, Yining Wang, Po-Sen Huang, Abdelrahman Mohamed, Dengyong Zhou, and Li Deng. Sequence modeling via segmentations. In International Conference on Machine Learning, pages 3674-3683. PMLR, 2017.
Graph-based dependency parsing with bidirectional lstm. Wenhui Wang, Baobao Chang, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsLong Papers1Wenhui Wang and Baobao Chang. Graph-based dependency parsing with bidirectional lstm. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2306-2315, 2016.
Learning for semantic parsing with statistical machine translation. Yuk Wah Wong, Raymond Mooney, Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. the Human Language Technology Conference of the NAACL, Main ConferenceNew York City, USAAssociation for Computational LinguisticsYuk Wah Wong and Raymond Mooney. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 439-446, New York City, USA, June 2006. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/N06-1056.
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Dekai Wu, Computational linguistics. 233Dekai Wu. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377-403, 1997.
Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, International conference on machine learning. PMLRKelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048-2057. PMLR, 2015.
Learning to compose words into sentences with reinforcement learning. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, Wang Ling, arXiv:1611.09100arXiv preprintDani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. Learning to compose words into sentences with reinforcement learning. arXiv preprint arXiv:1611.09100, 2016.
Online segment to segment neural transduction. Lei Yu, Jan Buys, Phil Blunsom, arXiv:1609.08194arXiv preprintLei Yu, Jan Buys, and Phil Blunsom. Online segment to segment neural transduction. arXiv preprint arXiv:1609.08194, 2016.
Learning to parse database queries using inductive logic programming. M John, Raymond J Zelle, Mooney, Proceedings of the national conference on artificial intelligence. the national conference on artificial intelligenceJohn M Zelle and Raymond J Mooney. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050-1055, 1996.
Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. S Luke, Michael Zettlemoyer, Collins, arXiv:1207.1420arXiv preprintLuke S Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420, 2012.
| [
"https://github.com/berlino/tensor2struct-public.2",
"https://github.com/odashi/small_parallel_enja"
] |
[
"WIDAR -Weighted Input Document Augmented ROUGE",
"WIDAR -Weighted Input Document Augmented ROUGE"
] | [
"Raghav Jain raghavjain106@gmail.com \nIndian Institute of Technology Patna\nIndia\n",
"Vaibhav Mavi vaibhavg152@gmail.com \nNew York University\nUnited States\n",
"Anubhav Jangra anubhav0603@gmail.com \nIndian Institute of Technology Patna\nIndia\n",
"Sriparna Saha sriparna.saha@gmail.com \nIndian Institute of Technology Patna\nIndia\n"
] | [
"Indian Institute of Technology Patna\nIndia",
"New York University\nUnited States",
"Indian Institute of Technology Patna\nIndia",
"Indian Institute of Technology Patna\nIndia"
] | [] | The task of automatic text summarization has gained a lot of traction due to the recent advancements in machine learning techniques. However, evaluating the quality of a generated summary remains to be an open problem. The literature has widely adopted Recall-Oriented Understudy for Gisting Evaluation (ROUGE) as the standard evaluation metric for summarization. However, ROUGE has some long-established limitations; a major one being its dependence on the availability of good quality reference summary. In this work, we propose the metric WIDAR which in addition to utilizing the reference summary uses also the input document in order to evaluate the quality of the generated summary. The proposed metric is versatile, since it is designed to adapt the evaluation score according to the quality of the reference summary. The proposed metric correlates better than ROUGE by 26%, 76%, 82%, and 15%, respectively, in coherence, consistency, fluency, and relevance on human judgement scores provided in the SummEval dataset. The proposed metric is able to obtain comparable results with other state-of-the-art metrics while requiring a relatively short computational time 3 . | 10.1007/978-3-030-99736-6_21 | [
"https://arxiv.org/pdf/2201.09282v1.pdf"
] | 246,240,436 | 2201.09282 | e4fb01a4459d6c8c4f3c5bacf1fdc5a893512f87 |
WIDAR -Weighted Input Document Augmented ROUGE
Raghav Jain raghavjain106@gmail.com
Indian Institute of Technology Patna
India
Vaibhav Mavi vaibhavg152@gmail.com
New York University
United States
Anubhav Jangra anubhav0603@gmail.com
Indian Institute of Technology Patna
India
Sriparna Saha sriparna.saha@gmail.com
Indian Institute of Technology Patna
India
WIDAR - Weighted Input Document Augmented ROUGE
summarization · evaluation metric · ROUGE
The task of automatic text summarization has gained a lot of traction due to the recent advancements in machine learning techniques. However, evaluating the quality of a generated summary remains an open problem. The literature has widely adopted Recall-Oriented Understudy for Gisting Evaluation (ROUGE) as the standard evaluation metric for summarization. However, ROUGE has some long-established limitations, a major one being its dependence on the availability of a good-quality reference summary. In this work, we propose the metric WIDAR which, in addition to utilizing the reference summary, also uses the input document in order to evaluate the quality of the generated summary. The proposed metric is versatile, since it is designed to adapt the evaluation score according to the quality of the reference summary. The proposed metric correlates better than ROUGE by 26%, 76%, 82%, and 15%, respectively, in coherence, consistency, fluency, and relevance on human judgement scores provided in the SummEval dataset. The proposed metric obtains results comparable with other state-of-the-art metrics while requiring a relatively short computational time.
Introduction
Accessibility of the internet has led to a massive increase in the content available to users, making it difficult to obtain the required information. This seemingly perpetual growth of information necessitates automatic text summarization tools. Text summarization can be described as the task of generating a fluent and human-readable summary while preserving the essence of the original text documents. Evaluation of these automatically generated summaries has been actively explored by the research community for over 5 decades [8]. Since then, various attempts have been made to quantify the effectiveness of summarization systems; however, the evaluation task still remains an open problem to this day.
The most widely adopted evaluation metric for text summarization in the community is Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [25], which is mainly based on the n-gram overlap between the generated summary and the reference summary. However, ROUGE's dependency on a good-quality reference summary is one of its biggest drawbacks. Fabbri et al. [10] highlighted the inconsistency in the quality of some reference summaries in the CNN/DailyMail dataset [26] by describing summaries that consist of clickbaits instead of being truthful and informative with respect to the input article (refer to Fig. 1). The implementation of WIDAR is available at https://github.com/Raghav10j/WIDAR.
Kryscinski et al. [21] also reported this issue of reference summaries containing irrelevant information, such as links to other articles, or factual inconsistency in the Newsroom dataset [14]. Even if a reference summary is of satisfactory quality, it is highly unlikely to be the only acceptable summary of that document, as different people tend to produce different summaries for the same document [27,34]. All the above-mentioned observations imply that sole dependence on the reference summary is not optimal for an evaluation metric. Therefore, we propose an evaluation metric that also considers the input source document while evaluating the quality of its summary.
Input Document 1:
Last week she was barely showing -but Demelza Poldark is now the proud mother to the show's latest addition. Within ten minutes of tomorrow night's episode, fans will see Aidan Turner's dashing Ross Poldark gaze lovingly at his new baby daughter. As Sunday night's latest heartthrob, women across the country have voiced their longing to settle down with the brooding Cornish gentleman -but unfortunately it seems as if his heart is well and truly off the market. Scroll down for video Last week she was barely showing -but Demelza Poldark is now the proud mother to the show's latest addition He may have married his ……...
Reference Summary:
SPOILER ALERT: Maid gives birth to baby on Sunday's episode. Only announced she was pregnant with Poldark's baby last week.
Generated Summary:
demelza poldark is now the proud mother to the show's latest addition . fans will see aidan turner's dashing ross poldark gaze lovingly at his new baby daughter . sunday night's latest heartthrob , women across the country have voiced their longing to settle down with the brooding cornish gentleman .
Input Document 2:
Eight Iranian border guards have been killed in clashes with militants near the border with Pakistan, Iranian state media reported. Three of the militants were killed by Iranian forces in the fighting Monday in the southeastern town of Negur, the state-run news agency IRNA reported. The news agency cited Ali Asghar Mirshekari, the deputy governor of Iran's Sistan-Baluchestan province, who said the militants crossed into the country from Pakistan. Iranian officials ……...
Reference Summary:
The Pakistani government says its security agencies are investigating. A group believed to be based in Pakistan's Balochistan province claims responsibility.
Generated Summary:
three of the militants were killed by iranian forces in the southeastern town of negur . a militant group called jaish al adal claimed responsibility for the attack . jaish al adal has also claimed responsibility for attacks on iranian territory .
In order to design an evaluation metric, it is important to study what constitutes a good summary. Ideally, a summary must be coherent, non-redundant, fluent, consistent, and relevant to the input article [6]. Using these characteristics, recent works have attempted to quantify and compare the performance of existing evaluation metrics [1,10]. These works highlight the limitations of existing metrics and offer various resources for conducting further research on the evaluation task. One such work is the SummEval dataset [10], which provides human annotation scores for coherence, consistency, fluency, and relevance.
In this paper, we propose the evaluation metric WIDAR (Weighted Input Document Augmented ROUGE) in an attempt to overcome the above-mentioned limitations of ROUGE (refer to Fig. 2). The proposed metric utilizes both the reference summary and the input document to measure the quality of a generated summary. WIDAR introduces the idea of weighted ROUGE, which relies on weighting sentences in the reference summary based on information coverage and redundancy within the summary. Through experiments, we illustrate that WIDAR is able to outperform ROUGE by a large margin, and is able to obtain results comparable with other state-of-the-art metrics while requiring a relatively short computational time.
Related Works
The approaches to text summarization can be broadly classified into two categories: extractive methods [22,35,29] and abstractive methods [4,38,16]. Summarization research today has expanded into more complex problems like multilingual summarization [15,36], multi-modal summarization [17,18,19,20], across-time summarization [7], etc.
Numerous evaluation metrics have been proposed to assess summarization systems. Some of them are based on text matching between predicted summary and reference summary such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [25], ParaEval [46], ROUGE 2.0 [11], Metric for Evaluation of Translation with Explicit ORdering (METEOR) [24], Bilingual Evaluation Understudy (BLEU) score [30], Character n-gram F-score (CHRF) [33], Consensus-based Image Description Evaluation (CIDEr) [43] etc. There are also evaluation metrics that try to capture semantic similarity including word embeddings based techniques such as Word Mover similarity (WMS) [23], Mover-Score [45], Sentence Mover Similarity (SMS) [5], ROUGE-WE [28], ELMo-m [41], automated pyramid metric [31] and graph based techniques such as graph based ROUGE (ROUGE-G) [39] and AUTOmatic SUMMary Evaluation based on N-gram Graphs (AutoSummENG) [13]. Other than these, there are also model based learned metrics such as Supervised Summarization Scorer (S 3 ) [32], BERTScore [44], NeuralTD [2], Support Vector Regression (SVR) [40] and question answering based metrics such as Answering Performance for Evaluation of Summaries (APES) [9] and Semantic QA [3]. In unsupervised settings where evaluation is carried out on the basis of input document rather than depending on a reference summary, SummaQA [37], summarization evaluation with pseudo references and BERT (SUPERT) [12] and BLANC [42] are some of the most recent and state-of-the-art metrics.
WIDAR Evaluation Metric
We propose WIDAR (Weighted Input Document Augmented ROUGE), an evaluation metric that utilizes both the reference summary (R) and the input document (D) to judge the quality of a generated summary (S). For better understanding, we divide our approach into two steps: 1) calculation of Weighted ROUGE (Section 3.1), and 2) combination of Weighted ROUGE with a similarity score computed between the generated summary and the input document to obtain WIDAR (Section 3.2). Table 1 lists the notations used in the remainder of this paper.
Weighted ROUGE
As discussed in Section 1, ROUGE is highly dependent on the quality of the reference summary to perform effectively. However, in real-world scenarios, a high-quality reference summary is not assured. Therefore, we introduce two special weights for each reference summary sentence to penalize/reward the quality of information present in this sentence. Each reference summary sentence r_i is assigned two scores: 1) Coverage weight (w_{cov_i}) - based on the input document information that is covered by r_i, and 2) Redundancy weight (w_{red_i}) - based on the uniqueness of information presented by r_i in the reference summary. We use Algorithm 1 (see footnote 4) to compute the redundancy and coverage weights for all sentences in the reference summary. We then obtain the overall weight w_i (see footnote 5) for r_i by computing the average of w_{cov_i} and w_{red_i}.
w_i = ((w_{cov_i} + w_{red_i}) / 2) × |R|    (1)

Footnote 4: Here θ_1 and θ_2 are the ROUGE-L thresholds for coverage and redundancy, respectively.
Footnote 5: We multiply the final weights by the number of sentences in the reference summary, |R|, to ensure that the sum of the weights remains the same as in plain ROUGE, i.e., Σ_i w_i = |R|.
Algorithm 1: Calculating the coverage and redundancy weights.
Input: R = {r_i}, D = {d_j}
Output: W_cov = {w_{cov_i}}, W_red = {w_{red_i}}
W_cov, W_red ← empty lists
for r_i in R do
    w_{cov_i} = 0
    for d_j in D do
        if ROUGE-L_r(r_i, d_j) ≥ θ_1 then
            w_{cov_i}++
        end
    end
    W_cov ← w_{cov_i} / |D|
end
for r_i in R do
    w_{red_i} = 0
    for r_j in R do
        if r_i ≠ r_j and ROUGE-L_r(r_i, r_j) ≥ θ_2 then
            w_{red_i}++
        end
    end
    W_red ← 1 − (w_{red_i} / |R|)
end
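For concreteness, the following is a minimal Python sketch of Algorithm 1 together with Eq. 1. It is illustrative only and is not taken from the official implementation; the helper rouge_l_recall is a simple stand-in for any sentence-level ROUGE-L recall scorer.

def lcs_len(a, b):
    # Length of the longest common subsequence between two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_recall(ref_sent, other_sent):
    # ROUGE-L recall of ref_sent with respect to other_sent.
    ref, other = ref_sent.split(), other_sent.split()
    return lcs_len(ref, other) / max(len(ref), 1)

def sentence_weights(reference_sents, document_sents, theta1=0.1, theta2=0.3):
    n_ref, n_doc = len(reference_sents), len(document_sents)
    # Coverage weight: fraction of document sentences "covered" by r_i (Algorithm 1).
    w_cov = []
    for r in reference_sents:
        covered = sum(1 for d in document_sents if rouge_l_recall(r, d) >= theta1)
        w_cov.append(covered / max(n_doc, 1))
    # Redundancy weight: 1 minus the fraction of other reference sentences overlapping with r_i.
    w_red = []
    for i, r in enumerate(reference_sents):
        overlapping = sum(1 for j, o in enumerate(reference_sents)
                          if j != i and rouge_l_recall(r, o) >= theta2)
        w_red.append(1 - overlapping / max(n_ref, 1))
    # Overall weight, Eq. 1: average of the two weights, rescaled by |R|.
    w = [(c + d) / 2 * n_ref for c, d in zip(w_cov, w_red)]
    return w_cov, w_red, w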
We propose sentence-level ROUGE-N (ROUGE-N_{SL}) and sentence-level ROUGE-L (ROUGE-L_{SL}), variations of ROUGE that allow the sentence-level coverage and redundancy weights (Eq. 1) to be incorporated. Sentence-level ROUGE-N: Typically, ROUGE-N measures the number of overlapping n-grams between the reference summary and the generated summary. However, to compute sentence-level ROUGE-N (ROUGE-N_{SL}) we only take into account sentence-level n-grams for the overlap count, viz. we discard the bridge n-grams that share words from two or more sentences (see footnote 6). We use the following equations to measure the recall (ROUGE-N^{r}_{SL}), precision (ROUGE-N^{p}_{SL}), and F-score (ROUGE-N^{f}_{SL}), respectively.
ROUGE-N^{r}_{SL} = (Σ_{s-gram_i} Σ_{r-gram_j} count(s-gram_i, r-gram_j)) / (Σ_{r-gram_j} |r-gram_j|)    (2)
ROUGE-N^{p}_{SL} = (Σ_{s-gram_i} Σ_{r-gram_j} count(s-gram_i, r-gram_j)) / (Σ_{s-gram_j} |s-gram_j|)    (3)
ROUGE-N^{f}_{SL} = 2 × ROUGE-N^{r}_{SL} × ROUGE-N^{p}_{SL} / (ROUGE-N^{r}_{SL} + ROUGE-N^{p}_{SL})    (4)
where s-gram_i and r-gram_i denote the sentence-level n-grams for the i-th sentence in the generated summary and in the reference summary, respectively; count(s-gram_i, r-gram_j) calculates the number of overlapping n-grams between s-gram_i and r-gram_j, and |.| denotes the cardinality of a set. Sentence-level ROUGE-L: ROUGE-L computes the longest common sub-sequence of words between the generated summary and the reference summary. Sentence-level ROUGE-L (ROUGE-L_{SL}) is computed as follows:
ROUGE-L^{r}_{SL} = (Σ_{r_i∈R} UnionLCS(r_i, S)) / |R|    (5)
ROUGE-L^{p}_{SL} = (Σ_{r_i∈R} UnionLCS(r_i, S)) / |S|    (6)
ROUGE-L^{f}_{SL} = 2 × ROUGE-L^{r}_{SL} × ROUGE-L^{p}_{SL} / (ROUGE-L^{r}_{SL} + ROUGE-L^{p}_{SL})    (7)
where UnionLCS(r_i, S) is the union of the longest common sub-sequences computed between a reference summary sentence (r_i ∈ R) and each sentence of the generated summary (s_i ∈ S), and |R| and |S| denote the number of sentences in the reference summary and the generated summary, respectively. We integrate the weights (Eq. 1) into these sentence-level ROUGE metrics to obtain Weighted ROUGE-N (ROUGE-N_W) and Weighted ROUGE-L (ROUGE-L_W) scores. ROUGE-N_W is obtained by multiplying w_i into each summation term in Eqs. 2 to 4, and ROUGE-L_W is obtained by multiplying w_i into each summation term in Eqs. 5 to 7.
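As an illustration of Eqs. 5-7 with the weights folded in, the sketch below computes a weighted sentence-level ROUGE-L. The treatment of UnionLCS (counting the reference tokens covered by the union of LCS matches against all generated sentences) and all function names are our own assumptions rather than the released implementation; the weighted ROUGE-N variant can be written analogously.

def lcs_positions(ref, cand):
    # Positions (in ref) of one longest common subsequence between ref and cand.
    m, n = len(ref), len(cand)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ref[i - 1] == cand[j - 1] else max(dp[i - 1][j], dp[i][j - 1])
    pos, i, j = set(), m, n
    while i > 0 and j > 0:  # backtrack to recover the matched reference positions
        if ref[i - 1] == cand[j - 1]:
            pos.add(i - 1); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return pos

def weighted_rouge_l(reference_sents, generated_sents, weights):
    # Eqs. 5-7 with each reference sentence's UnionLCS term scaled by its weight w_i.
    gen_tok = [s.split() for s in generated_sents]
    total = 0.0
    for r, w in zip(reference_sents, weights):
        ref_tok = r.split()
        union = set()
        for s in gen_tok:
            union |= lcs_positions(ref_tok, s)  # UnionLCS(r_i, S)
        total += w * len(union)
    recall = total / len(reference_sents) if reference_sents else 0.0
    precision = total / len(generated_sents) if generated_sents else 0.0
    f_score = 2 * recall * precision / (recall + precision) if (recall + precision) else 0.0
    return recall, precision, f_score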
Combining Weighted ROUGE with Input Document Similarity
Input Document Similarity Score (IDSS): We incorporate the information overlap of the generated summary with the input document to make the proposed metric more robust and applicable to real-world situations where the quality of the reference summary might sometimes be inadequate. For simplicity, we use the ROUGE-L F-score as the similarity measure, because it performed better than other ROUGE variants in our experiments (refer to Section 4.3). Therefore,
IDSS = ROUGE-L^{f}(S, D)    (8)
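For illustration, IDSS can be obtained from any off-the-shelf ROUGE implementation; the minimal sketch below assumes the rouge-score Python package, an assumption made for the example rather than a requirement of the metric.

from rouge_score import rouge_scorer

def idss(generated_summary, input_document):
    # ROUGE-L F-score between the generated summary and the input document (Eq. 8).
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    return scorer.score(input_document, generated_summary)["rougeL"].fmeasure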
The last step of the evaluation process is to combine the ROUGE_W and IDSS scores in such a way that the final score retains the individual characteristics of both scores. We define WIDAR as follows:
WIDAR^{x}_{K} = (1 − λ) × IDSS + λ × ROUGE-K^{x}_{W}    (9)
where x ∈ {r, p, f} and K ∈ {1, 2, L}; λ is a hyper-parameter directly proportional to the quality of coverage in the reference summary (see footnote 7).
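Putting the pieces together, Eq. 9 is a simple convex combination of the two scores. The sketch below reuses the sentence_weights, weighted_rouge_l, and idss routines sketched earlier and plugs in λ = 0.5 and the thresholds reported in Section 4.3; it is a schematic of the F-score, ROUGE-L variant (WIDAR^{f}_{L}), not the official implementation.

def widar(generated_sents, reference_sents, document_sents,
          lam=0.5, theta1=0.1, theta2=0.3):
    # Eq. 9: (1 - lambda) * IDSS + lambda * weighted sentence-level ROUGE-L (F-score).
    _, _, w = sentence_weights(reference_sents, document_sents, theta1, theta2)
    _, _, rouge_w_f = weighted_rouge_l(reference_sents, generated_sents, w)
    doc_overlap = idss(" ".join(generated_sents), " ".join(document_sents))
    return (1 - lam) * doc_overlap + lam * rouge_w_f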
Experiments
Dataset
For all the experiments conducted in this work, we have used the SummEval dataset [10]. It contains summaries generated by 23 recent summarization models trained on the CNN/DailyMail dataset [26]. The dataset provides human annotation scores for 16 generated summaries of 100 source news articles, giving us 1600 summary-text pairs. Each summary is annotated by 3 experts and 5 crowd-sourced annotators to evaluate its quality on a scale of 1-5 across 4 different characteristics: 1) Coherence: measures the quality of smooth transitions between different summary sentences, such that sentences are neither completely unrelated nor completely the same, 2) Consistency: measures the factual correctness of the summary with respect to the input document, 3) Fluency: measures the grammatical correctness and readability of the sentences, 4) Relevance: measures the ability of a summary to capture important and relevant information from the input document. Apart from the human annotation scores, 11 reference summaries for each example and evaluation scores for the generated summaries across different evaluation metrics are also made available in the dataset repository (see footnote 8).
Evaluation of Evaluation Metric
In order to measure the performance of the proposed evaluation metric, we calculate the correlation between the scores of that metric and the average annotation scores for each characteristic of each summary for 1600 summary-text examples provided in the SummEval dataset [10] (described in Section 4.1). We have used the average of expert annotation scores for our experiments because of the inconsistency between expert and crowd-source scores reported by Fabbri et al. [10]. We use the Kendall's tau correlation coefficient as the correlation metric in our experiments. Kendall's tau correlation between two sequences X = {x i } and Y = {y i } is defined as follows:
τ = (C − D) / (C + D)    (10)
where C is the number of concordant pairs and D is the number of discordant pairs between the sequences X and Y.
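For reference, Eq. 10 can be computed directly or with SciPy; the sketch below shows both, with hypothetical score lists used purely for illustration (note that SciPy's kendalltau computes the tau-b variant, which additionally corrects for ties).

from itertools import combinations
from scipy.stats import kendalltau

def kendall_tau(x, y):
    # Eq. 10: (concordant - discordant) / (concordant + discordant), ties ignored.
    c = d = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            c += 1
        elif s < 0:
            d += 1
    return (c - d) / (c + d) if (c + d) else 0.0

metric_scores = [0.42, 0.31, 0.55, 0.47]   # hypothetical metric outputs
human_scores = [3.7, 2.3, 4.3, 4.0]        # hypothetical averaged expert annotations
print(kendall_tau(metric_scores, human_scores))
print(kendalltau(metric_scores, human_scores).correlation)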
Experimental Settings
In this section, we discuss the various hyperparameters used in the proposed methodology, along with the tuning experiments carried out to justify them (see footnote 9). Weighted sum of IDSS and ROUGE_W (λ): λ is used to obtain the weighted sum of the information overlap of the generated summary with the input document (IDSS) and with the reference summary (ROUGE_W). We attempted to investigate the optimal value of λ using a data-driven technique. To be more precise, since λ indicates the balance between the degree of attention given to the input document and the reference summary, we hypothesize that making λ adapt to the information shared between the reference summary and the input document should give better performance, since the higher the overlap, the better the quality of the reference summary, and hence the higher λ should be. We therefore perform two different experiments with λ = max(w_{cov_i}) and λ = mean(w_{cov_i}). To compare the performance of a fixed λ value with this data-driven strategy, we plot the performance of the proposed technique with fixed values of λ ∈ {0.0, 0.1, 0.2, ..., 1.0} (see Fig. 3). Even though both of these λ-defining strategies outperform the baseline metric ROUGE, we notice that the fixed value of λ = 0.5 is able to outperform these data-driven strategies as well as most of the other fixed λ values. It was also noticed that λ = mean(w_cov) outperforms λ = max(w_cov) in fluency and consistency, while the opposite happens for coherence and relevance. This can be explained by the fact that mean(w_cov) < max(w_cov); therefore the λ = mean(w_cov) variation always gives more weight to the input document similarity, yielding higher fluency and consistency scores because the input document consists of informationally rich and grammatically correct sentences. Thresholds for w_{cov_i} and w_{red_i}: θ_1 and θ_2 are the hyperparameters used in the calculation of the coverage weights (w_{cov_i}) and redundancy weights (w_{red_i}) (Algorithm 1), respectively. To obtain the optimal range of these hyperparameters, we first performed an individual search for both θ_1 and θ_2 (see Fig. 4). As per these experiments, θ_1 = 0.0 or 0.1 and θ_2 = 0.4 yielded the best results when analyzed individually. However, on further experimentation, it was found that the best performance was obtained at θ_1 = 0.1 and θ_2 = 0.3. Similarity function for IDSS: In order to find the most suitable similarity function to compute the information overlap between the input document and the generated summary, we performed an isolated experiment in which the correlation coefficient of each similarity function candidate was computed against the human judgement scores (Table 2). The ROUGE-L F-score was the best performing option, and hence chosen as the similarity function.
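One way to reproduce this kind of tuning is a small grid search over λ and the two thresholds, scoring each configuration by its correlation with the human judgements. The sketch below is schematic: it assumes the widar and kendall_tau routines sketched earlier and a hypothetical list of annotated examples.

import itertools

def tune(examples, human_scores,
         lams=(0.0, 0.25, 0.5, 0.75, 1.0),
         thetas1=(0.0, 0.1, 0.2), thetas2=(0.2, 0.3, 0.4)):
    # examples: list of (generated_sents, reference_sents, document_sents) tuples.
    best = None
    for lam, t1, t2 in itertools.product(lams, thetas1, thetas2):
        scores = [widar(g, r, d, lam=lam, theta1=t1, theta2=t2) for g, r, d in examples]
        corr = kendall_tau(scores, human_scores)
        if best is None or corr > best[0]:
            best = (corr, lam, t1, t2)
    return best  # (best correlation, lambda, theta_1, theta_2)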
Results and Discussions
We evaluate the performance of our metric against other state-of-the-art techniques using the correlation coefficient described in Section 4.2. Table 3 (see footnotes 11 and 12) lists the correlation of WIDAR and other state-of-the-art metric scores available in SummEval with the human judgement scores. These scores illustrate the superiority of WIDAR over its predecessor, ROUGE, by a wide margin in all three variants. It can be deduced from the results that we need a way to combine these scores to better evaluate the performance of each metric, since a metric like SMS [5] performs well in aspects like consistency and fluency, yet gives mediocre performance in coherence and relevance. Therefore, we also provide the average of these four scores in an attempt to ascertain the overall performance of each metric. We find that all three variants of WIDAR perform satisfactorily, as they appear 2nd, 3rd and 4th in the overall rankings, as opposed to their ROUGE counterparts that end up in the middle-bottom section of the rankings. Table 3: Evaluation of the proposed metric WIDAR against other state-of-the-art methods using Kendall's Tau correlation coefficient over human judgement scores of individual summary components described in the SummEval dataset [10]. Average denotes the average score over coherence, consistency, fluency and relevance. (.) denotes the rank of the metric for the corresponding column.
Metric | Coherence | Consistency | Fluency | Relevance | Average
Text matching-based metrics
ROUGE-1 [25] | 0.137 (8) | 0.111 (14) | 0.067 (13) | 0.228 (4) | 0.135 (9)
ROUGE-2 [25] | 0.110 (13) | 0.107 (15) | 0.054 (15) | 0.184 (13) | 0.113 (15)
ROUGE-L [25] | 0.109 (14) | 0.090 (16) | 0.067 (13) | 0.216 (8) | 0.120 (14)
BLEU [30] | 0.119 (11) | 0.126 (9) | 0.104 (8) | 0.185 (12) | 0.133 (10)
METEOR [24] | 0.112 (12) | 0.118 (12) | 0.079 (12) | 0.210 (10) | 0.129 (12)
CHRF [33] | 0.168 (1) | 0.121 (10)
The fact that SUPERT [12] is a model-based metric that evaluates the quality of a summary by taking as input the generated summary and the input document might be the reason for its high correlation scores with consistency and fluency. Since the input document comprises grammatically correct and factually rich sentences, high performances on fluency and consistency are to be expected. CHRF [33] and S^3 [32], on the other hand, perform well in coherence and relevance, which can be credited to their evaluation strategy of computing the information overlap between the generated summary and the reference summary. Since the reference summary contains only the most significant information from the input document put together in a presentable manner, this results in high relevance and coherence scores. We believe that since WIDAR efficiently uses the information overlap of the generated summary with both the input document and the reference summary, it performs well across all four characteristics.
Computational Time Analysis
Table 4 shows the comparison of the computational time taken by WIDAR with respect to 5 state-of-the-art model- or embedding-based metrics, computed using a single reference summary. The experiment is conducted on 100 randomly chosen summaries for all the metrics (see footnote 13). It is noticed that WIDAR takes about 0.6% of the computational time as compared to the average time taken by these 5 metrics, while giving similar performance.
Metric | Time-taken
BLANC [42] | 1076.35 s
SUPERT [12] | 30.40 s
MoverScore [45] | 37.60 s
BERTScore [44] | 1410.37 s
SummaQA [37] | 910.26 s
Ablation Study
WIDAR comprises two key components: (1) weighted ROUGE (ROUGE_W) between the reference summary and the generated summary, and (2) the similarity overlap (IDSS) between the input document and the generated summary. In order to establish the necessity of both of these components, we conduct an ablation study. When we consider only ROUGE-L_W, we notice a major drop in correlation with consistency (38%) and fluency (30%) (refer to the top two rows in Table 5). Since consistency measures the factual correctness of the summary with respect to the input document, this justifies the decrease in consistency scores. Regarding fluency, an argument can be made that, since WIDAR is effectively a string-matching based technique and the input document usually comprises sentences that are more grammatically sound than the ones in the reference summary [10,21], removing the input document overlap explains the drop in fluency scores. This argument can be further bolstered by comparing the correlation scores obtained for ROUGE-L and IDSS. IDSS uses ROUGE-L to compute the information overlap between the generated summary and the input document, while ROUGE-L computes the information overlap between the generated summary and the reference summary. We can see that IDSS outperforms ROUGE-L in consistency (by 124%) and fluency (by 82%), supporting the previously mentioned argument. If we remove W_red from weighted ROUGE, we observe drops in coherence (by 18%) and relevance (by 15%) as expected; but we also observe that without these redundancy weights, the correlation with consistency and fluency also drops by 30% and 22%, respectively. Removing W_cov, however, yields mixed results in an isolated setting. Yet, together with W_red, the weighted ROUGE is able to outperform the sentence-level baseline. This can be noticed from the relevance scores in Table 5; ROUGE-L_SL attains a score of 0.204, while adding W_red yields an increase to 0.218 (shown in row −W_cov) and adding W_cov drops the score to 0.201 (shown in row −W_red). However, combining these two to obtain ROUGE-L_W attains a score of 0.239, better than in the case of the individual components.
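In terms of the sketches given earlier, these ablations amount to switching individual components off. The mapping below is illustrative only, and the use_w_cov / use_w_red switches are hypothetical configuration flags rather than parameters of the released implementation.

# Illustrative mapping of Table 5 rows to configuration choices.
ablations = {
    "WIDAR_L":    dict(lam=0.5, use_w_cov=True,  use_w_red=True),   # full metric
    "ROUGE-L_W":  dict(lam=1.0, use_w_cov=True,  use_w_red=True),   # drop the IDSS term
    "-W_red":     dict(lam=1.0, use_w_cov=True,  use_w_red=False),  # coverage weight only
    "-W_cov":     dict(lam=1.0, use_w_cov=False, use_w_red=True),   # redundancy weight only
    "ROUGE-L_SL": dict(lam=1.0, use_w_cov=False, use_w_red=False),  # unweighted sentence-level ROUGE-L
    "IDSS":       dict(lam=0.0, use_w_cov=False, use_w_red=False),  # document-overlap term only
}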
Study of Human Judgement Scores
To analyze how humans have perceived these four characteristics of a summary, we compute and study the Kendall's Tau correlation coefficient between them. The results (refer to Table 6) reveal that coherence and relevance are moderately correlated, while the other characteristic pairs do not yield any significant correlation score. This high correlation between coherence and relevance can be attributed to the fact that both are related to non-redundancy. Coherence explicitly captures the non-redundancy in a summary, since a coherent summary must not have high information overlap across its sentences. Relevance, on the other hand, implicitly captures the notion of non-redundancy, since a summary that is highly relevant will cover a major portion of the input document, which is not achievable for a redundant summary. This reasoning is also backed by the results from the ablation study (refer to Table 5), where removing the redundancy weight (W_red) from weighted ROUGE affects both the relevance and the coherence scores, implying that humans directly or indirectly consider the redundancy of sentences within a summary while providing these scores.
Conclusion
We propose a novel evaluation metric WIDAR that utilizes both the input document and the reference summary to estimate the quality of the generated summary. We discuss why metrics like ROUGE, METEOR, BLEU, etc., that solely depend on the reference summary for evaluation do not perform well in real-world situations. We illustrate how the proposed metric is able to outperform its predecessor, ROUGE, by a large margin, and is also able to achieve performance comparable to large model-based metrics like BERTScore, S^3, and SUPERT. We also perform an ablation study to establish the necessity of each component in the proposed metric. We believe that the community needs computationally fast and lightweight metrics like WIDAR that can work well in real-world situations.
Fig. 1: Examples from the DailyMail/CNN dataset [26] where the ground truth is unsatisfactory either due to clickbait (Eg. 1) or information incompleteness (Eg. 2).
Fig. 2: Model figure for WIDAR.
Fig. 3: Correlation plots of WIDAR with human judgement scores (from the SummEval dataset [10]) for different λ values.
Fig. 4: Correlation plots of WIDAR with human judgement scores (from the SummEval dataset [10]) for varying θ_1 (left) and θ_2 (right) values.
Table 1: Notation of each variable and its corresponding meaning.
Notation | Meaning
D | input document
R | reference summary
S | generated summary
d_i | i-th input document sentence
r_i | i-th reference summary sentence
s_i | i-th generated summary sentence
w_{cov_i} | coverage weight assigned to the i-th reference summary sentence
w_{red_i} | redundancy weight assigned to the i-th reference summary sentence
w_i | overall weight assigned to the i-th reference summary sentence
Table 2: Various ROUGE-based similarity functions for IDSS.
IDSS | Coherence | Consistency | Fluency | Relevance
ROUGE-1 r | 0.033 | 0.101 | 0.050 | 0.123
ROUGE-1 f | 0.035 | 0.108 | 0.055 | 0.117
ROUGE-2 r | 0.066 | 0.183 | 0.111 | 0.149
ROUGE-2 f | 0.072 | 0.194 | 0.118 | 0.153
ROUGE-L r | 0.088 | 0.187 | 0.112 | 0.158
ROUGE-L f | 0.097 | 0.202 | 0.122 | 0.164
Table 4: Computation time taken by WIDAR and various model-based metrics.
Table 5: Ablation Study.
Metric | Coherence | Consistency | Fluency | Relevance
WIDAR_L | 0.149 | 0.176 | 0.119 | 0.250
ROUGE-L_W | 0.129 | 0.108 | 0.083 | 0.239
−W_red | 0.105 | 0.075 | 0.064 | 0.201
−W_cov | 0.119 | 0.115 | 0.087 | 0.218
ROUGE-L_SL | 0.102 | 0.087 | 0.062 | 0.204
ROUGE-L | 0.109 | 0.090 | 0.067 | 0.216
IDSS | 0.097 | 0.202 | 0.122 | 0.164
Table 6: Kendall's Tau correlation between various summary characteristics.
 | Coherence | Consistency | Fluency | Relevance
Coherence | 1.00 | 0.25 | 0.27 | 0.53
Consistency | 0.25 | 1.00 | 0.38 | 0.27
Fluency | 0.27 | 0.38 | 1.00 | 0.23
Relevance | 0.53 | 0.27 | 0.23 | 1.00
Footnote 6: Note that ROUGE-1 and ROUGE-1_{SL} denote the same metric.
Footnote 7: λ is a fixed hyper-parameter, which is set to 0.5 in our final experiments. We attempted to make λ a data-driven parameter by setting λ = max(w_{cov_i}) or λ = mean(w_{cov_i}), but this setting was not able to outperform the fixed λ = 0.5 value (refer to Section 4.3).
Footnote 8: https://github.com/Yale-LILY/SummEval
Footnote 9: All the hyperparameter tuning experiments were performed using ROUGE-L^{f} unless stated otherwise.
Footnote 11: In case a metric has more than one variation, the version that corresponds to the F-score was used.
Footnote 12: All the reported metrics in Table 3 have been computed in a multi-reference setting using 11 reference summaries per generated summary.
Footnote 13: This experiment was conducted on a Tyrone machine with an Intel Xeon W-2155 processor, 196 GB of DDR4 RAM, and an 11 GB Nvidia 1080Ti GPU. The GPU was only used for the BLANC, SUPERT, BERTScore, and SummaQA evaluation metrics.
1. Bhandari, M., Narayan Gour, P., Ashfaq, A., Liu, P., Neubig, G.: Re-evaluating evaluation in text summarization. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
2. Böhm, F., Gao, Y., Meyer, C.M., Shapira, O., Dagan, I., Gurevych, I.: Better rewards yield better summaries: Learning to summarise without references. ArXiv abs/1909.01214 (2019)
3. Chen, P., Wu, F., Wang, T.: A semantic QA-based approach for text summarization evaluation. In: AAAI (2018)
4. Chopra, S., Auli, M., Rush, A.M.: Abstractive sentence summarization with attentive recurrent neural networks. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 93-98. Association for Computational Linguistics, San Diego, California (Jun 2016). https://doi.org/10.18653/v1/N16-1012, https://aclanthology.org/N16-1012
5. Clark, E., Celikyilmaz, A., Smith, N.A.: Sentence mover's similarity: Automatic evaluation for multi-sentence texts. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy (Jul 2019). https://doi.org/10.18653/v1/P19-1264, https://aclanthology.org/P19-1264
6. Dang, H.T.: Overview of DUC 2005. In: Proceedings of the Document Understanding Conference. vol. 2005, pp. 1-12 (2005)
7. Duan, Y., Jatowt, A.: Across-time comparative summarization of news articles. In: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. pp. 735-743. ACM (2019)
8. Edmundson, H.P.: New methods in automatic extracting. J. ACM 16, 264-285 (1969)
9. Eyal, M., Baumel, T., Elhadad, M.: Question answering as an automatic evaluation metric for news article summarization. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). pp. 3938-3948. Association for Computational Linguistics, Minneapolis, Minnesota (Jun 2019). https://doi.org/10.18653/v1/N19-1395, https://aclanthology.org/N19-1395
10. Fabbri, A.R., Kryscinski, W., McCann, B., Socher, R., Radev, D.: SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics 9, 391-409 (2021)
11. Ganesan, K.A.: ROUGE 2.0: Updated and improved measures for evaluation of summarization tasks. ArXiv abs/1803.01937 (2018)
12. Gao, Y., Zhao, W., Eger, S.: SUPERT: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online (Jul 2020). https://doi.org/10.18653/v1/2020.acl-main.124, https://aclanthology.org/2020.acl-main.124
13. Giannakopoulos, G., Karkaletsis, V.: AutoSummENG and MeMoG in evaluating guided summaries. Theory and Applications of Categories (2011)
14. Grusky, M., Naaman, M., Artzi, Y.: Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, New Orleans, Louisiana (Jun 2018), https://aclanthology.org/N18-1065
15. Hasan, T., Bhattacharjee, A., Islam, M.S., Mubasshir, K., Li, Y.F., Kang, Y.B., Rahman, M.S., Shahriyar, R.: XL-Sum: Large-scale multilingual abstractive summarization for 44 languages. In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. pp. 4693-4703 (2021)
16. Jangra, A., Jain, R., Mavi, V., Saha, S., Bhattacharyya, P.: Semantic extractor-paraphraser based abstractive summarization. arXiv preprint arXiv:2105.01296 (2021)
17. Jangra, A., Jatowt, A., Hasanuzzaman, M., Saha, S.: Text-image-video summary generation using joint integer linear programming. In: European Conference on Information Retrieval. pp. 190-198. Springer (2020)
18. Jangra, A., Jatowt, A., Saha, S., Hasanuzzaman, M.: A survey on multi-modal summarization (2021)
19. Jangra, A., Saha, S., Jatowt, A., Hasanuzzaman, M.: Multi-modal summary generation using multi-objective optimization. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1745-1748 (2020)
20. Jangra, A., Saha, S., Jatowt, A., Hasanuzzaman, M.: Multi-modal supplementary-complementary summarization using multi-objective optimization. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 818-828 (2021)
Neural text summarization: A critical evaluation. W Kryscinski, N S Keskar, B Mccann, C Xiong, R Socher, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsKryscinski, W., Keskar, N.S., McCann, B., Xiong, C., Socher, R.: Neural text summarization: A critical evaluation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China (Nov 2019).
. 10.18653/v1/D19-1051https://doi.org/10.18653/v1/D19-1051, https://aclanthology.org/D19-1051
A trainable document summarizer. J Kupiec, J Pedersen, F Chen, Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. p. 68-73. SIGIR '95. the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. p. 68-73. SIGIR '95New York, NY, USAAssociation for Computing MachineryKupiec, J., Pedersen, J., Chen, F.: A trainable document summarizer. In: Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. p. 68-73. SIGIR '95, Association for Computing Machinery, New York, NY, USA (1995).
. 10.1145/215206.215333https://doi.org/10.1145/215206.215333, https://doi.org/10.1145/215206.
From word embeddings to document distances. M Kusner, Y Sun, N Kolkin, K Weinberger, PMLRProceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research. Bach, F., Blei, D.the 32nd International Conference on Machine Learning. Machine Learning ResearchLille, France37Kusner, M., Sun, Y., Kolkin, N., Weinberger, K.: From word embeddings to document distances. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd In- ternational Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 957-966. PMLR, Lille, France (07-09 Jul 2015), https: //proceedings.mlr.press/v37/kusnerb15.html
METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. A Lavie, A Agarwal, Proceedings of the Second Workshop on Statistical Machine Translation. the Second Workshop on Statistical Machine TranslationPrague, Czech RepublicAssociation for Computational LinguisticsLavie, A., Agarwal, A.: METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In: Proceedings of the Second Workshop on Statistical Machine Translation. pp. 228-231. Association for Compu- tational Linguistics, Prague, Czech Republic (Jun 2007), https://aclanthology. org/W07-0734
ROUGE: A package for automatic evaluation of summaries. C Y Lin, Association for Computational Linguistics. Barcelona, SpainText Summarization Branches OutLin, C.Y.: ROUGE: A package for automatic evaluation of summaries. In: Text Summarization Branches Out. pp. 74-81. Association for Computational Linguis- tics, Barcelona, Spain (Jul 2004), https://aclanthology.org/W04-1013
Abstractive text summarization using sequence-to-sequence rnns and beyond. R Nallapati, B Zhou, C D Santos, Gülçehre, B Xiang, CoNLLNallapati, R., Zhou, B., Santos, C.D., Ç aglar Gülçehre, Xiang, B.: Abstractive text summarization using sequence-to-sequence rnns and beyond. In: CoNLL (2016)
Summarization evaluation for text and speech: issues and approaches. A Nenkova, INTERSPEECHNenkova, A.: Summarization evaluation for text and speech: issues and approaches. In: INTERSPEECH (2006)
Better summarization evaluation with word embeddings for ROUGE. J P Ng, V Abrecht, 10.18653/v1/D15-1222Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsNg, J.P., Abrecht, V.: Better summarization evaluation with word embeddings for ROUGE. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pp. 1925-1930. Association for Computational Linguistics, Lisbon, Portugal (Sep 2015). https://doi.org/10.18653/v1/D15-1222, https://aclanthology.org/D15-1222
Constructing literature abstracts by computer: Techniques and prospects. C D Paice, Inf. Process. Manag. 26Paice, C.D.: Constructing literature abstracts by computer: Techniques and prospects. Inf. Process. Manag. 26, 171-186 (1990)
Bleu: a method for automatic evaluation of machine translation. K Papineni, S Roukos, T Ward, W J Zhu, ACLPapineni, K., Roukos, S., Ward, T., Zhu, W.J.: Bleu: a method for automatic evaluation of machine translation. In: ACL (2002)
Automated pyramid scoring of summaries using distributional semantics. R J Passonneau, E Chen, W Guo, D Perin, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. the 51st Annual Meeting of the Association for Computational LinguisticsSofia, BulgariaAssociation for Computational Linguistics2Short Papers)Passonneau, R.J., Chen, E., Guo, W., Perin, D.: Automated pyramid scoring of summaries using distributional semantics. In: Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers). pp. 143-147. Association for Computational Linguistics, Sofia, Bulgaria (Aug 2013), https://aclanthology.org/P13-2026
Learning to score system summaries for better content selection evaluation. M Peyrard, T Botschen, I Gurevych, 10.18653/v1/W17-4510Proceedings of the Workshop on New Frontiers in Summarization. Association for Computational Linguistics. the Workshop on New Frontiers in Summarization. Association for Computational LinguisticsCopenhagen, DenmarkPeyrard, M., Botschen, T., Gurevych, I.: Learning to score system summaries for better content selection evaluation. In: Proceedings of the Workshop on New Frontiers in Summarization. Association for Computational Linguistics, Copen- hagen, Denmark (Sep 2017). https://doi.org/10.18653/v1/W17-4510, https:// aclanthology.org/W17-4510
chrF: character n-gram F-score for automatic MT evaluation. M Popović, Proceedings of the Tenth Workshop on Statistical Machine Translation. the Tenth Workshop on Statistical Machine TranslationLisbon, PortugalAssociation for Computational LinguisticsPopović, M.: chrF: character n-gram F-score for automatic MT evaluation. In: Proceedings of the Tenth Workshop on Statistical Machine Translation. pp. 392- 395. Association for Computational Linguistics, Lisbon, Portugal (Sep 2015).
. 10.18653/v1/W15-3049https://doi.org/10.18653/v1/W15-3049, https://aclanthology.org/W15-3049
The formation of abstracts by the selection of sentences. G J Rath, S Resnick, T R Savage, Rath, G.J., Resnick, S., Savage, T.R.: The formation of abstracts by the selection of sentences (1961)
Extractive single document summarization using multi-objective optimization: Exploring self-organized differential evolution, grey wolf optimizer and water cycle algorithm. Knowledge-Based Systems. N Saini, S Saha, A Jangra, P Bhattacharyya, 164Saini, N., Saha, S., Jangra, A., Bhattacharyya, P.: Extractive single document summarization using multi-objective optimization: Exploring self-organized differ- ential evolution, grey wolf optimizer and water cycle algorithm. Knowledge-Based Systems 164, 45-67 (2019)
Mlsum: The multilingual summarization corpus. T Scialom, P A Dray, S Lamprier, B Piwowarski, J Staiano, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Scialom, T., Dray, P.A., Lamprier, S., Piwowarski, B., Staiano, J.: Mlsum: The multilingual summarization corpus. In: Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing (EMNLP). pp. 8051-8067 (2020)
Answers unite! unsupervised metrics for reinforced summarization models. T Scialom, S Lamprier, B Piwowarski, J Staiano, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsScialom, T., Lamprier, S., Piwowarski, B., Staiano, J.: Answers unite! unsuper- vised metrics for reinforced summarization models. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China (Nov 2019).
. 10.18653/v1/D19-1320https://doi.org/10.18653/v1/D19-1320, https://aclanthology.org/D19-1320
Get to the point: Summarization with pointergenerator networks. A See, P Liu, C Manning, Association for Computational LinguisticsSee, A., Liu, P., Manning, C.: Get to the point: Summarization with pointer- generator networks. In: Association for Computational Linguistics (2017), https: //arxiv.org/abs/1704.04368
A graph-theoretic summary evaluation for ROUGE. E Shafieibavani, M Ebrahimi, R Wong, F Chen, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsShafieiBavani, E., Ebrahimi, M., Wong, R., Chen, F.: A graph-theoretic summary evaluation for ROUGE. In: Proceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, Brussels, Belgium (Oct-Nov 2018), https://aclanthology.org/D18-1085
Summarization evaluation in the absence of human model summaries using the compositionality of word embeddings. E Shafieibavani, M Ebrahimi, R K Wong, F Chen, COLINGShafieiBavani, E., Ebrahimi, M., Wong, R.K., Chen, F.: Summarization evaluation in the absence of human model summaries using the compositionality of word embeddings. In: COLING (2018)
The feasibility of embedding based automatic evaluation for single document summarization. S Sun, A Nenkova, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsSun, S., Nenkova, A.: The feasibility of embedding based automatic evaluation for single document summarization. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 1216- 1221. Association for Computational Linguistics, Hong Kong, China (Nov 2019).
. 10.18653/v1/D19-1116https://doi.org/10.18653/v1/D19-1116, https://aclanthology.org/D19-1116
Is human scoring the best criteria for summary evaluation?. O Vasilyev, J Bohannon, Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational LinguisticsVasilyev, O., Bohannon, J.: Is human scoring the best criteria for summary eval- uation? In: Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021. Association for Computational Linguistics, Online (Aug 2021).
. 10.18653/v1/2021.findings-acl.192https://doi.org/10.18653/v1/2021.findings-acl.192, https://aclanthology.org/ 2021.findings-acl.192
Cider: Consensus-based image description evaluation. R Vedantam, C L Zitnick, D Parikh, IEEE Conference on Computer Vision and Pattern Recognition (CVPR. Vedantam, R., Zitnick, C.L., Parikh, D.: Cider: Consensus-based image description evaluation. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) pp. 4566-4575 (2015)
Bertscore: Evaluating text generation with bert. * Zhang, T Kishore, * , V Wu, * , F Weinberger, K Q Artzi, Y , International Conference on Learning Representations. Zhang*, T., Kishore*, V., Wu*, F., Weinberger, K.Q., Artzi, Y.: Bertscore: Eval- uating text generation with bert. In: International Conference on Learning Repre- sentations (2020), https://openreview.net/forum?id=SkeHuCVFDr
MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. W Zhao, M Peyrard, F Liu, Y Gao, C M Meyer, S Eger, 10.18653/v1/D19-1053Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 563-578. Association for Computational Linguistics. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 563-578. Association for Computational LinguisticsHong Kong, ChinaZhao, W., Peyrard, M., Liu, F., Gao, Y., Meyer, C.M., Eger, S.: MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 563-578. Association for Computational Lin- guistics, Hong Kong, China (Nov 2019). https://doi.org/10.18653/v1/D19-1053, https://aclanthology.org/D19-1053
ParaEval: Using paraphrases to evaluate summaries automatically. L Zhou, C Y Lin, D S Munteanu, E Hovy, Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. the Human Language Technology Conference of the NAACL, Main ConferenceNew York City, USAAssociation for Computational LinguisticsZhou, L., Lin, C.Y., Munteanu, D.S., Hovy, E.: ParaEval: Using paraphrases to evaluate summaries automatically. In: Proceedings of the Human Language Technology Conference of the NAACL, Main Conference. pp. 447-454. Associ- ation for Computational Linguistics, New York City, USA (Jun 2006), https: //aclanthology.org/N06-1057
| [
"https://github.com/Yale-LILY/SummEval"
] |
[
"Conveying the Predicted Future to Users: A Case Study of Story Plot Prediction",
"Conveying the Predicted Future to Users: A Case Study of Story Plot Prediction"
] | [
"Chieh-Yang Huang \nPennsylvania State University\nUniversity ParkPAUSA\n",
"Saniya Naphade \nGumGum Inc\nLos AngelesCAUSA\n",
"Kavya Laalasa Karanam \nIntel Corporation\nSanta ClaraCAUSA\n",
"* Ting-Hao ' ",
"Kenneth ' Huang \nPennsylvania State University\nUniversity ParkPAUSA\n"
] | [
"Pennsylvania State University\nUniversity ParkPAUSA",
"GumGum Inc\nLos AngelesCAUSA",
"Intel Corporation\nSanta ClaraCAUSA",
"Pennsylvania State University\nUniversity ParkPAUSA"
] | [] | Creative writing is hard: Novelists struggle with writer's block daily. While automatic story generation has advanced recently, it is treated as a "toy task" for advancing artificial intelligence rather than helping people. In this paper, we create a system that produces a short description that narrates a predicted plot using existing story generation approaches. Our goal is to assist writers in crafting a consistent and compelling story arc. We conducted experiments on Amazon Mechanical Turk (AMT) to examine the quality of the generated story plots in terms of consistency and storiability. The results show that short descriptions produced by our frame-enhanced GPT-2 (FGPT-2) were rated as the most consistent and storiable among all models; FGPT-2's outputs even beat some random story snippets written by humans. Next, we conducted a preliminary user study using a story continuation task where AMT workers were given access to machine-generated story plots and asked to write a follow-up story. FGPT-2 could positively affect the writing process, though people favor other baselines more. Our study shed some light on the possibilities of future creative writing support systems beyond the scope of completing sentences. Our code is available at: https://github.com/appleternity/Story-Plot-Generation. | 10.48550/arxiv.2302.09122 | [
"https://export.arxiv.org/pdf/2302.09122v1.pdf"
] | 257,038,389 | 2302.09122 | fd8ebaa67ee959b9e794af179feaa2c26a65e86f |
Conveying the Predicted Future to Users: A Case Study of Story Plot Prediction
Chieh-Yang Huang
Pennsylvania State University
University ParkPAUSA
Saniya Naphade
GumGum Inc
Los AngelesCAUSA
Kavya Laalasa Karanam
Intel Corporation
Santa ClaraCAUSA
Ting-Hao 'Kenneth' Huang
Pennsylvania State University
University ParkPAUSA
Conveying the Predicted Future to Users: A Case Study of Story Plot Prediction
Creative writing is hard: Novelists struggle with writer's block daily. While automatic story generation has advanced recently, it is treated as a "toy task" for advancing artificial intelligence rather than helping people. In this paper, we create a system that produces a short description that narrates a predicted plot using existing story generation approaches. Our goal is to assist writers in crafting a consistent and compelling story arc. We conducted experiments on Amazon Mechanical Turk (AMT) to examine the quality of the generated story plots in terms of consistency and storiability. The results show that short descriptions produced by our frame-enhanced GPT-2 (FGPT-2) were rated as the most consistent and storiable among all models; FGPT-2's outputs even beat some random story snippets written by humans. Next, we conducted a preliminary user study using a story continuation task where AMT workers were given access to machine-generated story plots and asked to write a follow-up story. FGPT-2 could positively affect the writing process, though people favor other baselines more. Our study shed some light on the possibilities of future creative writing support systems beyond the scope of completing sentences. Our code is available at: https://github.com/appleternity/Story-Plot-Generation.
Introduction
Storytelling is an important human activity. People engage in storytelling to communicate, teach, entertain, establish identity, or relate to each other in meaningful ways. However, creative writing is known to be a cognitively demanding task, and writers struggle with writer's block daily. Researchers and the industry have created a series of techniques that support human writing. Many techniques focus on lower-level language support, such as auto-completion, grammar checking, or typo detection, and these have proven helpful and are widely used. On the other hand, the techniques aiming to provide higher-level guidance, such as story generation, have long been treated only as in-the-lab artificial intelligence tasks. Automatic story generation, for example, was primarily developed and tested using toy datasets composed of stories that are (i) extremely short (for example, containing five sentences, such as ROCStories (Mostafazadeh et al. 2016)), (ii) written under artificial constraints to make it easier for machines to learn (e.g., GLUCOSE (Mostafazadeh et al. 2020)), or (iii) based on the assumption of a story starter prompt (e.g., WritingPrompt (Fan, Lewis, and Dauphin 2018)). However, real-world writers compose novels with over 10,000 words, work with blank pages with few constraints, and can get stuck anywhere in the middle of a draft. As the models trained on toy datasets inevitably generate stories inheriting the data's characteristics, it is unclear how well modern story generation models can be used to support writers in practice.
Figure 1: We view a long novel as a sequence of fixed-sized story blocks. The goal of the proposed task is to consider the previous story block (i.e., B_n) and generate a short description for the next story block (i.e., B_{n+1}). We define a short description as a three-sentence summary of a story block.
In this paper, we aim to support creative writing in practical terms. We view a long novel as a sequence of fixed-sized story blocks (e.g., 20 sentences). The goal is to generate a short description that narrates future story plots for the next story block (i.e., B_{n+1}) using the previous story block (i.e., B_n). We define a story plot as a short summary over a story block that illustrates the key follow-up idea instead of the detailed full text. Three existing story generation models, Fusion-based seq2seq (Fan, Lewis, and Dauphin 2018), Plan-and-Write (Yao et al. 2019), and GPT-2 (Radford et al. 2019) enhanced with semantic frame representation (Huang and Huang 2021), are adapted to predict the follow-up story plot given the context of the previous story block.
We first conduct a quality assessment study on Amazon Mechanical Turk (AMT) to measure the quality of the machine-generated story plots. In this study, crowd workers are recruited to (i) read a previous story block and six follow-up story plots, and (ii) rank the quality in terms of consistency and storiability (Roemmele 2021). The experiment shows that story plots generated by our frame-enhanced GPT-2 are more consistent than randomly selected plots written by humans and are competitive with them in the sense of storiability. The result suggests that human-written plots are still strong baselines, especially the ground truth, but frame-enhanced GPT-2 is capable of generating consistent and storiable story plots to a certain level.
We further conduct a writing task study on AMT to understand how much humans can benefit from machine-generated plots. In this study, crowd workers are asked to develop a 100-word follow-up story given the previous story block and the four follow-up story plots as hints. After finishing the writing task, we collect crowd workers' self-reported judgments on four aspects: degree of inspiration, helpfulness, readability, and creativity. The result shows that frame-enhanced GPT-2 produces output that is less inspiring when compared to strong baselines such as ground truth and GPT-3. However, analyses of the written stories also suggest that, despite being less favored by humans, frame-enhanced GPT-2 still has a positive influence on the written story draft. This finding also echoes Roemmele (2021)'s inspiration-through-observation paradigm: human writing can still be improved even with less storiable machine-generated texts.
Related Work
Our work is mainly related to (i) supporting creative writing and (ii) story generation.
Supporting Creative Writing
Prior research has supported creative writing in different ways. InkWell mimics a specified writer's personality traits and revises the draft to provide stylistic variations (Gabriel, Chen, and Nichols 2015). Metaphoria generates metaphorical connections according to the user's input to help create metaphors (Gero and Chilton 2019). Creative Help generates a follow-up sentence as a suggestion for creative writing using a recurrent neural network (Roemmele and Gordon 2015). Heteroglossia collects story plot ideas using a crowdsourcing approach to support writers in continuing the story when stuck due to writer's block. Scheherazade is built for interactive narrative generation with a crowd-powered system to collect narrative examples (Li and Riedl 2015). Clark et al. (2018) explore the process of machine-in-the-loop creative writing and find that machine-generated suggestions should achieve a balance between coherency and surprise. Roemmele (2021) studies the inspiration-through-observation paradigm and finds that people produce appealing sentences when observing the generated examples. Compass identifies and fills in unnoticed missing information in stories (Mori et al. 2022). Padmakumar and He (2022) build a machine-in-the-loop system to rewrite a span of text or fill in sentences between two pieces of text when requested.
Recently, large language models (LLMs) have shown incredible power in text continuation, rewriting, few-shot learning, and so on. Many researchers have explored how LLMs can be used to support creativity. Storium fine-tunes GPT-2 to consider complicated contextual information (intro, character, and so on) to generate a few follow-up sentences to continue the story (Akoury et al. 2020). Story Centaur provides an interface where users can provide few-shot learning examples to teach LLMs new functions (Swanson et al. 2021). CoPoet, a collaborative poetry writing system, allows users to control an LLM by specifying the attributes of the desired text (Chakrabarty, Padmakumar, and He 2022). TaleBrush allows users to control a protagonist's fortune through a line sketching interaction, and the user-specified fortune curve is used to guide an LLM's story generation process (Chung et al. 2022). CoAuthor supports writing by providing a sentence to continue the given draft by GPT-3 (Lee, Liang, and Yang 2022). Sparks inspires scientific writing by using LLMs to generate sentences that could spark users' ideas (Gero, Liu, and Chilton 2022). Wordcraft allows users to interact with LLMs through a chatbot interface (Ippolito et al. 2022). Dramatron, built with an LLM and a prompt-chaining mechanism, could write theatre scripts and screenplays together with users (Mirowski et al. 2022). Unlike most of the prior works, where generated sentences are ready to use in the story, our work aims to generate a short summary for the follow-up story and expects users to develop the exact story content manually.
Story Generation
Traditional story generation focuses on producing logically coherent stories using planning or reasoning-based approaches (Riedl and Young 2010; Li et al. 2013). Recently, neural story generation models (Peng et al. 2018; Fan, Lewis, and Dauphin 2018) and pre-trained models (Radford et al. 2019; Keskar et al. 2019) have been used for story generation in an end-to-end manner. However, these models still suffer from the issue of generating repetitive and insufficiently diverse stories (See et al. 2019). To further enhance the coherence among sentences and events, researchers design a variety of intermediate representations to guide the story generation process, including event triplets (Martin et al. 2018), keyword storylines (Yao et al. 2019), critical phrases (Xu et al. 2018), action plans with semantic role labeling (SRL) (Fan, Lewis, and Dauphin 2019), content planning (keyphrase and sentence-level position) (Hua and Wang 2020), and plot structure based on SRL (Goldfarb-Tarrant et al. 2020). However, most of these approaches work on short stories, such as WritingPrompt (Fan, Lewis, and Dauphin 2018), ROCStories (Mostafazadeh et al. 2016), or WikiPlots (Bamman, O'Connor, and Smith 2013). Unlike real-world novels, which usually have more than 10,000 words, the stories from these datasets often end up with fewer than 1,000 or even 100 words.
Plot Prediction
We follow Huang and Huang (2021) to split a full story into a sequence of story blocks, where each story block contains a fixed number of sentences. Note that the size of the story block can vary to fulfill different purposes. Large story blocks (200 sentences or beyond) can capture the high-level ideas among chapters, whereas small story blocks (five or ten sentences) can be used to model event relationships in the near future. In this paper, we focus on medium-sized story blocks (20 sentences).
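To make the block segmentation concrete, the sketch below splits a book into consecutive, non-overlapping 20-sentence story blocks. It is only an illustration of the setup described above: the use of NLTK's sentence tokenizer, the function name, and dropping a short trailing block are our own choices, not necessarily the authors' exact implementation.

import nltk  # assumes nltk.download('punkt') has been run

def split_into_story_blocks(book_text, block_size=20):
    # Segment the book into sentences, then group them into fixed-size blocks.
    sentences = nltk.sent_tokenize(book_text)
    return [sentences[i:i + block_size]
            for i in range(0, len(sentences) - block_size + 1, block_size)]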
Next, we define a story plot as a three-sentence summary of a story block. The plot prediction task, thus, is defined as using the story plot of story block n to predict the story plot of story block n+1. In this section, we first describe how we collect the story plots using an extractive summarization model, and then detail how we adapt three existing story generation models to our problem.
Collecting Story Plots
To generate such a summary for every story block, we use Matchsum (Zhong et al. 2020), an extractive summarization model. We train the Matchsum model on the Booksum dataset (Kryściński et al. 2021) where each paragraph is paired with a one-or two-sentence summary. To ensure the training instances from Booksum are similar to those in our story block setup (20 sentences), only 20,709 paragraphs with more than 10 sentences are kept for training. The finetuned Matchsum model is then applied to our Bookcorpus dataset to generate the story plot for all the 900k story blocks. We randomly select 1k instances as the validation set in order to observe if the model converges or not in the training process.
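The Booksum filtering step described above can be sketched as follows. We assume each training example exposes a "paragraph" and a "summary" field; these names are illustrative rather than the dataset's actual schema, and the Matchsum training itself is not shown.

import nltk  # assumes nltk.download('punkt') has been run

def filter_booksum_examples(examples, min_sentences=10):
    # Keep only paragraphs long enough to resemble a 20-sentence story block.
    kept = []
    for ex in examples:
        if len(nltk.sent_tokenize(ex["paragraph"])) > min_sentences:
            kept.append(ex)
    return kept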
Story Plot Generation Models
Here, we adapt three existing models to our story plot generation task: (i) Fusion-based Seq2Seq (Fan, Lewis, and Dauphin 2018), (ii) Plan-and-Write (Yao et al. 2019), and (iii) GPT-2 (Radford et al. 2019) guided by semantic frame representation (Huang and Huang 2021).
Fusion-Based Seq2seq. The fusion-based mechanism (Fan, Lewis, and Dauphin 2018) is a hierarchical model where a seq2seq model is trained on top of a pre-trained seq2seq model. The underlying convolutional seq2seq model is trained on a premise or prompt, the plot of story block n in our case. The fusion model, another convolutional seq2seq model, is then trained on top of the previous seq2seq model to encourage the model to focus on the link between the prompt and the generated story, making it easier to generate consistent stories and reducing the tendency to drift off-topic. Given the prompt, the model generates one possible direction for story block n+1 in which the story could progress.
We tokenize the plots using NLTK, map words that appear fewer than 10 times in the training set to <UNK> (as sketched below), and follow the paper's default hyper-parameters to train the model (Fan, Lewis, and Dauphin 2018). The base seq2seq model is trained for 20 epochs and then used to train the fusion model, which takes 15 epochs. We use top-k sampling to generate story plots with k = 100, temperature = 0.8, and unknown token penalty = 100. Plot lengths are limited to 31 to 71 tokens, as the average length in the training set is 51.08.
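A minimal sketch of the rare-word handling used for the seq2seq baselines; the original implementation's exact tokenization and vocabulary code may differ.

from collections import Counter
import nltk

def unk_rare_words(training_plots, min_count=10, unk="<UNK>"):
    # Count tokens over all training plots and replace rare ones with <UNK>.
    counts = Counter(tok for plot in training_plots
                     for tok in nltk.word_tokenize(plot))
    vocab = {tok for tok, c in counts.items() if c >= min_count}
    return [[tok if tok in vocab else unk for tok in nltk.word_tokenize(plot)]
            for plot in training_plots]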
Plan-and-Write (P&W). Plan-and-Write (Yao et al. 2019) makes use of static planning, which generates storylines as an intermediate representation to create coherent and diverse stories. Storylines are represented as a sequence of important words that sketch the structure of the final story plot. The P&W model takes a prompt as the input to (i) first plan the storyline and (ii) then generate the whole story. Following Yao et al. (2019)'s setup, we apply the RAKE algorithm (Rose et al. 2010) to extract keywords from the plot of story block n+1 to form the storylines.
We tokenize the plots using NLTK and map words that appear fewer than 10 times in the training set to <UNK>. The storyline generation model is based on a 3-layer LSTM with embedding size = 300 and hidden size = 300; the plot generation model is based on a 5-layer LSTM with embedding size = 512 and hidden size = 512. We use Adam (Kingma and Ba 2015) as the optimizer to train the models. For the rest of the hyper-parameters, we follow the setting in the original paper. The storyline model is trained for 100 epochs and the plot generation model is trained for 40 epochs. We use temperature sampling to generate storylines and the final story plots with temperature = 0.8. Plan-and-Write does not handle unknown tokens, so we extend the implementation by setting the sampling probability of <UNK> to zero. Again, plot lengths are limited to 31 to 71 tokens. After obtaining the generated story plots, we further apply the Treebank detokenizer (Bird, Klein, and Loper 2009) to remove extra spaces and Truecase (Lita et al. 2003) to capitalize necessary letters.
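The storyline extraction above can be sketched with the rake-nltk implementation of RAKE; using this particular package and keeping the top five phrases are our assumptions, not necessarily the authors' exact setup.

from rake_nltk import Rake  # assumes nltk stopwords/punkt are downloaded

def extract_storyline(next_plot_text, max_keywords=5):
    # Rank candidate keyphrases in the next block's plot and keep the top few
    # as the storyline that Plan-and-Write conditions on.
    rake = Rake()
    rake.extract_keywords_from_text(next_plot_text)
    return rake.get_ranked_phrases()[:max_keywords]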
Frame-enhanced GPT-2 (FGPT-2). Semantic frame representations (Huang and Huang 2021) encode high-level semantic units into vectors using their corresponding importance, which has been shown to be effective for representing a longer context. We take advantage of the semantic frame representation to encode longer contextual information and use Huang and Huang (2021)'s semantic frame forecast model to generate the guidance for the follow-up story block. Note that the predicted frame representation contains semantic units that are expected to happen in the next story block, which serves as a goal-setting function in writing process theory (Flower and Hayes 1981). We build a sequence-to-sequence model using the pre-trained GPT-2 model, where two GPT-2 models are used for the encoder and decoder, respectively. The frame representation of story block n and the predicted frame representation of story block n+1 are passed through a linear layer to fit GPT-2's word embedding dimension and input into the GPT-2 encoder as two additional tokens. The encoder input can be described as $X = [x_1, x_2, \ldots, x_n, f_n, \hat{f}_{n+1}]$, where $x_i$ is the i-th word in the story plot of story block n, $f_n$ is the adapted frame representation of story block n, and $\hat{f}_{n+1}$ is the LGBM's prediction of the frame representation for story block n+1. We add the <START> and <PAD> tokens to the model to enable the sequence-to-sequence training framework. The sequence [<START>, $y_1, y_2, \ldots, y_n$, <|endoftext|>], where $y_i$ is the i-th word in the story plot of story block n+1, is used as the target to train the decoder.
The model is built using HuggingFace's GPT-2 implementation and is trained using the AdamW optimizer (Loshchilov and Hutter 2019) with initial learning rate = 3e-4 and weight decay factor = 1e-4. The model is trained for 800,000 steps with batch size = 32. Top-k sampling is used for generating story plots with top-k = 100, temperature = 0.8, and repetition penalty = 3.0. Plot lengths are limited to 36 to 76 tokens, as the average plot length using GPT-2's tokenizer is 56.
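The encoder input X = [x_1, ..., x_n, f_n, f_{n+1}-hat] described above can be sketched with HuggingFace Transformers as follows. This is a simplified illustration under our own assumptions (a single GPT2Model standing in for the encoder, a generic frame_dim, and our own class and variable names); it is not the authors' released code.

import torch
import torch.nn as nn
from transformers import GPT2Model

class FrameConditionedEncoder(nn.Module):
    """Sketch: append two projected frame vectors to the GPT-2 token embeddings."""
    def __init__(self, frame_dim, model_name="gpt2"):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained(model_name)
        hidden = self.gpt2.config.n_embd  # GPT-2 word embedding size (768 for "gpt2")
        self.frame_proj = nn.Linear(frame_dim, hidden)

    def forward(self, input_ids, frame_n, frame_n1_pred):
        tok_emb = self.gpt2.wte(input_ids)                    # (B, T, H): x_1 ... x_n
        f_n = self.frame_proj(frame_n).unsqueeze(1)           # (B, 1, H): f_n
        f_next = self.frame_proj(frame_n1_pred).unsqueeze(1)  # (B, 1, H): predicted frame of block n+1
        inputs_embeds = torch.cat([tok_emb, f_n, f_next], dim=1)
        return self.gpt2(inputs_embeds=inputs_embeds).last_hidden_state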
Quality Assessment
In this study, we examine the quality of plot generation.
Plot Prediction Assessment
In this plot prediction assessment task, the goal is to measure the quality of the plots automatically generated by the three models described in the Plot Prediction section. We take human-written plots extracted from the original books as baselines. A total of six different models are compared.
• Ground-Truth (GT). The gold standard story plot of story block n+1. We expect Ground-Truth to be the upper bound. Note that the story plot is obtained by applying the Matchsum model to extract a three-sentence summary from the target story block.
• Random-Future (RF). The story plot of a randomly selected story block n+u, where 5 ≤ u ≤ 15. Random-Future is expected to be a strong baseline as it is a follow-up story plot that happens later.
• Random-History (RH). The story plot of a randomly selected story block n-t, where 10 ≤ t ≤ 20. We expect Random-History to be a slightly weaker baseline as it describes something that has already happened.
• Fusion-Seq. The fusion-based Seq2seq model described in the Plot Prediction section.
• P&W. The plan-and-write model described in the Plot Prediction section.
• FGPT-2. The sequence-to-sequence GPT-2 model guided by frame representations as described in the Plot Prediction section.
The evaluation data is built from Huang and Huang (2021)'s Bookcorpus testing set, which contains 958 qualified books. We randomly sample one story block as the target story block n from each book, creating a total of 958 testing instances. In the following sections, we describe how we assess story plot quality using both automatic evaluation metrics and human judgments.
Automatic Evaluation
We evaluate the five generated plots using the Ground-Truth as the reference. The NLG-eval package (Sharma et al. 2017) is used, and four common metrics, BLEU-4, METEOR, ROUGE-L, and SkipThought Cosine Similarity (ST), are reported in Table 1. The results show that FGPT-2 outperforms the other models in the token-overlapping metrics, BLEU-4, METEOR, and ROUGE-L. In the semantic-based metric, ST, the human-written plots, Random-History and Random-Future, still perform better. However, prior work has shown that automatic evaluation metrics, especially token-overlapping metrics, are not entirely suitable for the story generation domain, as stories that differ from the target can still be good stories (Hsu et al. 2019). Therefore, we also conduct a human evaluation to measure quality.
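The automatic scores above can be reproduced roughly as follows with the nlg-eval package; the toy reference and hypothesis strings are placeholders, and exactly which metrics are available depends on how the package and its SkipThought models are installed.

from nlgeval import NLGEval

# Placeholder data: one generated plot and its ground-truth reference.
generated_plots = ["He returned to the manor at dawn."]
reference_plots = ["He came back to the manor at first light."]

nlg = NLGEval(no_glove=True)  # keep overlap metrics and SkipThought similarity
scores = nlg.compute_metrics(ref_list=[reference_plots], hyp_list=generated_plots)
print(scores["Bleu_4"], scores["METEOR"], scores["ROUGE_L"])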
Human Evaluation
We conduct a human evaluation on AMT to evaluate two aspects of story plot quality, consistency and storiability. In this task, workers are instructed to read a story snippet (20 sentences) along with six follow-up story plots. We then ask workers to rank the story plots according to consistency and storiability. Consistency assesses whether the given story plot makes sense in its context (story snippet); storiability measures whether readers would be curious to read the complete story developed from the given story plot (Roemmele 2021). Note that we ask only a single question in each HIT so that workers do not get confused. To make sure workers spend enough time reading the story snippet and the story plots, we add a 30-second submission lock to the interface. To alleviate the negative effect and bias caused by the decoding process (such as an unfinished sentence) and to control the reading time, we apply three rules to create the evaluation set: (i) all six story plots have to be within 25-65 words; (ii) story block n has to be within 150-300 words; (iii) story blocks n and n+1 do not cross chapters. Out of the 247 qualified instances, we randomly select 200 instances for evaluation. For each instance, we collect five assignments, resulting in a total of 1,000 rankings per aspect. Given that each HIT contains around 500 words to read, we estimate the task to take around 2 minutes and thus pay $0.33 per assignment.
The consistency ranking results in Table 2 show that Ground-Truth ≺ FGPT-2 ≺ Random-Future < Random-History ≺ Fusion-Seq < P&W, where ≺ stands for "significantly better". Surprisingly, FGPT-2 is ranked as more consistent than the two human-written story plots, Random-Future and Random-History. The other two story plot generation models, Fusion-Seq and P&W, are judged to have lower consistency. Table 3 shows the storiability rankings: Ground-Truth ≺ Random-History < Random-Future < FGPT-2 ≺ P&W < Fusion-Seq. Again, Ground-Truth serves as the upper bound and is considered the most storiable. Although FGPT-2 does not outperform the two human-written story plot baselines here, it achieves the same level of storiability as them. We thus conclude that FGPT-2 is a good choice for plot generation, as it can produce consistent and storiable story plots. Table 4 shows an example output of the six models for comparison.
Table 2: Consistency ranking result: Ground-Truth ≺ FGPT-2 ≺ Random-Future < Random-History ≺ Fusion-Seq < P&W, where ≺ stands for "significantly better" (n=1000; *** indicates a p-value smaller than 0.001).
Table 3: Storiability ranking result: Ground-Truth ≺ Random-History < Random-Future < FGPT-2 ≺ P&W < Fusion-Seq, where ≺ stands for "significantly better" (n=1000; *** indicates a p-value smaller than 0.001).
Human Evaluation through a Writing Task
The quality assessment experiment suggests that FGPT-2 can generate story plots that reach the level of random human-written baselines (Random-Future and Random-History) in terms of consistency and storiability. To understand how the generated story plots influence people's story writing, we conduct a preliminary study on AMT using a story continuation task. In this section, we first describe the study protocol and then discuss the results.
Baselines
Since the writing task is demanding in general, to avoid adding too much workload for workers, we compare only four models.
• Ground-Truth (GT). As described above.
• Random-Future (RF). As described above.
• FGPT-2. As described above.
• GPT-3. We use OpenAI's GPT-3 API (Brown et al. 2020) to generate follow-up story plots with the prompt, "Given the story snippet: [story] Describe a follow-up story arc within 30 words", where [story] is filled with the full story block n. The parameters used were model = text-davinci-002, temperature = 0.95, max-tokens = 76, top-p = 0.95, frequency-penalty = 0.5, presence-penalty = 0.5, and best-of = 5 (a hedged sketch of this call follows the list).
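The GPT-3 call described in the last bullet, sketched against the legacy (pre-1.0) openai Python client that exposed the Completion endpoint; the placeholder story text and variable names are ours.

import openai  # assumes openai.api_key has been set

story_block_n = "..."  # placeholder: the full text of story block n
prompt = ("Given the story snippet: " + story_block_n +
          " Describe a follow-up story arc within 30 words")

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    temperature=0.95,
    max_tokens=76,
    top_p=0.95,
    frequency_penalty=0.5,
    presence_penalty=0.5,
    best_of=5,
)
gpt3_plot = response["choices"][0]["text"].strip()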
Study Protocol
Figure 2: A section of the interface used for the story continuation task. We blur the story plots to prevent workers from copying the exact story plots. Workers need to click on a story plot to unblur it.
In this story continuation task, workers are asked to finish three steps: (i) reading through a story snippet; (ii) writing a 100-word follow-up story to continue the given story snippet with access to story plots; and (iii) answering questions about their experience. We first show a 20-sentence story snippet (story block n) and ask workers to carefully read through it in step (i). In step (ii), four story plots generated by four different methods are provided; the order of the four story plots is randomized. After reading all the story plots, workers are instructed to write a 100-word story to continue the given story snippet. As shown in Figure 2, to prevent workers from simply copying-and-pasting or typing in the exact story plot ideas, we blur the four story plots. Workers need to click on a story plot to see the exact text (the text is blurred again when the mouse leaves). After finishing the writing task, we ask workers to fill out a short questionnaire regarding their experience with the story plots. In the questionnaire, we ask seven questions to measure four different aspects:
1. Inspiringness. All story plots that inspired the writing (Q1). Multiple selections are allowed.
2. Helpfulness. The most/least helpful story plot (Q2, Q3).
3. Readability. The easiest/hardest story plot to comprehend (Q4, Q5).
4. Creativity. The most/least creative story plot (Q6, Q7).
We add a 180-second submission lock (workers are allowed to submit only after 180 seconds) to the interface to ensure workers spend enough time on the task. Each task is estimated to take six to nine minutes. Aiming to pay $10 per hour, we set the payment for each task to $1.50.
Table 4: Example of the story snippet and the six follow-up story plots.
Story Snippet. He looked away, blinking back a tear. The bunker was more centered than Josue had realized, protected on all sides by as much of the manor as possible. He liked that. Anywhere he would walk within the new manor, he would be close to his father's final resting place. John looked down at the square section. "We could rebuild the bunker, if you'd like." Josue was sure, even if they rebuilt it, he would never find the wherewithal to use it. He couldn't think of a more fitting memorial to his father than to leave it the way it was. "Please, don't change a thing." He looked out at the perimeter of the compound. They had almost finished the walls. "Those will protect us." "They will hold for the immediate need." John led Josue through the compound to the eastern hillside overlooking the manor. The shimmer of the obfuscator remained above them. A separate wall surrounded a new plot. He felt a sense of reverence as the Elder led him into the enclosure. In the last rays of the day sun, symmetric monuments reflected pink, planted in neat rows and columns along the hillside. "We took the liberty of adding a memorial garden." John walked to the top of the hill, where a solitary statue stood apart from the others, reminding Josue of a General reviewing his troops.
Ground-Truth. Josue wiped his damp cheek. A fire burned in his breast at the very sight of the name. He died doing what he could to protect those he loved."
Random-History. Josue tried to remember if he had been on Omri property. Keep away from the mine for a while. His father caught himself and put on a tired smile.
Random-Future. Master Hector raised an eyebrow. Josue pushed the thoughts of Timeos out of his mind. Hector pulled a longpole from the rack and threw it to Josue.
Fusion-Seq. he shouted. The mob rushed up to the door, and with a sickening clang, the doors began to snap open. With a flash, it was the assassin, the assassin.
P&W. John thought it was a best thing. Bill said, "I have to go home and start a new life." Tom asked, looking at the sign.
FGPT-2. John walked back to the edge of the building and looked down. He could see the end of the main staircase on the other side of the wall. It was lined with columns, each leading up to the roof.
As a preliminary study, we only test on five instances. We simply take the first five instances from the human evaluation of the quality assessment experiment. For each instance, we collect five assignments, resulting in a total of 25 stories along with 25 questionnaires.
Questionnaire Result
After obtaining all 25 assignments, we first check all the written stories. Despite adding a 180-second submission lock and the blurring function, there are a lot of spamming submissions, such as irrelevant texts, random online stories, and random keystrokes. We read through all the written stories and manually remove the spamming assignments, resulting in a total of 17 valid assignments.
Table 5: Questionnaire results from the story continuation task. GPT-3 is rated the most inspiring.
Aspect | GT | RF | FGPT-2 | GPT-3
Inspiringness ↑ | 0.294 | 0.294 | 0.176 | 0.647
Helpfulness Most ↑ | 0.235 | 0.353 | 0.059 | 0.353
Helpfulness Least ↓ | 0.000 | 0.294 | 0.294 | 0.412
Helpfulness Overall ↑ | 0.235 | 0.059 | −0.235 | −0.059
Readability Easiest ↑ | 0.353 | 0.235 | 0.176 | 0.235
Readability Hardest ↓ | 0.294 | 0.059 | 0.471 | 0.176
Readability Overall ↑ | 0.059 | 0.176 | −0.294 | 0.059
Creativity Most ↑ | 0.353 | 0.176 | 0.000 | 0.471
Creativity Least ↓ | 0.176 | 0.294 | 0.353 | 0.176
Creativity Overall ↑ | 0.176 | −0.118 | −0.353 | 0.294
Table 5 shows the questionnaire results. For inspiringness, as workers are asked to select all story plots that inspired their written story, we report the percentage of a model being considered inspiring over the 17 assignments. GPT-3 inspires more than half of the stories (0.647), while FGPT-2 (0.176) is the least inspiring. For helpfulness, readability, and creativity, we report the percentage of a model being selected as the most/least helpful, the easiest/hardest to read, and the most/least creative. The overall score is computed as Most − Least (or Easiest − Hardest). Ground-Truth, Random-Future, and GPT-3 are considered the most helpful, the easiest to read, and the most creative, respectively. We notice that people do not favor FGPT-2 in any of these three aspects when compared to the other baselines. However, FGPT-2 still inspires 17.6% of the stories. This phenomenon echoes Roemmele (2021)'s finding from the inspiration-through-observation paradigm, where human writing can be enhanced by machine-generated texts even when those texts are less storiable.
We also notice that although GPT-3 gets the highest "most helpful" votes, it also gets the highest "least helpful" votes. This would require more analysis to understand why.
How do story plots affect story writing?
The questionnaire provides self-reported results. To understand how the story plots influence story writing, we analyze the relationship between the story plots and the follow-up stories written by workers. Note that each HIT assignment comes with one human-written follow-up story and four machine-generated story plots. Using these data, we measure two aspects: (i) semantic similarity and (ii) token overlap.
Semantic Similarity. We encode the follow-up stories and the story plots using Sentence-BERT (Reimers and Gurevych 2019) with the sentence-transformers/sentence-t5-base checkpoint. Cosine similarity is then used to compute the semantic similarity between a follow-up story and a story plot. To get a better sense of this cosine similarity value, we also include a Random baseline for comparison. The Random baseline reports the semantic similarity between the follow-up stories and a set of randomly sampled 40-word paragraphs. We choose 40 words to match the length of the story plots. These paragraphs are randomly selected from NLTK's Gutenberg corpus. We start with 3,400 random paragraphs (two sentences each) and remove those shorter than 30 words or longer than 50 words, leaving a total of 1,113 valid short paragraphs in the random set. As shown in Table 6, story plots generated by GPT-3 have the highest semantic similarity (0.840) with the human-written follow-up stories, suggesting that workers do get inspiration from GPT-3. Ground-Truth (0.816), Random-Future (0.795), and FGPT-2 (0.795) are slightly lower. However, all of the models score higher than the Random baseline, suggesting that workers do get inspired to some extent, which again echoes the inspiration-through-observation paradigm (Roemmele 2021).
Table 6: Semantic similarity between the story plot idea and the written story.
 | GT | RF | FGPT-2 | GPT-3 | Random
Similarity | 0.816 | 0.795 | 0.795 | 0.840 | 0.787
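The similarity computation above can be sketched with the sentence-transformers library and the sentence-t5-base checkpoint mentioned in the text; the placeholder strings and any preprocessing are our own assumptions.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/sentence-t5-base")

follow_up_story = "..."  # placeholder: a worker's 100-word continuation
story_plot = "..."       # placeholder: one machine-generated story plot

story_emb = model.encode(follow_up_story, convert_to_tensor=True)
plot_emb = model.encode(story_plot, convert_to_tensor=True)
similarity = util.cos_sim(story_emb, plot_emb).item()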
Token Overlap. To understand the effect at the token level, we use awesome-align (Dou and Neubig 2021) to obtain token alignments between the follow-up story and the story plots. Awesome-align computes the similarity over tokens' contextual embeddings to assign alignments. Compared to exact token overlap, awesome-align provides a soft overlap where semantically similar words can be identified. Upon getting the alignment, we compute two scores:
1. Story Coverage: the percentage of the follow-up story that can be found in the story plots. This reflects the amount of information contributed by the story plots.
2. Plot Coverage: the percentage of the story plots that can be found in the follow-up story. This also indicates the helpfulness of the story plots.
Note that when computing the coverage scores, we exclude punctuation and stop words. Again, we add a Random baseline to help us interpret the scores. As shown in Table 7, Ground-Truth and Random-Future help more at the token level. Although FGPT-2 is less helpful compared to the other strong baselines, it still somewhat affects workers' writing.
Table 7: GT and RF contribute more at the token level.
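awesome-align is usually run as a command-line tool that emits "i-j" token-index pairs; assuming such an alignment is already available, the two coverage scores can be computed roughly as below. The function name and stop-word handling are our own simplifications.

import string
from nltk.corpus import stopwords  # assumes nltk.download('stopwords') has been run

SKIP = set(stopwords.words("english")) | set(string.punctuation)

def coverage_scores(story_tokens, plot_tokens, alignment):
    # alignment: iterable of (story_index, plot_index) pairs from awesome-align.
    aligned_story = {i for i, _ in alignment}
    aligned_plot = {j for _, j in alignment}
    story_content = [i for i, t in enumerate(story_tokens) if t.lower() not in SKIP]
    plot_content = [j for j, t in enumerate(plot_tokens) if t.lower() not in SKIP]
    story_cov = sum(i in aligned_story for i in story_content) / max(len(story_content), 1)
    plot_cov = sum(j in aligned_plot for j in plot_content) / max(len(plot_content), 1)
    return story_cov, plot_cov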
The difference between semantic helpfulness and token-level helpfulness probably causes workers to vote GPT-3 as the most helpful but also the least helpful.
Discussion
The preliminary experiment on the story continuation task somewhat suggests that the proposed FGPT-2 approach only provides limited help in real writing tasks. We identify a few possible reasons.
Limitations of our story plot formulation. To provide support for long stories, we extract only three sentences from a story block to form a story plot. We expect story plots to capture all the essential information and represent the story block, but much of the information is indeed missing. This is probably caused by the use of an extractive summarization method, which essentially selects a few sentences from the story block. Therefore, to include all the essential information, the story would need to contain a few very informative sentences, which is generally not the case. In the future, we will explore other ways to condense the information in a story block.
The study on the short story writing task might not capture the difficulty of novel writing. The proposed method is built for stories, such as novels, that are long enough to break recent story generation models. However, conducting experiments with people on such long texts is hard. To evaluate the system in the desired context, we will conduct a formal user study in the future.
Conclusion
In this paper, we generate short future story plots as follow-up story ideas to help writers continue their stories. Results on AMT confirm that using FGPT-2 as the plot prediction model yields plots that are significantly more consistent than the two human-written baselines, Random-History and Random-Future. When comparing storiability, i.e., how appealing a story plot is for readers, FGPT-2 is competitive with the two random baselines. A preliminary human study with a story continuation task suggests that FGPT-2 can positively affect story writing, but there are still difficulties to overcome. In the future, we will (i) explore better ways to build story plots and (ii) integrate the proposed function into a real editor (e.g., Google Docs) and conduct studies to measure whether writers benefit in practical fiction writing.
Acknowledgments
The authors extend their heartfelt appreciation to the late Dr. Arzoo Katiyar for her unwavering support and insightful suggestions throughout the course of this project. The authors also acknowledge the valuable contributions of the anonymous reviewers, who provided constructive feedback to enhance the quality of this work.
STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. N Akoury, S Wang, J Whiting, S Hood, N Peng, M Iyyer, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsOnlineAkoury, N.; Wang, S.; Whiting, J.; Hood, S.; Peng, N.; and Iyyer, M. 2020. STORIUM: A Dataset and Evaluation Plat- form for Machine-in-the-Loop Story Generation. In Pro- ceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6470-6484. On- line: Association for Computational Linguistics.
Learning Latent Personas of Film Characters. D Bamman, B O'connor, N A Smith, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. the 51st Annual Meeting of the Association for Computational LinguisticsSofia, BulgariaAssociation for Computational Linguistics1Bamman, D.; O'Connor, B.; and Smith, N. A. 2013. Learn- ing Latent Personas of Film Characters. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), 352-361. Sofia, Bulgaria: Association for Computational Linguistics.
Natural Language Processing with Python. S Bird, E Klein, E Loper, O'Reilly MediaBird, S.; Klein, E.; and Loper, E. 2009. Natural Language Processing with Python. O'Reilly Media.
Language Models are Few-Shot Learners. T Brown, B Mann, N Ryder, M Subbiah, J D Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, A Herbert-Voss, G Krueger, T Henighan, R Child, A Ramesh, D Ziegler, J Wu, C Winter, C Hesse, M Chen, E Sigler, M Litwin, S Gray, B Chess, J Clark, C Berner, S Mccandlish, A Radford, I Sutskever, D Amodei, H Larochelle, M Ranzato, R Hadsell, M Balcan, H Lin, Advances in Neural Information Processing Systems. Curran Associates, Inc33Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Mod- els are Few-Shot Learners. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neu- ral Information Processing Systems, volume 33, 1877-1901. Curran Associates, Inc.
Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing. T Chakrabarty, V Padmakumar, H He, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. the 2022 Conference on Empirical Methods in Natural Language ProcessingChakrabarty, T.; Padmakumar, V.; and He, H. 2022. Help me write a poem: Instruction Tuning as a Vehicle for Collabora- tive Poetry Writing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
TaleBrush: sketching stories with generative pretrained language models. J J Y Chung, W Kim, K M Yoo, H Lee, E Adar, M Chang, Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. the 2022 CHI Conference on Human Factors in Computing SystemsChung, J. J. Y.; Kim, W.; Yoo, K. M.; Lee, H.; Adar, E.; and Chang, M. 2022. TaleBrush: sketching stories with genera- tive pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-19.
Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories. E Clark, A S Ross, C Tan, Y Ji, N A Smith, 978-1-4503-4945-123rd International Conference on Intelligent User Interfaces, IUI '18. New York, NY, USAACMClark, E.; Ross, A. S.; Tan, C.; Ji, Y.; and Smith, N. A. 2018. Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories. In 23rd International Conference on Intelligent User Interfaces, IUI '18, 329-340. New York, NY, USA: ACM. ISBN 978-1-4503-4945-1.
Word Alignment by Finetuning Embeddings on Parallel Corpora. Z.-Y Dou, G Neubig, Conference of the European Chapter of the Association for Computational Linguistics (EACL). Dou, Z.-Y.; and Neubig, G. 2021. Word Alignment by Fine- tuning Embeddings on Parallel Corpora. In Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Hierarchical Neural Story Generation. A Fan, M Lewis, Y Dauphin, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics1Fan, A.; Lewis, M.; and Dauphin, Y. 2018. Hierarchical Neural Story Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 889-898. Melbourne, Australia: Association for Computational Linguistics.
Strategies for Structuring Story Generation. A Fan, M Lewis, Y Dauphin, Proceedings of the 57th. the 57thFan, A.; Lewis, M.; and Dauphin, Y. 2019. Strategies for Structuring Story Generation. In Proceedings of the 57th
2650-2660Annual Meeting of the Association for Computational Linguistics. Florence, ItalyAssociation for Computational LinguisticsAnnual Meeting of the Association for Computational Lin- guistics, 2650-2660. Florence, Italy: Association for Com- putational Linguistics.
A cognitive process theory of writing. College composition and communication. L Flower, J R Hayes, 32Flower, L.; and Hayes, J. R. 1981. A cognitive process the- ory of writing. College composition and communication, 32(4): 365-387.
InkWell: A Creative Writer's Creative Assistant. R P Gabriel, J Chen, J Nichols, Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition. the 2015 ACM SIGCHI Conference on Creativity and CognitionACMGabriel, R. P.; Chen, J.; and Nichols, J. 2015. InkWell: A Creative Writer's Creative Assistant. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cogni- tion, 93-102. ACM.
Metaphoria: An Algorithmic Companion for Metaphor Creation. K I Gero, L B Chilton, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19. the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19New York, NY, USA: Association for Computing Machinery. ISBN 9781450359702Gero, K. I.; and Chilton, L. B. 2019. Metaphoria: An Algo- rithmic Companion for Metaphor Creation. In Proceedings of the 2019 CHI Conference on Human Factors in Comput- ing Systems, CHI '19, 1-12. New York, NY, USA: Associa- tion for Computing Machinery. ISBN 9781450359702.
Sparks: Inspiration for Science Writing Using Language Models. K I Gero, V Liu, L Chilton, Designing Interactive Systems Conference, DIS '22. New York, NY, USAAssociation for Computing Machinery. ISBN 9781450393584Gero, K. I.; Liu, V.; and Chilton, L. 2022. Sparks: Inspiration for Science Writing Using Language Models. In Design- ing Interactive Systems Conference, DIS '22, 1002-1019. New York, NY, USA: Association for Computing Machin- ery. ISBN 9781450393584.
Content Planning for Neural Story Generation with Aristotelian Rescoring. S Goldfarb-Tarrant, T Chakrabarty, R Weischedel, N Peng, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online: Association for Computational LinguisticsGoldfarb-Tarrant, S.; Chakrabarty, T.; Weischedel, R.; and Peng, N. 2020. Content Planning for Neural Story Gen- eration with Aristotelian Rescoring. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), 4319-4338. Online: Associa- tion for Computational Linguistics.
Visual Story Post-Editing. T.-Y Hsu, C.-Y Huang, Y.-C Hsu, T.-H Huang, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsHsu, T.-Y.; Huang, C.-Y.; Hsu, Y.-C.; and Huang, T.-H. 2019. Visual Story Post-Editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, 6581-6586.
PAIR: Planning and Iterative Refinement in Pre-trained Transformers for Long Text Generation. X Hua, L Wang, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online: Association for Computational LinguisticsHua, X.; and Wang, L. 2020. PAIR: Planning and Itera- tive Refinement in Pre-trained Transformers for Long Text Generation. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing (EMNLP), 781-793. Online: Association for Computational Linguis- tics.
Heteroglossia: In-Situ Story Ideation with the Crowd. C.-Y Huang, S.-H Huang, T.-H K Huang, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. the 2020 CHI Conference on Human Factors in Computing SystemsHuang, C.-Y.; Huang, S.-H.; and Huang, T.-H. K. 2020. Het- eroglossia: In-Situ Story Ideation with the Crowd. In Pro- ceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1-12.
Semantic Frame Forecast. C.-Y Huang, T.-H Huang, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsHuang, C.-Y.; and Huang, T.-H. 2021. Semantic Frame Forecast. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, 2702- 2713. Online: Association for Computational Linguistics.
Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers. D Ippolito, A Yuan, A Coenen, S Burnam, arXiv:2211.05030arXiv preprintIppolito, D.; Yuan, A.; Coenen, A.; and Burnam, S. 2022. Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers. arXiv preprint arXiv:2211.05030.
N S Keskar, B Mccann, L R Varshney, C Xiong, R Socher, arXiv:1909.05858Ctrl: A conditional transformer language model for controllable generation. arXiv preprintKeskar, N. S.; McCann, B.; Varshney, L. R.; Xiong, C.; and Socher, R. 2019. Ctrl: A conditional transformer lan- guage model for controllable generation. arXiv preprint arXiv:1909.05858.
Adam: A Method for Stochastic Optimization. D P Kingma, J Ba, 3rd International Conference on Learning Representations. Bengio, Y.and LeCun, Y.San Diego, CA, USAConference Track ProceedingsKingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Confer- ence Track Proceedings.
W Kryściński, N Rajani, D Agarwal, C Xiong, D Radev, arXiv:2105.08209BookSum: A Collection of Datasets for Long-form Narrative Summarization. arXiv preprintKryściński, W.; Rajani, N.; Agarwal, D.; Xiong, C.; and Radev, D. 2021. BookSum: A Collection of Datasets for Long-form Narrative Summarization. arXiv preprint arXiv:2105.08209.
CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. M Lee, P Liang, Yang , Q , Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22. the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22New York, NY, USAAssociation for Computing MachineryISBN 9781450391573Lee, M.; Liang, P.; and Yang, Q. 2022. CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22. New York, NY, USA: Association for Computing Machinery. ISBN 9781450391573.
Story generation with crowdsourced plot graphs. B Li, S Lee-Urban, G Johnston, M Riedl, Twenty-Seventh AAAI Conference on Artificial Intelligence. Li, B.; Lee-Urban, S.; Johnston, G.; and Riedl, M. 2013. Story generation with crowdsourced plot graphs. In Twenty- Seventh AAAI Conference on Artificial Intelligence.
Scheherazade: Crowd-powered interactive narrative generation. B Li, M Riedl, Twenty-Ninth AAAI Conference on Artificial Intelligence. Li, B.; and Riedl, M. 2015. Scheherazade: Crowd-powered interactive narrative generation. In Twenty-Ninth AAAI Con- ference on Artificial Intelligence.
Truecasing. L V Lita, A Ittycheriah, S Roukos, N Kambhatla, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. the 41st Annual Meeting of the Association for Computational LinguisticsLita, L. V.; Ittycheriah, A.; Roukos, S.; and Kambhatla, N. 2003. Truecasing. In Proceedings of the 41st Annual Meet- ing of the Association for Computational Linguistics, 152- 159.
Decoupled Weight Decay Regularization. I Loshchilov, F Hutter, International Conference on Learning Representations. Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight De- cay Regularization. In International Conference on Learn- ing Representations.
Event representations for automated story generation with deep neural nets. L J Martin, P Ammanabrolu, X Wang, W Hancock, S Singh, B Harrison, M O Riedl, Thirty-Second AAAI Conference on Artificial Intelligence. Martin, L. J.; Ammanabrolu, P.; Wang, X.; Hancock, W.; Singh, S.; Harrison, B.; and Riedl, M. O. 2018. Event repre- sentations for automated story generation with deep neural nets. In Thirty-Second AAAI Conference on Artificial Intel- ligence.
Co-writing screenplays and theatre scripts with language models: An evaluation by industry professionals. P Mirowski, K W Mathewson, J Pittman, R Evans, arXiv:2209.14958arXiv preprintMirowski, P.; Mathewson, K. W.; Pittman, J.; and Evans, R. 2022. Co-writing screenplays and theatre scripts with language models: An evaluation by industry professionals. arXiv preprint arXiv:2209.14958.
Y Mori, H Yamane, R Shimizu, Y Mukuta, T Harada, COMPASS: a Creative. Support System that Alerts Novelists to the Unnoticed Missing Contents. CoRR, abs/2202.13151Mori, Y.; Yamane, H.; Shimizu, R.; Mukuta, Y.; and Harada, T. 2022. COMPASS: a Creative Support System that Alerts Novelists to the Unnoticed Missing Contents. CoRR, abs/2202.13151.
A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories. N Mostafazadeh, N Chambers, X He, D Parikh, D Batra, L Vanderwende, P Kohli, Allen , J , NAACL'16. San Diego, CaliforniaACLMostafazadeh, N.; Chambers, N.; He, X.; Parikh, D.; Ba- tra, D.; Vanderwende, L.; Kohli, P.; and Allen, J. 2016. A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories. In NAACL'16, 839-849. San Diego, California: ACL.
Online: Association for Computational Linguistics. Padmakumar, V.; and He, H. 2022. Machine-in-the-Loop Rewriting for Creative Image Captioning. N Mostafazadeh, A Kalyanpur, L Moon, D Buchanan, L Berkowitz, O Biran, J Chu-Carroll, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSeattleUnited States: Association for Computational LinguisticsMostafazadeh, N.; Kalyanpur, A.; Moon, L.; Buchanan, D.; Berkowitz, L.; Biran, O.; and Chu-Carroll, J. 2020. GLU- COSE: GeneraLized and COntextualized Story Explana- tions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 4569- 4586. Online: Association for Computational Linguistics. Padmakumar, V.; and He, H. 2022. Machine-in-the-Loop Rewriting for Creative Image Captioning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, 573-586. Seattle, United States: Asso- ciation for Computational Linguistics.
Towards Controllable Story Generation. N Peng, M Ghazvininejad, J May, K Knight, Proceedings of the First Workshop on Storytelling. the First Workshop on StorytellingNew Orleans, LouisianaAssociation for Computational LinguisticsPeng, N.; Ghazvininejad, M.; May, J.; and Knight, K. 2018. Towards Controllable Story Generation. In Proceedings of the First Workshop on Storytelling, 43-49. New Orleans, Louisiana: Association for Computational Linguistics.
Language models are unsupervised multitask learners. A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever, OpenAI Blog. 81Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language models are unsupervised mul- titask learners. OpenAI Blog, 1(8).
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. N Reimers, I Gurevych, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsReimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3982-3992. Hong Kong, China: Asso- ciation for Computational Linguistics.
Narrative planning: Balancing plot and character. M O Riedl, R M Young, Journal of Artificial Intelligence Research. 39Riedl, M. O.; and Young, R. M. 2010. Narrative planning: Balancing plot and character. Journal of Artificial Intelli- gence Research, 39: 217-268.
Inspiration through Observation: Demonstrating the Influence of Automatically Generated Text on Creative Writing. M Roemmele, 12th International Conference on Computational Creativity. Roemmele, M. 2021. Inspiration through Observation: Demonstrating the Influence of Automatically Generated Text on Creative Writing. In 12th International Conference on Computational Creativity.
Creative help: a story writing assistant. M Roemmele, A S Gordon, S Rose, D Engel, N Cramer, W Cowley, International Conference on Interactive Digital Storytelling. SpringerISBN 9780470689646Roemmele, M.; and Gordon, A. S. 2015. Creative help: a story writing assistant. In International Conference on In- teractive Digital Storytelling, 81-92. Springer. Rose, S.; Engel, D.; Cramer, N.; and Cowley, W. 2010. Au- tomatic Keyword Extraction from Individual Documents, 1 - 20. ISBN 9780470689646.
Do Massively Pretrained Language Models Make Better Storytellers?. A See, A Pappu, R Saxena, A Yerukola, C D Manning, Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). the 23rd Conference on Computational Natural Language Learning (CoNLL)Hong Kong, ChinaAssociation for Computational LinguisticsSee, A.; Pappu, A.; Saxena, R.; Yerukola, A.; and Man- ning, C. D. 2019. Do Massively Pretrained Language Mod- els Make Better Storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learn- ing (CoNLL), 843-861. Hong Kong, China: Association for Computational Linguistics.
Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation. S Sharma, L El Asri, H Schulz, J Zumer, abs/1706.09799CoRRSharma, S.; El Asri, L.; Schulz, H.; and Zumer, J. 2017. Relevance of Unsupervised Metrics in Task-Oriented Dia- logue for Evaluating Natural Language Generation. CoRR, abs/1706.09799.
Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool. B Swanson, K Mathewson, B Pietrzak, S Chen, M Dinalescu, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. the 16th Conference of the European Chapter of the Association for Computational Linguistics: System DemonstrationsOnlineAssociation for Computational LinguisticsSwanson, B.; Mathewson, K.; Pietrzak, B.; Chen, S.; and Dinalescu, M. 2021. Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool. In Proceed- ings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demon- strations, 244-256. Online: Association for Computational Linguistics.
A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation. J Xu, X Ren, Y Zhang, Q Zeng, X Cai, X Sun, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsXu, J.; Ren, X.; Zhang, Y.; Zeng, Q.; Cai, X.; and Sun, X. 2018. A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation. In Pro- ceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4306-4315. Brussels, Bel- gium: Association for Computational Linguistics.
Plan-and-write: Towards better automatic storytelling. L Yao, N Peng, R Weischedel, K Knight, D Zhao, Yan , R , Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Yao, L.; Peng, N.; Weischedel, R.; Knight, K.; Zhao, D.; and Yan, R. 2019. Plan-and-write: Towards better automatic sto- rytelling. In Proceedings of the AAAI Conference on Artifi- cial Intelligence, volume 33, 7378-7385.
Extractive Summarization as Text Matching. M Zhong, P Liu, Y Chen, D Wang, X Qiu, X Huang, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline: Association for Computational LinguisticsZhong, M.; Liu, P.; Chen, Y.; Wang, D.; Qiu, X.; and Huang, X. 2020. Extractive Summarization as Text Matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 6197-6208. Online: Associ- ation for Computational Linguistics.
| [
"https://github.com/appleternity/Story-Plot-Generation."
] |
[
"MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages",
"MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages"
] | [
"Zhiruo Wang zhiruow@cs.cmu.edu \n♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n\n",
"Grace Cuenca gcuenca@princeton.edu \n♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n\n",
"Shuyan Zhou shuyanzh@cs.cmu.edu \n♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n\n",
"♠ Frank \n♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n\n",
"F Xu \n♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n\n",
"Graham Neubig gneubig@cs.cmu.edu \n♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n\n"
] | [
"♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n",
"♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n",
"♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n",
"♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n",
"♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n",
"♠♣ ♠ Carnegie\nMellon University ♦ Princeton University ♣ Inspired Cognition\n"
] | [
"Association for Computational Linguistics: EACL 2023"
] | While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric. This creates a barrier for program developers who are not proficient in English. To mitigate this gap in technology development across languages, we propose a multilingual dataset, MCoNaLa, to benchmark code generation from natural language commands extending beyond English. Modeled off of the methodology from the English Code/Natural Language Challenge (CoNaLa) dataset, we annotated a total of 896 NL-Code pairs in three languages: Spanish, Japanese, and Russian. We present a systematic evaluation on MCoNaLa by testing state-of-the-art code generation systems. Although the difficulties vary across three languages, all systems lag significantly behind their English counterparts, revealing the challenges in adapting code generation to new languages. | 10.48550/arxiv.2203.08388 | [
"https://www.aclanthology.org/2023.findings-eacl.20.pdf"
] | 247,475,896 | 2203.08388 | 9076a345f26d2e147de584861f38915424c0560b |
MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages
May 2-6, 2023
Zhiruo Wang zhiruow@cs.cmu.edu
♠♣ ♠ Carnegie
Mellon University ♦ Princeton University ♣ Inspired Cognition
Grace Cuenca gcuenca@princeton.edu
♠♣ ♠ Carnegie
Mellon University ♦ Princeton University ♣ Inspired Cognition
Shuyan Zhou shuyanzh@cs.cmu.edu
♠♣ ♠ Carnegie
Mellon University ♦ Princeton University ♣ Inspired Cognition
♠ Frank
♠♣ ♠ Carnegie
Mellon University ♦ Princeton University ♣ Inspired Cognition
F Xu
♠♣ ♠ Carnegie
Mellon University ♦ Princeton University ♣ Inspired Cognition
Graham Neubig gneubig@cs.cmu.edu
♠♣ ♠ Carnegie
Mellon University ♦ Princeton University ♣ Inspired Cognition
MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages
Association for Computational Linguistics: EACL 2023
May 2-6, 2023
While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric. This creates a barrier for program developers who are not proficient in English. To mitigate this gap in technology development across languages, we propose a multilingual dataset, MCoNaLa, to benchmark code generation from natural language commands extending beyond English. Modeled off of the methodology from the English Code/Natural Language Challenge (CoNaLa) dataset, we annotated a total of 896 NL-Code pairs in three languages: Spanish, Japanese, and Russian. We present a systematic evaluation on MCoNaLa by testing state-of-the-art code generation systems. Although the difficulties vary across three languages, all systems lag significantly behind their English counterparts, revealing the challenges in adapting code generation to new languages.
Introduction
There are an increasing number of applications related to "code intelligence", such as code summarization (Allamanis et al., 2016; Hu et al., 2018; Ahmad et al., 2020) and natural language (NL) to code generation (Rabinovich et al., 2017; Norouzi et al., 2021; Wang et al., 2021), accompanied by code-specific tasks and benchmarks (Zhong et al., 2017). However, in the cases where these benchmarks include natural language, that language is almost invariably English.
Figure 1: Examples in the MCoNaLa dataset, which aims to generate general-purpose Python code snippets from source intents in multiple natural languages.

There are a few exceptions, but most of them either focus on languages of specific domains (Sherborne and Lapata, 2021; Sherborne et al., 2020; Moradshahi et al., 2020) or types of code (Liang et al., 2021), or contain NL intents collected via automatic translation (Li et al., 2021) (Appendix A). However, similarly to how Kwiatkowski et al. (2019) argue that "natural questions" are necessary to appropriately benchmark QA systems, we argue that ensuring the naturalness and coverage of questions is essential for benchmarking code generation systems as well.
A dataset for English code generation based on natural programming questions is the CoNaLa dataset. It is based on natural developer questions harvested from the Stack Overflow (SO) question answering forum. In fact, in addition to English, SO also supports four other languages (Spanish, Portuguese, Japanese, and Russian) that have strong developer communities and engage in non-English programming environments. In this work, we utilize this resource to construct the MCoNaLa dataset, consisting of 341, 210, and 345 manually curated parallel samples with natural intents in Spanish, Japanese, and Russian, along with corresponding Python code snippets. Like CoNaLa, these snippets are collected from language-specific SO sites and annotated by native speakers who are also proficient in the Python programming language.
To provide insights into the state of code generation on this new resource, we conduct comprehensive experiments with three state-of-the-art text generation models in the context of cross-lingual transfer, by unifying training and testing NL via translation (Ruder and Sil, 2021; Shi et al., 2021; Shima and Mitamura, 2010; Hartrumpf et al., 2008), or by utilizing a multilingual NL encoder such as MBART. Our results suggest that cross-lingual NL-to-Code generation is challenging. Among all languages and experiment settings, the highest average BLEU score is 7.28, far behind that of English, which achieves 33.41, presumably because English resembles Python more than other NLs do. In addition, we find that models with task-specific modules and training outperform generic seq2seq models, yet the discrepancies between languages are consistent across all baseline models. In all, our corpus and experiments demonstrate the varied difficulty of the NL-to-Code generation task under different languages, emphasizing the need to develop a language-comprehensive approach to code intelligence.
The MCoNaLa Dataset
Task Definition
Concerning the task of answering natural language questions with machine-executable programs, our focus is to build a benchmark dataset that evaluates models on their ability to encode NL intents in multiple languages and generate code snippets. In each example in Figure 1, the intent above asks how to achieve a particular goal, and the snippet below responds with a piece of Python code.
Annotation Workflow
Our goal is to collect intent-snippet parallel data in multiple natural languages. In this section, we outline the main workflow for data annotation: (1) language source selection, (2) valid SO post identification, and (3) parallel sample annotation.
Language source and selection Besides the English version, Stack Overflow also has forums available in four other languages: Spanish, Portuguese, Japanese, and Russian. Data annotation in each language requires a native speaker of that language, who should also be proficient in both English and Python. Due to the high cost and difficulty of hiring reliable annotators with such a specialized skill set, we only employ one Upwork annotator for each of Spanish, Japanese, and Russian. From the official SO data dump 2 dated March 2021, we obtained all posts in these languages. However, we were unsuccessful in finding a Portuguese-speaking annotator at the time of corpus collection.

Figure 2: Illustration of the annotation process, in which a raw question (e.g., "Verificar que un archivo `fname` exista") is rewritten into an aligned intent-snippet pair (e.g., "Verify that a file `fname` exists").
Identifying how-to questions Following , annotators are first asked to identify valid posts that contain how-to type questions, which are imperative utterances seeking particular goals achievable by code. They are often in the post title or description, such as the example in Figure 2.
Posts are sent in 100-sample batches and then categorized by annotators. To improve annotation efficiency, we bootstrapped an MBART how-to question classifier, first with English examples and then iteratively with multilingual samples; it achieves an accuracy of 72.50%. We then automatically filter out probably invalid posts using this classifier and designate the rest for manual annotation. We collect all valid posts and extract their questions as raw intents for subsequent parallel data annotation.
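A minimal sketch of how such a binary how-to classifier could be bootstrapped is shown below. It fine-tunes a generic multilingual encoder; the checkpoint (XLM-RoBERTa rather than the MBART model actually used here), the toy examples, and the hyperparameters are assumptions for illustration only.

# Hedged sketch: fine-tune a multilingual encoder to flag how-to posts (1) vs. others (0).
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

# posts: (title or description, label) pairs; labels come from the annotated batches.
posts = [("Verificar que un archivo exista", 1), ("¿Por qué falla mi instalación?", 0)]

def collate(batch):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(posts, batch_size=8, shuffle=True, collate_fn=collate)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optim.step()
        optim.zero_grad()

# At filtering time, posts predicted as label 0 are skipped and the rest go to manual annotation.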
Collecting intent-snippet pairs For each post, we ask the annotators to find at most three snippets of Python code that correctly answer the extracted question. However, questions from post title or description are often ambiguous, especially in respective context of answer snippet, such as the example in Figure 2, that the question does not specify the names of "data" and "list" variables to allow precise code implementation. To disambiguate the intent and align it with a snippet, we ask annotators to rewrite the intent by:
(1) specifying variable names appearing in the answer snippet, and (2) need to be surrounded by the ASCII grave accent marks (e.g.,`data`), string literals or file paths should use singular typographic quotation marks (e.g., 'file1.txt', 'https://www.abc.com/'). The final MCoNaLa dataset consists of 341, 210, and 345 intent-snippet pairs in Spanish, Japanese, and Russian. It is noteworthy that the goal of MCoNaLa is to benchmark cross-lingual NL-to-Code generation task and mainly for testing purposes, instead of curating large-scale dataset for training. While its size is relatively small given the collection difficulty, we show that it can reliably inform significant method improvements ( § 3.3). We believe it is important for our dataset to be representative of the naturally occurring questions in respective language environments.
Quality Analysis
To verify that the data quality is as high as intended, we checked 15 random samples from each language subset. Each rater scored the NL intents and code snippets from 1 to 5 based on their correctness and specificity.
The results demonstrate the high quality of our dataset, achieving 4.78, 4.65, and 4.78 points on Spanish, Japanese, and Russian intents, and 4.84, 4.89, and 4.78 points on their corresponding code snippets. Meanwhile, the three raters show high agreement: the Fleiss' Kappa measure is 64.29 for NL intents and 69.49 for code snippets, and both numbers indicate substantial agreement among the raters.
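For reference, Fleiss' Kappa over such 1-5 ratings can be computed as sketched below, here using statsmodels; the toy rating matrix is made up for illustration and is not the actual annotation data.

# Hedged sketch of the agreement computation: rows are rated items, and each item's
# 1-5 scores from the three raters are aggregated into per-category counts.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings[i][j] = score (1-5) given by rater j to item i (illustrative values)
ratings = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [5, 4, 5],
])

counts, _ = aggregate_raters(ratings)  # items x categories count matrix
print(fleiss_kappa(counts))            # agreement across the three raters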
Method
To provide insights into evaluation on MCoNaLa, we demonstrate potential dataset usage in three train-test settings ( § 3.1), and adapt three baseline models, originating from either multilingual (NL) understanding or code understanding, to serve both ends ( § 3.2).
Because the size of MCoNaLa permits testing only, we resort to its larger English counterpart, CoNaLa, for training. CoNaLa contains 2,879 manually annotated samples and 600k samples extracted from the English SO forum and API documents, which can serve as a sufficient source for training. Given this usage, we denote the three test languages as target languages.
Train-Test Settings
We adopt three settings from two paradigms (Hu et al., 2020), as illustrated in Figure 3: (1) translating intents in the train (translate-train) or test (translate-test) sets to bridge the language gap, and (2) using a multilingual encoder to transfer from English to target languages (zero-shot).
For each target language, we can align the languages of training and testing intents and use a monolingual encoder. The translate-train setting translates English intents in CoNaLa into each target language for training and then tests on MCoNaLa samples. The translate-test setting translates MCoNaLa intents in the three target languages into English. Because it is not feasible to manually translate 600K+ intents, we use existing multilingual machine translation (MMT) models to automate translation. We benchmarked several open-source options, as elaborated in § 4.2, and settled on the M2M-124 model used in the FLORES-101 dataset (Goyal et al., 2022).
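A minimal sketch of this automatic translation step is shown below. It uses the publicly released M2M-100 checkpoint in transformers as a stand-in for the larger M2M-124 model used with FLORES-101; the checkpoint name and the example intent are assumptions.

# Hedged sketch of translate-test: translate a Spanish intent into English with an open MMT model.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

def translate(text, src_lang, tgt_lang):
    tokenizer.src_lang = src_lang
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id(tgt_lang)
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# translate-test: target-language intent -> English before feeding the English-trained model
print(translate("Verificar que un archivo `fname` exista", "es", "en"))

The translate-train direction works analogously by swapping the source and target language codes.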
Alternatively, we can train models on English samples and directly evaluate them on MCoNaLa samples in the target languages in a zero-shot manner, which requires models to encode multiple NLs and, further, to transfer the code generation ability from the English context to the target ones.
Baseline Models
We introduce three baseline methods targeting the above train-test settings. We encourage readers to refer to the original papers for more details.
In a monolingual context, models should function in the target languages for translate-train and in English for translate-test. TRANX (Yin and Neubig, 2018) is a BiLSTM-based encoder-decoder model that uses a transition-based abstract syntax parser to map NLs into formal meaning representations (MR) such as Python programs. It is agnostic to input languages and hence can be evaluated in both translated settings. TAE (Norouzi et al., 2021) is the state-of-the-art method on CoNaLa, training a generic transformer with an added target autoencoder (TAE) objective. However, it is built on (English) BERT and intended for English scenarios, and is therefore only tested in translate-test.
As is required by zero-shot evaluation, we adopt a multilingual model, MBART , which is a seq2seq model pre-trained on 25 natural languages including our target ones. Note that MBART can also function in monolingual contexts, for both translate-train and translate-test settings.
Experiment
We train baseline models in their available settings, then tokenize the generated and reference code snippets following Yin and Neubig (2018) to evaluate BLEU-4 scores. We report the average scores of five rounds using different random seeds. In Table 1, first, scores on the target languages average to at most 7.28, much lower than the 33.41 on English, revealing both the similarity of English and Python and the difficulty of generating code from other languages. Second, models with code-specific designs and training (TRANX and TAE) perform better in general; the lower scores of MBART potentially suggest a certain representation gap between NL and PL. Third, results of the two code-specific models show consistent variations across languages: scores are lower for Spanish but rise similarly on Japanese and Russian. As we will discuss in § 4.1, this is possibly due to the distributional gap between languages of varied complexity.
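The evaluation step can be sketched as follows: tokenize generated and reference snippets, then compute corpus-level BLEU-4. The regex tokenizer shown is a simple stand-in for the code tokenization of Yin and Neubig (2018), and the smoothing choice is an assumption.

# Hedged sketch of BLEU-4 scoring over tokenized code snippets.
import re
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def tokenize_code(snippet):
    # split identifiers, numbers, and individual punctuation/operator characters
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", snippet)

references = ["sum(d * 10 ** i for i, d in enumerate(x[::-1]))"]
hypotheses = ["sum(d * 10 ** i for i, d in enumerate(x))"]

refs = [[tokenize_code(r)] for r in references]  # each hypothesis may have multiple references
hyps = [tokenize_code(h) for h in hypotheses]

score = corpus_bleu(refs, hyps, smoothing_function=SmoothingFunction().method3)
print(f"BLEU-4: {100 * score:.2f}")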
Significance Test
To verify the effectiveness of MCoNaLa, we perform significance tests (Dror et al., 2018) to check its capability to reveal significant differences between systems. We conduct paired bootstrap resampling tests for each pair of models in their available settings, using a sample rate of 0.5 and a sample size of 10,000. In both the translate-test and translate-train settings of Table 2, code-specific systems (TRANX and TAE) significantly outperform MBART on Japanese and Russian. However, no significant differences are shown for Spanish, as expected given its relative difficulty. With significance testing, one can obtain reliable results even on a small dataset. While small sizes are not entirely desirable for informative evaluation, we view them as practical reflections of data scarcity, supporting our call for more non-English resources.
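A minimal sketch of the paired bootstrap procedure is given below, following the general recipe in Dror et al. (2018); the per-example data format and the scoring function are assumptions.

# Hedged sketch of paired bootstrap resampling between two systems.
# score_fn maps a list of (hypothesis, reference) pairs to a corpus-level metric such as BLEU.
import random

def paired_bootstrap(pairs_a, pairs_b, score_fn, n_samples=10_000, sample_rate=0.5, seed=0):
    assert len(pairs_a) == len(pairs_b)
    rng = random.Random(seed)
    n = len(pairs_a)
    k = max(1, int(sample_rate * n))
    wins_a = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(k)]  # resample part of the test set with replacement
        if score_fn([pairs_a[i] for i in idx]) > score_fn([pairs_b[i] for i in idx]):
            wins_a += 1
    return wins_a / n_samples  # fraction of resamples in which system A beats system B

System A is then typically called significantly better than system B when this fraction exceeds a threshold such as 0.95.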
Analysis
Variations between Languages
We first study the differences in size and snippet length between the language subsets of MCoNaLa. As listed in Table 3, snippet lengths vary across languages, and the average snippet length in Spanish is around 2.5/1.3 times that in Japanese/Russian. A longer snippet is presumably more complex, suggesting that snippets in Spanish samples are harder to generate and hence that models perform worse on them.
Intent Auto-translation
In § 3.1 we use MMT models for intent translation. To optimize translation quality, we compare the three best performing MMT models: OPUS-MT (Tiedemann and Thottingal, 2020), M2M-100 (Fan et al., 2021), and the M2M-124 model used in FLORES-101 (Goyal et al., 2022). As shown in Table 4, their results are close, but M2M-124 tends to be more stable across languages and baselines. Despite its relative superiority, its translation quality may still lag behind human performance; we give more examples in § 4.3.
Quality of Auto-translation
To better measure the quality of translated intents, we manually check the semantic alignment between the original and translated intents, with the assistance of the Google Translate API and dictionaries. Concretely, we take 20 English CoNaLa intents and check whether their semantics are preserved after being translated into the three target languages (translate-train). We similarly examine 20 MCoNaLa intents in each target language and check their English translations (translate-test). We use the M2M-124 translations given that they yield the best results. As shown in Figure 4, MMT translations are still sub-optimal: they often mistranslate, or even omit, key words. This is especially severe for verbs that indicate certain Python operations. Hence, the translation step may impair intent-snippet alignment, and is likely one of the major factors behind the poor results in the translated settings.
Conclusion
In this work, we extend the task of NL-to-Code generation from English-centric to multilingual scenarios. We establish the MCoNaLa benchmark, which contains NL intent and code snippet pairs in Spanish, Japanese, and Russian. Our benchmark serves the multilingual code generation task, requiring models capable of both multilingual understanding and code synthesis. We conduct systematic experiments on three baseline models and show varying difficulty across languages and settings. We hope to reveal the necessity of developing language-comprehensive approaches to code intelligence, and to serve as a solid test bed for them.
Limitations
Although the MCoNaLa dataset makes a first step towards including more natural languages aside from English, it is currently limited to the languages supported by the Stack Overflow forum, since SO provides the source data for the MCoNaLa creation. This can be mitigated by extending to more languages using programming forums in other languages that serve a similar purpose to SO. Besides, the MCoNaLa dataset only supports surface-form evaluation metrics such as BLEU. Given the executable nature of Python programs, it would be beneficial to support more evaluation metrics such as functional correctness, robustness, and conciseness.
Ethics Statement
The MCoNaLa dataset is built to serve as a testbed for evaluating code generation systems from natural languages extending beyond English, given that an English-centric setting can harm universal accessibility to language technologies.
We hire annotators who are proficient in the target languages and assist them with clearly documented instructions, flexible annotation interfaces (e.g., Google Sheets), and automated methods (e.g., using a neural classifier to filter out possibly invalid cases) to optimize the annotation efficiency. We carefully check, in line with our instructions and standards, the quality of both the question posts given to our annotators and the annotation results returned by them. We emphasize the differences between samples in different languages, because they are natural reflections of the questions that programmers asked in each specific language, similar to many works in fields such as multilingual question answering (Clark et al., 2020) and named entity recognition (Nothman et al., 2013). We reckon that it is of paramount importance to evaluate on data that was originally produced in the target language, and results may be less reliable otherwise.
Nevertheless, with the advances in models capable of generating code from natural language inputs, we should be aware of the potentially harmful usage such as concealing malicious code (Wallace et al., 2020), or generating code with security vulnerabilities (Verdi et al., 2020;Pearce et al., 2021).
Figure 3: Example usage of the original English and multilingual samples in three settings.
Figure 4: Examples showing that the translation errs on or omits critical words in the original intent.
2 https://archive.org/details/stackexchange

(Figure 3 shows parallel examples for the three settings: the English CoNaLa sample "Concatenate elements of a list `x` of multiple integers to a single integer." with snippet sum(d * 10 ** i for i, d in enumerate(x[::-1])) and its Spanish translation, used for translate-train; the Spanish MCoNaLa sample "Cómo sumar el campo `precio` de todos los elementos del modelo `Precompra` en Django?" with snippet totaldos = Precompra.objects.aggregate(Sum(precio)).values()[0]) and its English translation, "How to sum the `precio` field of all elements of the `Precompra` model in Django?", used for translate-test; the zero-shot setting uses the original samples directly.)
Table 1: BLEU scores of baselines for various train-test settings in English (en) and target languages (es, ja, ru).

Table 2: Significance testing results between each pair of baseline models. '-' marks the model not in the pair.

Table 3: Data size and snippet length (in number of tokens) of MCoNaLa samples between target languages.

Table 4: Comparing MMT models under translate-test.
Acknowledgements
We thank all the annotators for their hard work. This work was supported by the National Science Foundation under grant number 1815287.
A transformer-based approach for source code summarization. Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang, 10.18653/v1/2020.acl-main.449Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsOnlineWasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based ap- proach for source code summarization. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4998-5007, On- line. Association for Computational Linguistics.
A convolutional attention network for extreme summarization of source code. Miltiadis Allamanis, Hao Peng, Charles Sutton, Proceedings of the 33nd International Conference on Machine Learning. the 33nd International Conference on Machine LearningNew York City, NY, USAJMLR.org48Workshop and Conference ProceedingsMiltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2091-2100. JMLR.org.
. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. ArXiv preprint, abs/2108.07732Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. ArXiv preprint, abs/2108.07732.
Sniff: A search engine for java using freeform queries. Sudeep Shaunak Chatterjee, Koushik Juvekar, Sen, International Conference on Fundamental Approaches to Software Engineering. SpringerShaunak Chatterjee, Sudeep Juvekar, and Koushik Sen. 2009. Sniff: A search engine for java using free- form queries. In International Conference on Funda- mental Approaches to Software Engineering, pages 385-400. Springer.
. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. ArXiv preprint, abs/2107.03374Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large lan- guage models trained on code. ArXiv preprint, abs/2107.03374.
TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, Jennimaria Palomaki, 10.1162/tacl_a_00317Transactions of the Association for Computational Linguistics. 8Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typo- logically diverse languages. Transactions of the As- sociation for Computational Linguistics, 8:454-470.
Unsupervised cross-lingual representation learning at scale. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov, 10.18653/v1/2020.acl-main.747Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsAlexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.
Multilingual bert readme. Jacob Devlin, Jacob Devlin. 2018. Multilingual bert readme.
The hitchhiker's guide to testing statistical significance in natural language processing. Rotem Dror, Gili Baumer, Segev Shlomov, Roi Reichart, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392.
Understanding back-translation at 270 scale. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, 10.18653/v1/D18-1045Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsSergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at 270 scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.
Beyond english-centric multilingual machine translation. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Journal of Machine Learning Research. 22107Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric mul- tilingual machine translation. Journal of Machine Learning Research, 22(107):1-48.
The flores-101 evaluation benchmark for low-resource and multilingual machine translation. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'aurelio Ranzato, Francisco Guzmán, Angela Fan, Transactions of the Association for Computational Linguistics. 10Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng- Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Kr- ishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The flores-101 evaluation benchmark for low-resource and multilingual ma- chine translation. Transactions of the Association for Computational Linguistics, 10:522-538.
Efficient question answering with question decomposition and multiple answer streams. Sven Hartrumpf, Ingo Glöckner, Johannes Leveling, Workshop of the Cross-Language Evaluation Forum for European Languages. SpringerSven Hartrumpf, Ingo Glöckner, and Johannes Level- ing. 2008. Efficient question answering with ques- tion decomposition and multiple answer streams. In Workshop of the Cross-Language Evaluation Forum for European Languages, pages 421-428. Springer.
. Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Songet al. 2021. Measuring coding challenge competence with apps. ArXiv preprint, abs/2105.09938Dan Hendrycks, Steven Basart, Saurav Kadavath, Man- tas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021. Measuring coding challenge competence with apps. ArXiv preprint, abs/2105.09938.
XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson, PMLRProceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning2020Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi- task benchmark for evaluating cross-lingual gener- alisation. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.
Xing Hu, Ge Li, Xin Xia, David Lo, Shuai Lu, and Zhi Jin. 2018. Summarizing source code with transferred API knowledge. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 2269-2275. ijcai.org.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073-2083, Berlin, Germany. Association for Computational Linguistics.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643-1652, Brussels, Belgium. Association for Computational Linguistics.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2950-2962, Online. Association for Computational Linguistics.
Qingyuan Liang, Zeyu Sun, Qihao Zhu, Wenjie Zhang, Lian Yu, Yingfei Xiong, and Lu Zhang. 2021. Lyra: A benchmark for turducken-style code generation. ArXiv preprint, abs/2108.12144.
Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 599-609, Berlin, Germany. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. ArXiv preprint, abs/2102.04664.
Mehrad Moradshahi, Giovanni Campagna, Sina Semnani, Silei Xu, and Monica Lam. 2020. Localizing open-ontology QA semantic parsers in a day using machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5970-5983, Online. Association for Computational Linguistics.
Dana Movshovitz-Attias and William W. Cohen. 2013. Natural language models for predicting programming comments. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 35-40, Sofia, Bulgaria. Association for Computational Linguistics.
Sajad Norouzi, Keyi Tang, and Yanshuai Cao. 2021. Code generation from natural language with less prior knowledge and more monolingual data. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 776-785, Online. Association for Computational Linguistics.
Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning multilingual named entity recognition from Wikipedia. Artificial Intelligence, 194:151-175.
Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation. In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 574-584. IEEE.
Sebastiano Panichella, Jairo Aponte, Massimiliano Di Penta, Andrian Marcus, and Gerardo Canfora. 2012. Mining source code descriptions from developer communications. In 2012 20th IEEE International Conference on Program Comprehension (ICPC), pages 63-72. IEEE.
Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri. 2021. An empirical cybersecurity evaluation of GitHub Copilot's code contributions. ArXiv preprint, abs/2108.09293.
Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 878-888, Beijing, China. Association for Computational Linguistics.
Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139-1149, Vancouver, Canada. Association for Computational Linguistics.
Sebastian Ruder and Avi Sil. 2021. Multi-domain multilingual question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 17-21, Punta Cana, Dominican Republic & Online. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
Tom Sherborne and Mirella Lapata. 2021. Zero-shot cross-lingual semantic parsing. ArXiv preprint, abs/2104.07554.
Tom Sherborne, Yumo Xu, and Mirella Lapata. 2020. Bootstrapping a crosslingual semantic parser. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 499-517, Online. Association for Computational Linguistics.
Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2021. Cross-lingual training with dense retrieval for document retrieval. ArXiv preprint, abs/2109.01628.
Hideki Shima and Teruko Mitamura. 2010. Bootstrap pattern learning for open-domain CLQA. In Proceedings of NTCIR-8 Workshop Meeting.
Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Chen, Sneha Kudugunta, Naveen Arivazhagan, and Yonghui Wu. 2020. Leveraging monolingual data with self-supervision for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2827-2835, Online. Association for Computational Linguistics.
Jörg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT - building open translation services for the world. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 479-480, Lisboa, Portugal. European Association for Machine Translation.
Morteza Verdi, Ashkan Sami, Jafar Akhondali, Foutse Khomh, Gias Uddin, and Alireza Karami Motlagh. 2020. An empirical study of C++ vulnerabilities in crowd-sourced code examples. IEEE Transactions on Software Engineering.
Eric Wallace, Tony Z. Zhao, Shi Feng, and Sameer Singh. 2020. Concealed data poisoning attacks on NLP models. ArXiv preprint, abs/2010.12563.
Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696-8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Edmund Wong, Taiyue Liu, and Lin Tan. 2015. CloCom: Mining existing source code for automatic comment generation. In 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), pages 380-389. IEEE.
Edmund Wong, Jinqiu Yang, and Lin Tan. 2013. AutoComment: Mining question and answer sites for automatic comment generation. In 2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 562-567. IEEE.
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5786-5796, Florence, Italy. Association for Computational Linguistics.
Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. 2020. Incorporating external knowledge through pre-training for natural language to code generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6045-6052, Online. Association for Computational Linguistics.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
Ziyu Yao, Daniel S. Weld, Wei-Peng Chen, and Huan Sun. 2018. StaQC: A systematically mined question-code dataset from Stack Overflow. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pages 1693-1703. ACM.
Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018a. Learning to mine aligned code and natural language pairs from Stack Overflow. In International Conference on Mining Software Repositories, MSR, pages 476-486. ACM.
Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018b. Learning to mine aligned code and natural language pairs from Stack Overflow. In 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR), pages 476-486. IEEE.
Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 7-12, Brussels, Belgium. Association for Computational Linguistics.
Alexey Zagalsky, Ohad Barzilay, and Amiram Yehudai. 2012. Example Overflow: Using social media for code recommendation. In 2012 Third International Workshop on Recommendation Systems for Software Engineering (RSSE), pages 38-42. IEEE.
Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. ArXiv preprint, abs/1709.00103.
A Related Work

Natural Language to Code Generation Datasets There have been several benchmark datasets for NL-to-Code generation, such as Hearthstone (Ling et al., 2016), Django (Oda et al., 2015), CONCODE (Iyer et al., 2018), and CoNaLa (Yin et al., 2018a). Other examples include datasets for problem solving, such as HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and APPS (Hendrycks et al., 2021). A number of methods have been proposed to mine intent-snippet pairs for the purpose of code search, summarization, or generation. While our work falls in the line of mining from SO (Wong et al., 2013; Iyer et al., 2016; Yao et al., 2018; Yin et al., 2018b), other work also attempts to exploit other data sources such as API documentation (Chatterjee et al., 2009; Movshovitz-Attias and Cohen, 2013; Xu et al., 2020), code comments (Wong et al., 2015), specialized sites (Quirk et al., 2015), and developer communications (Panichella et al., 2012). One prior methodology to automatically collect large-scale parallel data is using heuristics to extract intent-snippet pairs (Chatterjee et al., 2009; Wong et al., 2013; Zagalsky et al., 2012), but this often results in compromised data quality (Xu et al., 2020). Our work resorts to a manual annotation strategy that often yields accurately aligned intent-snippet pairs.

Multilingual Learning While the bulk of code-related tasks have their NL components in English, program developers native in other languages cannot enjoy the advances in code intelligence techniques, leading to the current lacunae in multilingual learning. Our work intends to mitigate this gap by facilitating NL-to-Code generation in multiple languages beyond English. To enable language understanding across multiple languages, a number of works propose to train language models with corpora in multiple languages (Devlin, 2018; Liu et al., 2020; Conneau et al., 2020; Xue et al., 2021). In addition to multilingual training, other data augmentation techniques commonly used in machine translation (MT), such as back-translation (Edunov et al., 2018), monolingual data (Sennrich et al., 2016; Siddhant et al., 2020), or generalized data augmentation (Xia et al., 2019), also inspired our experiments. However, these techniques have rarely been utilized for NL-conditioned code generation. We present preliminary attempts in the experiments.
CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions
Runtao Liu runtao219@gmail.com
Peking University
Chenxi Liu cxliu@jhu.edu
Johns Hopkins University
Yutong Bai ytongbai@gmail.com
Northwestern Polytechnical University
Alan Yuille alan.l.yuille@gmail.com
Johns Hopkins University
Referring object detection and referring image segmentation are important tasks that require joint understanding of visual information and natural language. Yet there has been evidence that current benchmark datasets suffer from bias, and current state-of-the-art models cannot be easily evaluated on their intermediate reasoning process. To address these issues and complement similar efforts in visual question answering, we build CLEVR-Ref+, a synthetic diagnostic dataset for referring expression comprehension. The precise locations and attributes of the objects are readily available, and the referring expressions are automatically associated with functional programs. The synthetic nature allows control over dataset bias (through sampling strategy), and the modular programs enable intermediate reasoning ground truth without human annotators.In addition to evaluating several state-of-the-art models on CLEVR-Ref+, we also propose IEP-Ref, a module network approach that significantly outperforms other models on our dataset. In particular, we present two interesting and important findings using IEP-Ref:(1) the module trained to transform feature maps into segmentation masks can be attached to any intermediate module to reveal the entire reasoning process step-by-step; (2) even if all training data has at least one object referred, IEP-Ref can correctly predict no-foreground when presented with false-premise referring expressions. To the best of our knowledge, this is the first direct and quantitative proof that neural modules behave in the way they are intended. 1
Introduction
There has been significant research interest in the joint understanding of vision and natural language. While image captioning [17,5,25,22] focuses on generating a sentence with the image being the only input, visual question answering (VQA) [2,6,38] and referring expressions (REF) [24,13] require comprehending both an image and a sentence before generating an output. In this paper, we focus on referring expressions, i.e., identifying the particular objects (in the form of a segmentation mask or bounding box) in a given scene from natural language.
In order to study referring expressions, various datasets have been proposed [24,35,18]. These are real-world images annotated by crowdsource workers. The advantage of these datasets is that they, to a certain extent, reflect the complexity and nuances of the real world. Yet inevitably, they also have limitations. First, they usually exhibit strong biases that may be exploited by the models [3]. Roughly speaking, this means simply selecting the salient foreground object (i.e., discarding the referring expression) will yield a much higher baseline than random. This casts doubts on the true level of understanding within current REF models. Second, evaluation can only be conducted on the final segmentation mask or bounding box, but not the intermediate step-by-step reasoning process. For example, for the referring expression "Woman to the left of the red suitcase", a reasonable reasoning process should be first find all suitcases in the image, then identify the red one among them, finally segment the woman to its left. Clearly this requires significantly more high-quality annotations, which are currently unavailable and hard to collect.
To address these concerns and echo similar efforts in visual question answering (i.e., CLEVR [15]), we propose CLEVR-Ref+, a synthetic diagnostic dataset for referring expressions. The advantage of using a synthetic dataset is that we have full control over the scene, and dataset bias can be minimized by employing a uniform sampling strategy. Also, the referring expressions are now automatically annotated with the true underlying reasoning process, so a step-by-step analysis becomes much more plausible.
We take care in constructing CLEVR-Ref+ to make sure it is well adapted and applicable to the referring expression task. First, we turn the original questions in CLEVR into their corresponding referring expression format. Second, we change the output space from textual answers (in the form of a word) to referred objects (in the form of segmentation mask or bounding box). Third, we analyzed statistics from real-world REF datasets and found that there are some common types of referring expressions (e.g., "The second sphere from left") that are not included in CLEVR templates. In our CLEVR-Ref+, we add support for these types of expressions to better match the variety of referring expressions used in the real world.

Figure 1: Examples from our CLEVR-Ref+ dataset, e.g., "The big thing(s) that are behind the second one of the big thing(s) from front and to the right of the first one of the large sphere(s) from left" and "Any other things that are the same size as the fifth one of the thing(s) from right". We use the same scenes as those provided in CLEVR [15]. Instead of asking questions about the scene, we ask the model to either return one bounding box (as illustrated on the left) or return a segmentation mask (could potentially be multiple objects; illustrated on the right) based on the given referring expression.
We tested several state-of-the-art referring expression models on our CLEVR-Ref+ dataset. This includes both those designed for referring segmentation [21] and detection [36,34]. In addition to evaluating the overall IoU and accuracy as in previous datasets, we can now do a more detailed breakdown and analysis in terms of sub-categories. For example, we found that it is especially hard for the models to understand ordinality. This could point to important research directions in the future.
Besides diagnosing these existing models, we also propose IEP-Ref, a Neural Module Network [1] solution based on IEP [16]. Experiments show that the IEP-Ref model achieved excellent performance on CLEVR-Ref+ with its explicit, step-by-step functional program and module network execution engine, suggesting the importance of compositionality. Very interestingly, we found that the module trained on translating the last module output to a segmentation mask is general, and can produce excellent human-interpretable segmentation masks when attached to intermediate module outputs, revealing the entire reasoning process. We believe ours is the first to show clean visualization of the visual reasoning process carried out by neural module networks, as opposed to gradient norms [16] or soft attention maps [27,9].
In sum, our paper makes the following contributions:
• We construct CLEVR-Ref+, a synthetic diagnostic dataset for referring expression tasks that complements existing real-world datasets.
• We test and diagnose several state-of-the-art referring expression models on CLEVR-Ref+, including our proposed IEP-Ref that explicitly captures compositionality.
• The segmentation module trained in IEP-Ref can be trivially plugged in all intermediate steps in the module network to produce excellent segmentation masks that clearly reveal the network's reasoning process.
Related Works
Referring Expressions
Referring expressions are sentences that refer to specific objects in an image. Understanding referring expressions has important applications in robotics and human-computer interaction. In recent years, many deep learning models have been developed for this task.
Several works focused on detection, i.e. returning one bounding box containing the referred object. [24,13] adapted image captioning for this task by scoring each bounding box proposal with a generative captioning model. [32] learned the alignment between the description and image region by reconstructing the description using an attention mechanism. [35,29] studied the importance of context for referring expressions. [23] used a discriminative comprehension model to improve referring expression generation. [36] showed additional gain by incorporating reinforcement learning. [11,34] used learned parser and module networks to better match the structured semantics.
There are also works focusing on segmentation, i.e. returning the segmentation mask. [12] used FCN feature concatenated with LSTM feature to produce pixel-wise binary segmentation. [21] used a convolutional LSTM in addition to the language-only LSTM to facilitate propagation of intermediate segmentation beliefs. [20,26] improved upon [21] by making more architectural improvements.
Dataset Bias and Diagnostic Datasets
In visual question answering, despite exciting models being proposed and accuracy on benchmark datasets being steadily improved, there has been serious concern over the dataset bias problem [37,7], meaning that models may be heavily exploiting the imbalanced distribution in the training/testing data. More recently, [3] showed that dataset bias also exists in referring expression datasets [24,18,35]. For example, [3] reported that the performance when discarding the referring expression and relying solely on the image is significantly higher than random. Ideally the dataset should be unbiased so that the performance faithfully reflects the model's true level of understanding. But this is very hard to control when working with real-world images and human-annotated referring expressions. A possible solution is to use synthetic datasets. Indeed this is the path taken by CLEVR [15], a diagnostic dataset for VQA. There, objects are placed on a 2D plane and only have a small number of choices in terms of shape, color, size, and material. The question-answer pairs are also synthesized using carefully designed templates. Together with a uniform sampling strategy, this design can mitigate dataset bias and reveal the model's ability to understand compositionality. We construct our CLEVR-Ref+ dataset by repurposing CLEVR towards the referring expression task.
Several approaches now achieve near-perfect accuracy on CLEVR [16,10,30,33,27,14,9]. In addition to reporting the VQA accuracy, they typically try to interpret the visual reasoning process through visualization. However, the quality of these visualizations does not match the high VQA accuracy. We suspect the primary reason is that the domain these models are trained for (i.e. a textual answer) is different from the domain these models are diagnosed on (i.e. attention over the image). Fortunately, in referring expressions these two domains are very much interchangeable.
Note that CLEVR was also adapted towards referring expression in [9], but they focused on facilitating VQA, instead of introducing extensions (Section 3.3), evaluating state-of-the-art models (Section 4.1), and directly facilitating the diagnosis of visual reasoning (Section 4.3).
The CLEVR-Ref+ Dataset
CLEVR-Ref+ uses the exact same scenes as CLEVR (70K images in train set, 15K images in validation and test set), and every image is associated with 10 referring expressions. Since CLEVR is a VQA dataset, we began by changing the questions to referring expressions (Section 3.1), and the answers to referred objects (Section 3.2). We then made important additions to the set of modules (Section 3.3) as well as necessary changes to the sampling procedure (Section 3.4). Finally, we made the distinction whether more than one object is being referred (Section 3.5).
From Question to Referring Expression
Templates are provided in CLEVR so that questions and the functional programs associated with them can be generated at the same time. We notice that in many cases, part of the question is indeed a referring expression, as we need to first identify objects of interest before asking about their property (e.g. color or number). In Table 1 we provide examples of how we change question templates into their corresponding referring expression templates, usually by selecting a subset. The associated functional programs are also adjusted accordingly. For example, for "How many" questions, we simply remove the Count module at the end.
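For instance, a question program can be turned into a referring-expression program by dropping the trailing answer-producing module, roughly as in the sketch below. This is only an illustration with assumed module names, not the actual CLEVR-Ref+ template machinery, which is more involved.

```python
# Minimal sketch: a functional program is a list of module calls, and question
# programs end in an answer-producing module such as "count" or "exist".
# Dropping that final module leaves a program whose output is the referred set.
ANSWER_MODULES = {"count", "exist", "query_color", "query_shape",
                  "query_size", "query_material"}  # assumed module names

def question_to_referring_program(program):
    """Turn a CLEVR-style question program into a referring-expression program."""
    if program and program[-1]["function"] in ANSWER_MODULES:
        return program[:-1]   # e.g. "How many cyan cubes..." -> "The cyan cubes"
    return list(program)

question_program = [
    {"function": "scene", "inputs": []},
    {"function": "filter_color", "inputs": [0], "value": "cyan"},
    {"function": "filter_shape", "inputs": [1], "value": "cube"},
    {"function": "count", "inputs": [2]},
]
print(question_to_referring_program(question_program))
```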
The original categories "Compare Integer" and "Comparison" were about comparing properties of two groups of referred objects, so they do not contribute additional referring expression patterns. Therefore they are not included in the templates for CLEVR-Ref+.
From Answer to Referred Objects
In referring expressions, the output is no longer a textual answer, but a bounding box or segmentation mask.
Since we know the exact 3D locations and properties of objects in the scene, we can follow the ground truth functional program associated with the referring expression to identify which objects are being referred. In fact we can do this not only at the end (also available in real-world datasets), but also at every intermediate step (not available in real-world datasets). This will become useful later when we do step-by-step inspection and evaluation of the visual reasoning process. After finding the referred objects, we project them back to the image plane to get the ground truth bounding box and segmentation mask. This automatic annotation was done through rendering with the software Blender. For occluded objects, only the visible part is treated as ground truth.
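A minimal sketch of how the referred set can be recovered at every step by executing a program over the ground-truth scene annotations is given below. Attribute names and the program format are illustrative placeholders, and the final projection of referred objects to masks via Blender rendering is omitted.

```python
# Hedged sketch: execute a chain-structured referring program over the
# ground-truth scene graph to obtain the referred object indices at each step.
scene = [
    {"color": "red",  "shape": "sphere", "size": "large"},
    {"color": "blue", "shape": "cube",   "size": "small"},
]

def execute(program, scene):
    """Return the set of referred object indices after each program step."""
    outputs = []
    for step in program:
        fn = step["function"]
        if fn == "scene":
            current = set(range(len(scene)))
        elif fn.startswith("filter_"):
            attr = fn[len("filter_"):]
            prev = outputs[step["inputs"][0]]
            current = {i for i in prev if scene[i][attr] == step["value"]}
        else:
            # relate, same_*, and/or, ordinal, visible ... omitted in this sketch
            raise NotImplementedError(fn)
        outputs.append(current)
    return outputs   # intermediate ground truth at every step, final set at the end

program = [{"function": "scene", "inputs": []},
           {"function": "filter_shape", "inputs": [0], "value": "cube"}]
print(execute(program, scene))
```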
Module Additions
We hope the referring expressions that we generate are representative of those used in the real world. However, since the task is no longer the same, we suspect that there may be some frequent referring patterns missing in the templates directly inherited from CLEVR. To this end, we analyzed statistics from a real-world referring expression dataset, RefCOCO+ [35], as shown in Table 2.
We began by sorting the words in these referring expressions by their frequency. Then, starting with the most frequent word, we empirically cluster these words into categories. Not surprisingly, nouns that represent object or human are the most common. However, going down the list, we found that the "ordinal" (e.g. "The second woman from left") and "visible" (e.g. "The barely seen backpack") categories recall more than 10% of all sentences, but are not included in the existing templates. Moreover, it is indeed possible to define them using a computer program, because there is no ambiguity in meaning. We add these two new modules into the CLEVR-Ref+ function catalog.
In a functional program, these two modules may be inserted whenever color, material, size, or shape is being described. As an example, "the red sphere" may be equivalently described as "the third sphere from left" or "the partially visible red object". In our dataset, we define an object to be partially visible if foreground objects' mask occupies more than 20% of its bounding box area. For an object to be fully visible, this value must be exactly 0. We do not describe visibility when there is an ambiguous case (i.e. this value is between 0 and 0.2) in the scene.
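A small sketch of this visibility labeling rule follows. The occlusion ratio is assumed to be precomputed from the rendered masks and boxes; the thresholds follow the text above, and this is not the actual generation code.

```python
# Minimal sketch of the visibility rule. `occlusion_ratio` is assumed to be the
# fraction of an object's bounding-box area covered by the masks of objects in
# front of it.
def visibility_label(occlusion_ratio):
    if occlusion_ratio == 0.0:
        return "fully visible"
    if occlusion_ratio > 0.2:
        return "partially visible"
    return None  # ambiguous case (0 < ratio <= 0.2)

def scene_allows_visibility(occlusion_ratios):
    # The generator skips visibility descriptions entirely when any object in
    # the scene falls in the ambiguous range.
    return all(visibility_label(r) is not None for r in occlusion_ratios)

print(visibility_label(0.0), visibility_label(0.35), scene_allows_visibility([0.0, 0.1]))
```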
Generation Procedure
Generating a referring expression for a scene is conceptually simple and intuitive. The process may be summarized as the following few steps:
1. Randomly choose a referring expression family 2 .
2. Randomly choose a text template from this family.
3. Follow the functional program and select random values when encountering template parameters 3 .
4. Reject when certain criteria fail, that is, the sampled referring expression is inappropriate for the given scene; return when the entire functional program follows through.
We largely follow the generation procedure of CLEVR, with a few important changes:
• To balance the number of referring expressions across different categories (those listed in Table 1), we double the probability of being sampled in categories with a small number of referring expression families.
• When describing the attributes for a set of objects, we do not use Ordinal and Visible at the same time. This is because referring to an object as "The second partially visible object from left" seems too peculiar and rare, and there usually exist more natural alternatives.
• Originally, when describing the attributes for a set of objects, four fair coins were flipped to determine whether color, material, size, and shape will be included. As a result, usually multiple attributes are selected, and a very small number of objects survive these filters. We empirically found that this makes it quite easy for the system to select the correct object simply from the attributes that directly describe the target object(s). To remedy this, we first enumerate all possible combinations of these attributes, and calculate how many objects will survive for each possibility. We then uniformly sample from these possible numbers of survivors, before doing another uniform sampling to find the combination of attributes. This ensures a larger variance in the number of objects after each set of filtering, and prevents near-degenerate solutions (a rough sketch of this sampling follows this list).
• At the end of the functional program, we verify that at least one object is being referred; reject otherwise.
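The two-stage attribute sampling mentioned above could look roughly like the following. This is an illustrative sketch only; the real generator also handles Ordinal and Visible and operates on template parameters rather than plain dicts.

```python
import random
from itertools import combinations

ATTRIBUTES = ["color", "material", "size", "shape"]

def sample_attribute_combination(objects, target):
    """objects: list of attribute dicts; target: attributes of the described object."""
    def survivors(combo):
        return sum(all(o[a] == target[a] for a in combo) for o in objects)

    combos = [c for r in range(len(ATTRIBUTES) + 1)
              for c in combinations(ATTRIBUTES, r)]
    by_count = {}
    for c in combos:
        by_count.setdefault(survivors(c), []).append(c)
    count = random.choice(list(by_count))    # uniform over possible survivor counts
    return random.choice(by_count[count])    # uniform over combinations with that count

objects = [{"color": "red", "material": "metal", "size": "large", "shape": "cube"},
           {"color": "red", "material": "rubber", "size": "small", "shape": "sphere"},
           {"color": "blue", "material": "metal", "size": "large", "shape": "cube"}]
print(sample_attribute_combination(objects, objects[0]))
```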
Multi-Object and Single-Object Referring
As explained in Section 3.4, each referring expression in CLEVR-Ref+ may refer to one or more objects in the scene. We believe this is the more general setting, and models should have the flexibility to handle various number of objects being referred. This is already handled and supported by referring image segmentation systems. However, we notice that detection based systems are usually designed to return a single object instead of multiple objects, presumably because this was how the detection datasets [24,35] were created. As a result, for detection based methods, we evaluate on the subset of CLEVR-Ref+ where only a single object is referred. This subset contains a total of 222,569 referring expressions (32% of the entire dataset).
Experiments
Models and Implementation Details
In all models we resize the input image to 320×320 to set up a fair comparison. Publicly available code for these models is used with minimal changes to adapt to our CLEVR-Ref+ dataset. The following referring expression models are studied and tested:

Speaker-Listener-Reinforcer (SLR) [36] This is a detection model that includes a generative model (speaker), a discriminative model (listener), as well as a reinforcement learning component that makes further improvement. Before training the main model, the visual-language similarity model needs to be trained first. We use the Adam optimizer [19], learning rate 4e-4, and batch size 32 for both the visual-language similarity model and the main model.

MAttNet [34] This is also a detection model, which uses three modular networks to capture the subject, location, and relationship features respectively. A soft attention mechanism is used to return the overall score of a candidate region. We use learning rate 4e-4 and batch size 15.

Recurrent Multimodal Interaction (RMI) [21] This is a segmentation model. In addition to concatenating the referring expression LSTM embedding with the image features, RMI also uses a convolutional LSTM to facilitate propagation of segmentation beliefs when reading in the referring expression word-by-word. We use the Adam optimizer, learning rate 2.5e-4, batch size 3, and weight decay 5e-4.
IEP-Ref
This is a segmentation model that we adapt from IEP [16], which was designed for VQA. The idea is to use a LSTM program generator to translate the referring expression into a structured series of modules, each of which is parameterized by a small CNN. By executing this dynamically constructed neural network (with a special Segment module at the end; see supplementary material for its architecture), IEP-Ref imitates the underlying visual reasoning process. For input visual features, we use the last layer of the conv4 stage of ResNet101 [8] pretrained on ImageNet [4], which is of size 1024 × 20 × 20. Following [16], this part is not finetuned. We tried three settings that use 9K/18K/700K ground truth programs to train the LSTM program generator (Adam optimizer, learning rate 5e-4, batch size 64; 20,000 iterations for the 9K setting, 32,000 iterations for the 18K and 700K setting). The accuracies of the predicted programs are 0.873, 0.971, 0.993 respectively. For the fourth setting, we simply use the ground truth program 4 . The execution engine is trained for 30 epochs using learning rate 1e-4 and Adam optimizer.
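As a rough illustration of how such a dynamically assembled network can be executed, consider the sketch below. Module implementations and the program format are simplified placeholders, not the actual IEP-Ref code.

```python
import torch
import torch.nn as nn

# Simplified sketch: each program function name indexes a small CNN that maps
# 128-channel feature maps to 128-channel feature maps; "scene" consumes the
# (frozen) ResNet image features, and the Segment head produces the final mask.
class UnaryModule(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

def execute_program(program, image_features, modules, segment):
    """program: list of {'function': name, 'inputs': [indices of earlier steps]}."""
    outputs = []
    for step in program:
        if step["function"] == "scene":
            out = modules["scene"](image_features)   # e.g. project 1024 -> 128 channels
        else:
            out = modules[step["function"]](*[outputs[i] for i in step["inputs"]])
        outputs.append(out)
    return segment(outputs[-1])                      # 1-channel referring mask
```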
Results and Analysis
Overall Evaluation
The experimental results are summarized in Table 3. Detection models are evaluated by accuracy (i.e. whether the prediction selects the correct bounding box among given candidates), where MAttNet performs favorably against SLR. Segmentation models are evaluated by Intersection over Union (IoU), where IEP-Ref performs significantly better than RMI. This suggests the importance of modeling compositionality within the referring expression. We now present a more detailed analysis of various aspects.

Figure 2: Analyzing the basic referring ability of different models. "Include" means the average performance if a module is involved in the referring process. "Exclude" means otherwise. As a result, high "exclude" and low "include" performance suggests that this module is more challenging to learn, and vice versa.
Basic Referring Ability
Here we start with the easiest form: referring by direct description of object attributes (e.g., "The big blue sphere"). Concretely, this corresponds to the "0-Relate" subset.
In CLEVR-Ref+, there are totally 6 types of attributes that may help us locate specific objects: color, size, shape, material, ordinality, and visibility. In Figure 2 we show the average detection accuracy/segmentation IoU of various methods on "0-Relate" referring expressions that either contain or not contain a specific type of module.
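One way to compute such an include/exclude breakdown from per-expression results is sketched below; the field names are assumed for illustration and are not from any released evaluation code.

```python
# Hedged sketch of the include/exclude breakdown: for each attribute module
# type, average the per-expression score (accuracy or IoU) over expressions
# whose ground-truth program does or does not contain that module.
def include_exclude_breakdown(results, module_types):
    """results: list of dicts with 'modules' (set of module names) and 'score'."""
    table = {}
    for m in module_types:
        inc = [r["score"] for r in results if m in r["modules"]]
        exc = [r["score"] for r in results if m not in r["modules"]]
        table[m] = (sum(inc) / max(len(inc), 1), sum(exc) / max(len(exc), 1))
    return table  # module -> (include average, exclude average)
```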
Among detection models, we found that accuracy is higher when the referring expression contains descriptions of color, shape, and visibility. A reasonable conjecture is that these concepts are easier to learn compared with the others. However, for segmentation, the performance gaps between "exclude" and "include" are not as significant.
Though it is unclear which concept is the easiest to learn, there seems little dispute that ordinality is the hardest. In particular, for RMI, IoU is 0.91 if the expression does not require ordinality and 0.27 when it does. Other models do not suffer as much, but also experience significant drops. We suspect this is because ordinality requires the global context, whereas the others are local properties.
Spatial Reasoning Ability
Other than directly describing the attributes, it is also common to refer to an object by its spatial location. Here we diagnose whether referring expression models can understand (potentially multiple steps of) relative spatial relationship, for example "The object that is left to the red cube". In Table 3, this corresponds to the "{0, 1, 2, 3}-Relate" columns. Results are shown in Figure 3.
In general, we observe a small drop when referring expressions start to include spatial reasoning. However, there does not seem to be significant difference among referring expressions that require 1, 2, 3 steps of spatial reasoning. This seems to suggest that once the model has grasped spatial reasoning, there is little trouble in successfully applying it multiple times.
Different Reasoning Topologies
There are two referring expression topologies in CLEVR-Ref+: chain-structured and tree-structured. Intuitively, a chain structure has a single reasoning path to follow, whereas a tree structure requires following two such paths before merging. In Figure 4 we compare performance on referring expressions with two sequential spatial relationships vs. one on each branch joined with AND. These two templates have roughly the same length and complexity, so the comparison focuses on topology.
Though not consistent among the four models, tree-structured referring expressions are generally harder than chain-structured ones. This agrees with the findings in [15].
Different Relation Types
There are two kinds of relationships in CLEVR-Ref+. One is spatial relationship that includes phrases like "left of", "right of", "in front of", "behind" (discussed in Section 4.2.3). The other is same-attribute relationship that requires recognizing and memorizing particular attributes of another object, e.g. "The large block(s) that have the same color as the metal sphere".
In Figure 5 we study whether the relation type will make a difference in performance. We compare the "2-Relate" column with the "Same" column in Table 3, again because they have roughly the same length and complexity. All models perform much worse on the same-attribute relationship type, suggesting that this is a hard concept to grasp. Similar to ordinality, same-attribute requires global context.
Step-By-Step Inspection of Visual Reasoning
All the results discussed in Section 4.2 are about the endpoint of the visual reasoning process. We argue that in order to trust the predictions made by the referring expression system, it is also important to make sure that the intermediate reasoning steps make sense. CLEVR-Ref+ is suitable because: (1) the semantics of the referring expressions is modularized, and (2) the referring ground truth at all intermediate steps can be obtained automatically (i.e. no human annotators needed).
In training our IEP-Ref model, there is always a Segment module at the end, transforming the 128-channel feature map into a 1-channel segmentation mask. When testing, we simply attach the trained Segment module to the output of all intermediate modules. This is possible because all modules have the same number of input channels and output channels (128). This technique would not help in the VQA setting, because there the ending modules (e.g. Count, Equal) discard the spatial dimensions needed for visualization.
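Extending the execution sketch given earlier (same illustrative program format), the step-by-step visualization amounts to reusing the trained Segment head on every intermediate output:

```python
# Hedged sketch: because every module outputs a 128-channel feature map, the
# trained Segment head can be applied to each intermediate output to obtain a
# step-by-step segmentation of the reasoning process (names are illustrative).
def stepwise_masks(program, image_features, modules, segment):
    outputs, masks = [], []
    for step in program:
        if step["function"] == "scene":
            out = modules["scene"](image_features)
        else:
            out = modules[step["function"]](*[outputs[i] for i in step["inputs"]])
        outputs.append(out)
        masks.append(segment(out))   # reuse the same Segment head at every step
    return masks                     # masks[-1] is the final prediction
```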
We found that this technique works quite well. In Figure 6 we provide four qualitative examples with various topologies and modules. We notice that all modules are performing their intended functionality, except the Unique module 5 . Yet after one more module, the segmentation mask becomes normal again. The quantitative analysis in Figure 7 confirms this observation: on average, IoU drops by 0.66 after each Unique module; but IoU significantly increases after each Same or Relate module, and these are the only modules that may come after Unique according to the templates. We conjecture that the network has learned some mechanism to treat Unique as the "preprocessing" step of the Same and Relate functionalities.
False-Premise Referring Expressions
In reality, referring expression systems may face all kinds of textual input, and not all of them will make sense. When presented with a referring expression that makes false assumptions (e.g. "The red sphere" when there is no sphere in the scene), the system should follow through as much as it can, and be robust enough to return zero foreground at the end. We test IEP-Ref's ability to deal with these false-premise referring expressions (c.f. [31]). Note that no such expressions appear during training.
We generate 10,000 referring expressions that refer to zero objects at the end. Qualitatively (see Figure 8), it is reassuring to see that intermediate modules are correctly doing their jobs, and a no-foreground prediction is made at the final step. Quantitatively, IEP-Ref predicts 0 foreground pixels more than 1/4 of the time, and ≤ 8 foreground pixels more than 1/3 of the time.

Footnote 5: The Unique module is supposed to simply carry over the previously referred object, yet from what we observe, its behavior is most similar to selecting the complement of the previously referred object, though this is far from consistent.
Conclusion
In this paper, we build the CLEVR-Ref+ dataset to complement existing ones for referring expressions. By choosing a synthetic setup, the advantage is that dataset bias can be minimized, and the ground truth visual reasoning process is readily available. We evaluated several state-of-the-art referring object detection and referring image segmentation models on CLEVR-Ref+. In addition, we propose the IEP-Ref model, which uses a module network approach and outperforms competing methods by a large margin. Detailed analyses are conducted to identify the strengths and weaknesses of these models. In particular, we found that ordinality and the same-attribute relationship seem to be the most difficult concepts to grasp.
Besides the correctness of the final segmentation mask, the correctness of the reasoning process is also important. We discovered that IEP-Ref provides an easy and natural way of revealing this process: simply attach the Segment module to each intermediate step.
Our quantitative evaluation shows a high IoU at intermediate steps as well, proving that the neural modules have indeed learned the job they are supposed to do. Another evidence is that IEP-Ref can correctly handle false-premise referring expressions.
Going forward, we are interested to see whether these findings will transfer and inspire better models on real data.
Supplementary Material
In this supplementary material, we begin by providing network architecture details of IEP-Ref to supplement Section 4.1 of the main paper. We then provide more analysis of the four models' performance on CLEVR-Ref+, to supplement Section 4.2 of the main paper. Finally, we show more qualitative examples (referring expression and ground truth box/mask) from CLEVR-Ref+.
A. Network Architectures in IEP-Ref
In Figure 7 of the main paper, we listed all modules used in our IEP-Ref model (except Segment). In IEP-Ref, each of these modules is parameterized with a small fully convolutional network and belongs to one of the following 4 categories:
• Preprocess: This component maps the image to the feature tensor. Its output is the input to the Scene module. See Table 4 for the network architecture.
• Unary: This includes the Scene, Filter X, Unique, Relate, Same X modules. It transforms one feature tensor to another. See Table 5 for the network architecture.
• Binary: This includes the And and Or modules. It transforms two feature tensors to one. See Table 6 for the network architecture.
• Postprocess: This only includes the Segment module. It transforms the 128-channel feature tensor to a 1-channel segmentation mask. See Table 7 for the network architecture.
Network architectures for Preprocess, Unary, Binary are directly inherited from IEP [16].
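To make the tables concrete, a minimal PyTorch sketch of the Unary and Binary blocks described in Tables 5 and 6 could look as follows; the class and argument names are ours, and this is an illustration rather than the released IEP-Ref code.

```python
import torch
import torch.nn as nn


class UnaryModule(nn.Module):
    """Residual block for Scene / Filter X / Unique / Relate / Same X (Table 5)."""

    def __init__(self, dim=128):
        super().__init__()
        self.conv1 = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                       # x: (B, 128, 20, 20)
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + out)               # rows (5)-(6): residual add, then ReLU


class BinaryModule(nn.Module):
    """Residual block for And / Or (Table 6): two feature tensors in, one out."""

    def __init__(self, dim=128):
        super().__init__()
        self.proj = nn.Conv2d(2 * dim, dim, kernel_size=1)
        self.conv1 = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x1, x2):                  # each: (B, 128, 20, 20)
        h = self.relu(self.proj(torch.cat([x1, x2], dim=1)))   # rows (3)-(5)
        out = self.conv2(self.relu(self.conv1(h)))             # rows (6)-(8)
        return self.relu(h + out)               # rows (9)-(10)
```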
B. More Model Analysis on CLEVR-Ref+
B.1. Number of Objects in a Scene
We suspect that the more objects there are in a scene, the harder it is for the model to carry out the referring reasoning steps. In Figure 9 we plot the performance of each model with respect to the number of objects in a scene. All models drop in performance when the number of objects increases, suggesting that the models tend to struggle when dealing with too many objects.
B.2. Schedule of Acquiring Reasoning Abilities
We are interested to see if, throughout the training process, the network exhibits a schedule of acquiring various reasoning abilities (e.g., spatial reasoning, logic, etc.). From Figure 10, it seems that no such schedule was developed, and performance steadily increases across different referring expression categories. This may be due to the random sampling during training, instead of active learning (c.f. [28]).
B.3. Novel Compositions
To further test the models' generalization ability, we also conducted experiments on the Compositional Generalization Test (CoGenT) data provided by CLEVR [15]. Here models are trained on objects with only a subset of all combinations, and then tested on both the same subset of combinations (valA) and another subset of combinations (valB). Results are summarized in Figure 11. We see a very small gap for detection models, suggesting that they have learned compositionality to generalize well. The gap for segmentation models, on the other hand, is larger.
Figure 3: Analyzing the spatial reasoning ability of different models. Horizontal axis is the number of spatial relations.
Figure 4: Effect of reasoning topology (Chain vs. Tree) on referring detection or segmentation performance.
Figure 5: Effect of relation type (Spatial vs. Same) on referring detection or segmentation performance.
Figure 7: Average IoU going into/out of each IEP-Ref module on the CLEVR-Ref+ validation set. Note that here IoU is not only computed at the end, but also at all intermediate steps. This shows that IoU remains high throughout visual reasoning. The large differences in modules marked in dark red are discussed in the text.
Figure 8: Our IEP-Ref model can correctly handle false-premise referring expressions even if they do not appear during training.
Figure 9: Effect of number of objects in a scene on referring detection or segmentation performance.
Figure 10: Performance across different referring expression categories throughout training. We inspect the performance every 1/6 of the entire training iterations.
Figure 11: Different models' performance on valA and valB of the CLEVR CoGenT data.
Table 1: Examples of converting questions to referring expressions.
Category | Question (CLEVR) | Referring Expression (CLEVR-Ref+)
Basic | How many cyan cubes are there? | The cyan cubes.
Spatial Relation | Are there any green cylinders to the left of the brown sphere? |
Table 2: Frequent category and words in RefCOCO+ [35].
Category | Example words | Frequency
object | shirt, head, chair, hat, pizza | 63.66%
human | man, woman, guy, girl, person | 42.54%
color | white, black, blue, red, green | 38.76%
spatial | back, next, behind, near, up | 23.86%
animal | zebra, elephant, horse, bear | 15.36%
attribute | big, striped, small, plaid, long | 10.55%
action | standing, holding, looking | 10.34%
ordinal | closest, furthest, first, third | 5.797%
compare | smaller, tallest, shorter, older | 5.247%
visible | fully visible, barely seen | 4.639%
Table 3: Referring object detection and referring image segmentation results on CLEVR-Ref+. We evaluated three existing models, as well as IEP-Ref which we adapted from its VQA counterpart. (0-Relate corresponds to Basic; 1-/2-/3-Relate to Spatial Relation; AND/OR to Logic.)
Method | 0-Relate | 1-Relate | 2-Relate | 3-Relate | AND | OR | Same | Accuracy | IoU
SLR [36] | 0.627 | 0.569 | 0.570 | 0.584 | 0.594 | 0.701 | 0.444 | 0.577 | -
MAttNet [34] | 0.566 | 0.623 | 0.634 | 0.624 | 0.723 | 0.737 | 0.454 | 0.609 | -
RMI [21] | 0.822 | 0.713 | 0.736 | 0.715 | 0.585 | 0.679 | 0.251 | - | 0.561
IEP-Ref (GT) | 0.928 | 0.895 | 0.908 | 0.908 | 0.879 | 0.881 | 0.647 | - | 0.816
IEP-Ref (700K prog.) | 0.920 | 0.884 | 0.902 | 0.898 | 0.860 | 0.869 | 0.636 | - | 0.806
IEP-Ref (18K prog.) | 0.907 | 0.858 | 0.874 | 0.862 | 0.829 | 0.847 | 0.605 | - | 0.782
IEP-Ref (9K prog.) | 0.910 | 0.858 | 0.847 | 0.811 | 0.778 | 0.791 | 0.626 | - | 0.760
Table 4: Network architecture for the Preprocess module.
Layer | Output size
Input image | 3 × 320 × 320
ResNet101 [8] conv4_6 | 1024 × 20 × 20
Conv(3 × 3, 1024 → 128) | 128 × 20 × 20
ReLU | 128 × 20 × 20
Conv(3 × 3, 128 → 128) | 128 × 20 × 20
ReLU | 128 × 20 × 20

Table 5: Network architecture for the Unary modules.
Index | Layer | Output size
(1) | Previous module output | 128 × 20 × 20
(2) | Conv(3 × 3, 128 → 128) | 128 × 20 × 20
(3) | ReLU | 128 × 20 × 20
(4) | Conv(3 × 3, 128 → 128) | 128 × 20 × 20
(5) | Residual: Add (1) and (4) | 128 × 20 × 20
(6) | ReLU | 128 × 20 × 20

Table 6: Network architecture for the Binary modules.
Index | Layer | Output size
(1) | Previous module output | 128 × 20 × 20
(2) | Previous module output | 128 × 20 × 20
(3) | Concatenate (1) and (2) | 256 × 20 × 20
(4) | Conv(1 × 1, 256 → 128) | 128 × 20 × 20
(5) | ReLU | 128 × 20 × 20
(6) | Conv(3 × 3, 128 → 128) | 128 × 20 × 20
(7) | ReLU | 128 × 20 × 20
(8) | Conv(3 × 3, 128 → 128) | 128 × 20 × 20
(9) | Residual: Add (5) and (8) | 128 × 20 × 20
(10) | ReLU | 128 × 20 × 20

Table 7: Network architecture for the Segment module.
Layer | Output size
Previous module output | 128 × 20 × 20
Unary module | 128 × 20 × 20
Conv(1 × 1, 128 → 128) | 128 × 20 × 20
ReLU | 128 × 20 × 20
Bilinear upsample | 128 × 320 × 320
Conv(1 × 1, 128 → 128) | 128 × 320 × 320
ReLU | 128 × 320 × 320
Conv(1 × 1, 128 → 32) | 32 × 320 × 320
ReLU | 32 × 320 × 320
Conv(1 × 1, 32 → 4) | 4 × 320 × 320
ReLU | 4 × 320 × 320
Conv(1 × 1, 4 → 1) | 1 × 320 × 320
1 All data and code concerning CLEVR-Ref+ and IEP-Ref have been released at https://cs.jhu.edu/~cxliu/2019/clevr-ref+
2 A referring expression family contains a template for constructing functional programs and several text templates that provide multiple ways of expressing these programs in natural language.
3 For instance, left/right/front/behind; big/small; metal/rubber.
4 This is our default IEP-Ref setting unless otherwise specified.
Acknowledgments
This research is supported by NSF award CCF-1317376 and ONR N00014-12-1-0883.

C. More Data Examples from CLEVR-Ref+
The remaining pages show random images, referring expressions, and the referring ground truth from our CLEVR-Ref+ dataset. In particular, we choose at least one example from each referring expression category (the 7 middle columns in Table 3 of the main paper). We show both detection ground truth (Figure 12) and segmentation ground truth (Figure 13).
(a) Look at matte thing that is on the left side of the red object that is behind the second one of the object(s) from right; The first one of the rubber thing(s) from front that are right of it
(b) The objects that are the seventh one of the thing(s) from right that are in front of the nineth one of the thing(s) from front or the second one of the thing(s) from front
(c) The big metallic object(s) that are both to the left of the third one of the large thing(s) from left and on the right side of the first one of the object(s) from front
(d) The fully visible yellow ball(s)
(e) Any other things that are the same shape as the fourth one of the rubber thing(s) from right
(f) Find object that is behind the fifth one of the object(s) from left; The cylinder(s) that are to the right of it
(g) Look at partially visible object(s); The second one of the thing(s) from left that are on the right side of it
(h) The second one of the shiny cylinder(s) from right that are to the right of the thing that is behind the thing that is on the left side of the first one of the tiny thing(s) from left
(i) The blue things that are either the fourth one of the thing(s) from right or the first one of the tiny ball(s) from front
(j) The matte object(s) that are behind the second one of the cylinder(s) from right and on the right side of the first one of the object(s) from left
Neural module networks. J Andreas, M Rohrbach, T Darrell, D Klein, CVPR. IEEE Computer SocietyJ. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. In CVPR, pages 39-48. IEEE Computer Society, 2016. 2
VQA: visual question answering. S Antol, A Agrawal, J Lu, M Mitchell, D Batra, C L Zitnick, D Parikh, ICCV. IEEE Computer SocietyS. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: visual question answering. In ICCV, pages 2425-2433. IEEE Computer Society, 2015. 1
Visual referring expression recognition: What do systems actually learn?. V Cirik, L Morency, T Berg-Kirkpatrick, Association for Computational Linguistics. 13NAACL-HLTV. Cirik, L. Morency, and T. Berg-Kirkpatrick. Visual refer- ring expression recognition: What do systems actually learn? In NAACL-HLT (2), pages 781-787. Association for Compu- tational Linguistics, 2018. 1, 3
Imagenet: A large-scale hierarchical image database. J Deng, W Dong, R Socher, L Li, K Li, F Li, CVPR. IEEE Computer SocietyJ. Deng, W. Dong, R. Socher, L. Li, K. Li, and F. Li. Ima- genet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE Computer Society, 2009. 5
Long-term recurrent convolutional networks for visual recognition and description. J Donahue, L A Hendricks, S Guadarrama, M Rohrbach, S Venugopalan, T Darrell, K Saenko, CVPR. J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, T. Darrell, and K. Saenko. Long-term recur- rent convolutional networks for visual recognition and de- scription. In CVPR, pages 2625-2634. IEEE Computer So- ciety, 2015. 1
Are you talking to a machine? dataset and methods for multilingual image question. H Gao, J Mao, J Zhou, Z Huang, L Wang, W Xu, NIPS. H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? dataset and methods for mul- tilingual image question. In NIPS, pages 2296-2304, 2015. 1
Making the V in VQA matter: Elevating the role of image understanding in visual question answering. Y Goyal, T Khot, D Summers-Stay, D Batra, D Parikh, CVPR. IEEE Computer SocietyY. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In CVPR, pages 6325-6334. IEEE Computer Society, 2017. 3
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, CVPR. IEEE Computer Society511K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. In CVPR, pages 770-778. IEEE Computer Society, 2016. 5, 11
Explainable neural computation via stack neural module networks. R Hu, J Andreas, T Darrell, K Saenko, ECCV. Springer112113R. Hu, J. Andreas, T. Darrell, and K. Saenko. Explainable neural computation via stack neural module networks. In ECCV (7), volume 11211 of Lecture Notes in Computer Sci- ence, pages 55-71. Springer, 2018. 2, 3
Learning to reason: End-to-end module networks for visual question answering. R Hu, J Andreas, M Rohrbach, T Darrell, K Saenko, ICCV. IEEE Computer SocietyR. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko. Learning to reason: End-to-end module networks for visual question answering. In ICCV, pages 804-813. IEEE Com- puter Society, 2017. 3
Modeling relationships in referential expressions with compositional modular networks. R Hu, M Rohrbach, J Andreas, T Darrell, K Saenko, CVPR. IEEE Computer SocietyR. Hu, M. Rohrbach, J. Andreas, T. Darrell, and K. Saenko. Modeling relationships in referential expressions with com- positional modular networks. In CVPR, pages 4418-4427. IEEE Computer Society, 2017. 2
Segmentation from natural language expressions. R. Hu, M. Rohrbach, and T. Darrell. In ECCV (1), volume 9905 of Lecture Notes in Computer Science, pages 108-124. Springer, 2016. 2
Natural language object retrieval. R Hu, H Xu, M Rohrbach, J Feng, K Saenko, T Darrell, CVPR. IEEE Computer Society1R. Hu, H. Xu, M. Rohrbach, J. Feng, K. Saenko, and T. Dar- rell. Natural language object retrieval. In CVPR, pages 4555-4564. IEEE Computer Society, 2016. 1, 2
Compositional attention networks for machine reasoning. D. A. Hudson and C. D. Manning. CoRR, abs/1803.03067, 2018. 3
CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick. In CVPR, pages 1988-1997. IEEE Computer Society, 2017. 1, 2, 3, 7, 12
Inferring and executing programs for visual reasoning. J Johnson, B Hariharan, L Van Der Maaten, J Hoffman, L Fei-Fei, C L Zitnick, R B Girshick, ICCV. IEEE Computer Society511J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick. Inferring and executing programs for visual reasoning. In ICCV, pages 3008-3017. IEEE Computer Society, 2017. 2, 3, 5, 11
Deep visual-semantic alignments for generating image descriptions. A Karpathy, F Li, CVPR. IEEE Computer SocietyA. Karpathy and F. Li. Deep visual-semantic alignments for generating image descriptions. In CVPR, pages 3128-3137. IEEE Computer Society, 2015. 1
Referitgame: Referring to objects in photographs of natural scenes. S Kazemzadeh, V Ordonez, M Matten, T L Berg, EMNLP. 13S. Kazemzadeh, V. Ordonez, M. Matten, and T. L. Berg. Referitgame: Referring to objects in photographs of natural scenes. In EMNLP, pages 787-798. ACL, 2014. 1, 3
Adam: A method for stochastic optimization. D. P. Kingma and J. Ba. CoRR, abs/1412.6980, 2014. 5
Referring image segmentation via recurrent refinement networks. R Li, K Li, Y.-C Kuo, M Shu, X Qi, X Shen, J Jia, CVPR. R. Li, K. Li, Y.-C. Kuo, M. Shu, X. Qi, X. Shen, and J. Jia. Referring image segmentation via recurrent refinement net- works. In CVPR, pages 5745-5753. IEEE Computer Society, 2018. 2
Recurrent multimodal interaction for referring image segmentation. C Liu, Z Lin, X Shen, J Yang, X Lu, A L Yuille, ICCV. IEEE Computer Society25C. Liu, Z. Lin, X. Shen, J. Yang, X. Lu, and A. L. Yuille. Re- current multimodal interaction for referring image segmen- tation. In ICCV, pages 1280-1289. IEEE Computer Society, 2017. 2, 5
Attention correctness in neural image captioning. C Liu, J Mao, F Sha, A L Yuille, AAAI. AAAI PressC. Liu, J. Mao, F. Sha, and A. L. Yuille. Attention correct- ness in neural image captioning. In AAAI, pages 4176-4182. AAAI Press, 2017. 1
Comprehension-guided referring expressions. R Luo, G Shakhnarovich, CVPR. IEEE Computer SocietyR. Luo and G. Shakhnarovich. Comprehension-guided refer- ring expressions. In CVPR, pages 3125-3134. IEEE Com- puter Society, 2017. 2
Generation and comprehension of unambiguous object descriptions. J Mao, J Huang, A Toshev, O Camburu, A L Yuille, K Murphy, CVPR. IEEE Computer SocietyJ. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy. Generation and comprehension of unambiguous object descriptions. In CVPR, pages 11-20. IEEE Computer Society, 2016. 1, 2, 3, 5
Deep captioning with multimodal recurrent neural networks (m-RNN). J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille. CoRR, abs/1412.6632, 2014. 1
Dynamic multimodal instance segmentation guided by natural language queries. E Margffoy-Tuay, J C Pérez, E Botero, P Arbeláez, ECCV. Springer11215E. Margffoy-Tuay, J. C. Pérez, E. Botero, and P. Arbeláez. Dynamic multimodal instance segmentation guided by natu- ral language queries. In ECCV (11), volume 11215 of Lec- ture Notes in Computer Science, pages 656-672. Springer, 2018. 2
Transparency by design: Closing the gap between performance and interpretability in visual reasoning. D. Mascharka, P. Tran, R. Soklaski, and A. Majumdar. CoRR, abs/1803.05268, 2018. 2, 3
Learning by asking questions. I Misra, R B Girshick, R Fergus, M Hebert, A Gupta, L Van Der Maaten, CVPR. 12I. Misra, R. B. Girshick, R. Fergus, M. Hebert, A. Gupta, and L. van der Maaten. Learning by asking questions. In CVPR, pages 11-20. IEEE Computer Society, 2018. 12
Modeling context between objects for referring expression understanding. V K Nagaraja, V I Morariu, L S Davis, ECCV (4). Springer9908V. K. Nagaraja, V. I. Morariu, and L. S. Davis. Modeling con- text between objects for referring expression understanding. In ECCV (4), volume 9908 of Lecture Notes in Computer Science, pages 792-807. Springer, 2016. 2
Film: Visual reasoning with a general conditioning layer. E Perez, F Strub, H Vries, V Dumoulin, A C Courville, AAAI. AAAI PressE. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. C. Courville. Film: Visual reasoning with a general condition- ing layer. In AAAI, pages 3942-3951. AAAI Press, 2018. 3
Question relevance in VQA: identifying non-visual and false-premise questions. A Ray, G Christie, M Bansal, D Batra, D Parikh, EMNLP. The Association for Computational LinguisticsA. Ray, G. Christie, M. Bansal, D. Batra, and D. Parikh. Question relevance in VQA: identifying non-visual and false-premise questions. In EMNLP, pages 919-924. The Association for Computational Linguistics, 2016. 8
Grounding of textual phrases in images by reconstruction. A Rohrbach, M Rohrbach, R Hu, T Darrell, B Schiele, ECCV. Springer9905A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by re- construction. In ECCV (1), volume 9905 of Lecture Notes in Computer Science, pages 817-834. Springer, 2016. 2
A simple neural network module for relational reasoning. A Santoro, D Raposo, D G T Barrett, M Malinowski, R Pascanu, P Battaglia, T Lillicrap, NIPS. A. Santoro, D. Raposo, D. G. T. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neu- ral network module for relational reasoning. In NIPS, pages 4974-4983, 2017. 3
Mattnet: Modular attention network for referring expression comprehension. L Yu, Z Lin, X Shen, J Yang, X Lu, M Bansal, T L Berg, CVPR. 25L. Yu, Z. Lin, X. Shen, J. Yang, X. Lu, M. Bansal, and T. L. Berg. Mattnet: Modular attention network for referring ex- pression comprehension. In CVPR. IEEE Computer Society, 2018. 2, 5
Modeling context in referring expressions. L Yu, P Poirson, S Yang, A C Berg, T L Berg, ECCV. Springer9906L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg. Mod- eling context in referring expressions. In ECCV (2), volume 9906 of Lecture Notes in Computer Science, pages 69-85. Springer, 2016. 1, 2, 3, 4, 5
A joint speakerlistener-reinforcer model for referring expressions. L Yu, H Tan, M Bansal, T L Berg, CVPR. IEEE Computer Society25L. Yu, H. Tan, M. Bansal, and T. L. Berg. A joint speaker- listener-reinforcer model for referring expressions. In CVPR, pages 3521-3529. IEEE Computer Society, 2017. 2, 5
Yin and yang: Balancing and answering binary visual questions. P Zhang, Y Goyal, D Summers-Stay, D Batra, D Parikh, CVPR. IEEE Computer SocietyP. Zhang, Y. Goyal, D. Summers-Stay, D. Batra, and D. Parikh. Yin and yang: Balancing and answering binary visual questions. In CVPR, pages 5014-5022. IEEE Com- puter Society, 2016. 3
Visual7w: Grounded question answering in images. Y Zhu, O Groth, M S Bernstein, L Fei-Fei, CVPR. IEEE Computer SocietyY. Zhu, O. Groth, M. S. Bernstein, and L. Fei-Fei. Visual7w: Grounded question answering in images. In CVPR, pages 4995-5004. IEEE Computer Society, 2016. 1
| [] |
[
"Author(s). 2022. Towards Visual-Prompt Temporal Answering Grounding in Medical Instructional Video",
"Author(s). 2022. Towards Visual-Prompt Temporal Answering Grounding in Medical Instructional Video"
] | [
"Bin Li \nCollege of Electrical and Information Engineering\nHunan University\nChangshaChina\n",
"Yixuan Weng wengsyx@gmail.com \nCollege of Electrical and Information Engineering\nHunan University\nChangshaChina\n\nInstitute of Automation\nNational Laboratory of Pattern Recognition\nChinese Academy Sciences\nBeijingChina\n",
"Shutao Li shutao_li@hnu.edu.com \nCollege of Electrical and Information Engineering\nHunan University\nChangshaChina\n"
] | [
"College of Electrical and Information Engineering\nHunan University\nChangshaChina",
"College of Electrical and Information Engineering\nHunan University\nChangshaChina",
"Institute of Automation\nNational Laboratory of Pattern Recognition\nChinese Academy Sciences\nBeijingChina",
"College of Electrical and Information Engineering\nHunan University\nChangshaChina"
] | [
"Portugal ACM Reference Format: Anonymous"
] | The temporal answering grounding in the video (TAGV) is a new task naturally derived from temporal sentence grounding in the video (TSGV). Given an untrimmed video and a text question, this task aims at locating the matching span from the video that can semantically answer the question. Existing methods tend to formulate the TAGV task with a visual span-based question answering (QA) approach by matching the visual frame span queried by the text question. However, due to the weak correlations and huge gaps of the semantic features between the textual question and visual answer, existing methods adopting visual span predictor perform poorly in the TAGV task. To bridge these gaps, we propose a visualprompt text span localizing (VPTSL) method, which introduces the timestamped subtitles as a passage to perform the text span localization for the input text question, and prompts the visual highlight features into the pre-trained language model (PLM) for enhancing the joint semantic representations. Specifically, the context query attention is utilized to perform cross-modal interaction between the extracted textual and visual features. Then, the highlight features are obtained through the video-text highlighting for the visual prompt. To alleviate semantic differences between textual and visual features, we design the text span predictor by encoding the question, the subtitles, and the prompted visual highlight features with the PLM. As a result, the TAGV task is formulated to predict the span of subtitles matching the visual answer. Extensive experiments on the medical instructional dataset, namely MedVidQA, show that the proposed VPTSL outperforms the state-of-the-art (SOTA) method by 28.36% in terms of mIOU with a large margin, which demonstrates the effectiveness of the proposed visual prompt and the text span predictor. | 10.48550/arxiv.2203.06667 | [
"https://arxiv.org/pdf/2203.06667v6.pdf"
] | 247,446,830 | 2203.06667 | 9e06005dd4c31c6b885f66bb73e051c531d957cb |
Author(s). 2022. Towards Visual-Prompt Temporal Answering Grounding in Medical Instructional Video
October 10-14, 2022. October 10-14, 2022
Bin Li
College of Electrical and Information Engineering
Hunan University
ChangshaChina
Yixuan Weng wengsyx@gmail.com
College of Electrical and Information Engineering
Hunan University
ChangshaChina
Institute of Automation
National Laboratory of Pattern Recognition
Chinese Academy Sciences
BeijingChina
Shutao Li shutao_li@hnu.edu.com
College of Electrical and Information Engineering
Hunan University
ChangshaChina
Author(s). 2022. Towards Visual-Prompt Temporal Answering Grounding in Medical Instructional Video
Portugal ACM Reference Format: Anonymous
the 30th ACM International Conference on Multimedia (MM '22), October 10-14, 2022. https://doi.org/10.1145/3474085.xxxxxxx
* These authors contribute equally to this work.
† Work done during an internship at Chinese Academy Sciences.
‡ Corresponding author.
CCS CONCEPTS
Information systems → Video search; Computing methodologies → Neural networks
The temporal answering grounding in the video (TAGV) is a new task naturally derived from temporal sentence grounding in the video (TSGV). Given an untrimmed video and a text question, this task aims at locating the matching span from the video that can semantically answer the question. Existing methods tend to formulate the TAGV task with a visual span-based question answering (QA) approach by matching the visual frame span queried by the text question. However, due to the weak correlations and huge gaps of the semantic features between the textual question and visual answer, existing methods adopting visual span predictor perform poorly in the TAGV task. To bridge these gaps, we propose a visualprompt text span localizing (VPTSL) method, which introduces the timestamped subtitles as a passage to perform the text span localization for the input text question, and prompts the visual highlight features into the pre-trained language model (PLM) for enhancing the joint semantic representations. Specifically, the context query attention is utilized to perform cross-modal interaction between the extracted textual and visual features. Then, the highlight features are obtained through the video-text highlighting for the visual prompt. To alleviate semantic differences between textual and visual features, we design the text span predictor by encoding the question, the subtitles, and the prompted visual highlight features with the PLM. As a result, the TAGV task is formulated to predict the span of subtitles matching the visual answer. Extensive experiments on the medical instructional dataset, namely MedVidQA, show that the proposed VPTSL outperforms the state-of-the-art (SOTA) method by 28.36% in terms of mIOU with a large margin, which demonstrates the effectiveness of the proposed visual prompt and the text span predictor.
INTRODUCTION
Figure 1: Illustration of the temporal answering localization in the medical instructional video, where the visual answer with the subtitles is located on the video timeline to perform a demonstration (example question: "How to examine lymph nodes in head and neck?"; visual answer ground truth: 92s ~ 138s). Below are the differences between the existing method and our method.
"Hey, Siri, could you please show me how to examine lymph nodes in the head and neck ?" Then, the video containing the right processes comes into our eyes... Recently, the surge in availability of online videos has changed the way of acquiring information and knowledge [1][2][3]. Many people prefer instructional videos to teach or learn how to accomplish a particular task with a series of stepby-step procedures [4]. The temporal answering grounding in the video (TAGV) is a new task that has attracted increasing attention due to the visual and verbal communication at the same time in an effective and efficient manner [5,6]. The goal of the TAGV task is to find the matching video answer span corresponding to its question, aka., visual answering localization. As the natural development from temporal sentence grounding in the video (TSGV) [7,8], the TAGV task is challenging since there are huge gaps between two different modalities. The text is discontinuous in syntactic structure, while the video is continuous within adjacent frames [9]. People can easily answer through natural language but are hard to act without the moment guidance in the video to demonstrate their answers. As shown at the top of Figure 1, this example illustrates the temporal answering localization in the medical instructional video, where the figure is borrowed from the original work [6] with the author's permission and change. As we can see in this figure, the particular temporal answering segment is preferred rather than the entire video as the answer to the given question "How to examine lymph nodes in a head and neck ?". How to design a cross-modal method that can locate the video timeline correctly is still one of the key points in the current research [6,10].
Many efforts have been made to realize a reliable and accurate natural language temporal localization in the video [10][11][12], where similar tasks are proven to be important for cross-modal understanding, such as video moment retrieval (VMR) [13], and video question answering (VQA) [14]. On the query side, the query of the TAGV task is a text question instead of a direct text description in the VMR. On the answer side, the answer of the TAGV is located on the video timeline different from the text answering the visual question in the VQA. Therefore, the existing methods may perform poorly in the TAGV task. Similar to the question answering (QA) problem in the natural language processing (NLP) field, we resort to the existing span-based grounding methods [10,12] to address the TAGV problem.
As shown in the middle of Figure 1, existing span-based methods tend to encode video and text separately for feature encoding and adopt cross-modal modeling to construct feature representations in the same space. The visual answer spans can be located with the head and tail in the video frame. However, there is a huge difference in the semantic information between text and video [15,16], where the located video spans queried by the text question may be biased. Moreover, the weak correlations between text queries and video frames may lead to insufficient representation for an answer [17].
To address the above issues, we propose a visual-prompt text span localization (VPTSL) method, which aims to adopt visual highlight features to enhance the text span localization with the pretrained language model (PLM). Different from the existing methods, we leverage the highlight feature as the visual prompt to enhance textual features from the PLM, where the joint semantic representations can be learned together. Given a text question, the timestamped subtitle to the visual answer is modeled to be predicted as the final result. We illustrate the proposed VPTSL method at the bottom of Figure 1.
Our main contributions are three-fold:
• To the best of our knowledge, this is the very first attempt to apply the text span predictor for solving the temporal answering grounding problem, where the timestamps of the subtitles corresponding to the visual answer are formulated for prediction.
• The visual highlight features are designed to prompt the visual information for the textual features, where the verbal and the visual part of the video can be jointly learned through the PLM.
• Extensive experiments are performed to demonstrate the effectiveness of the proposed VPTSL on the medical instructional dataset (MedVidQA), on which we outperform other state-of-the-art methods by 28.36 in terms of mIoU, a large margin.
RELATED WORK
2.1 Temporal Sentence Grounding in Video
The temporal sentence grounding in the video (TSGV) is a critical task for cross-modal understanding [8,18]. This task takes a video-query pair as input where the video is a collection of consecutive image frames and the query is a sequence of words. Early attempts resort to the sliding window-based [19][20][21] and scanningand-ranking based [22][23][24][25] paradigm. The former first generates multiple segments and then ranks them according to the similarity between segments and the query. The latter samples candidate segments via the sliding window mechanism and subsequently integrates the query with each segment representation via a matrix operation. The latest works tend to model this problem without segment proposal, which predicts answers directly without generating candidate answers [26]. For this convenience, many works tend to adopt the visual span predictor for locating the sentence grounding segments, where more efficient cross-modal interaction modeling are designed [10,12,16]. However, due to the gaps between the textual features and visual features [27,28], current methods adopted in the TSGV performs poorly in the temporal answering grounding in the video (TAGV). Different from them, our method tries to model subtitles with timestamps for locating the visual answer. The text span predictor is designed in the proposed method, where more semantic information between the predicted answers and the input text question can be jointly learned through the pre-trained language model.
Prompt Learning Tuning
The concept of prompt tuning originates from the NLP domain [29], whose motivation is to provide pre-trained language models, such as BERT [30] or GPT [31], with extra knowledge. Specifically, given a pre-trained language model, the manual designed templates are used to augment the input with extra information [32]. The basic idea of prompting is to induce a pre-trained language model for downstream prediction given cloze-style prompts, such as sentiment analysis [33]. The key lies in how to design the prompt part for tuning the pre-trained model [34]. In the computer vision field, prompt learning is a nascent research direction that has only been explored very recently [35][36][37]. The pioneering works have designed many efficient modules of cross-modal interaction for the downstream tasks [38,39], where the features of different modalities are optimised continuously in the embedding space. Our method is based on the pre-trained language model, adopting the visual prompt feature for perceiving the verbal and non-verbal parts. Concretely, the video-text highlighting is designed for capturing the frame supervision for the text span predictor. The visual prompt features are utilized as the visual tokens for enhancing the pre-trained language model with non-verbal semantics.
MAIN METHOD
We propose the Visual-Prompt Text Span Localization (VPTSL) method for the TAGV task, whose goal is to predict the span of the subtitle timestamp matching the answering frame timeline with the pre-trained language model. The overview of VPTSL is illustrated in Figure 2, which consists of four components: (1) cross-modal modeling: the extracted visual and the textual features are processed through the cross-modal interaction.
(2) Video-text highlighting: the text question is used to query the video frames for obtaining the predicted highlight feature supervised by the highlight ground truth. (3) Visual prompt: highlight feature is adopted to prompt the pre-trained language model, where the textual features can capture the visual information while jointly learning. (4) Text span predictor: the textual tokens with highlight prompts are encoded through the pre-trained language model to predict the subtitle timestamp spans.
Cross-modal Modeling
Given an untrimmed video V = {f_i}_{i=1}^{n_f} and the text question Q = {q_j}_{j=1}^{m}, where n_f and m are the number of frames and tokens, respectively. For obtaining the well-formed semantic representations of the two modalities, we will elaborate on the feature extractor and the cross-modal interaction.
Figure 4: The structure of the Highlighting module (the question tokens w_1...w_m are pooled into h_Q by self-attention, concatenated with each visual feature of Ṽ, and scored by Conv1D + Sigmoid to produce S_h).
3.1.1 Feature Extractor.
For each video V, we extract frames (16 frames per second) and then obtain the corresponding RGB visual features V′ = {v_i}_{i=1}^{n} ∈ R^{n×d_v} using a 3D ConvNet (I3D) pre-trained on the Kinetics dataset [40], where n is the number of extracted features and d_v is the dimension of the visual features. The extracted features are sent to a visual projection to obtain the visual features V ∈ R^{n×d}. The visual projection is designed as a Conv1D [41] module with dropout (p=0.1). For the text question part, we tokenize the question with the tokenizer. Then, the textual tokens are encoded through the DeBERTa pre-trained language model [42] to obtain the well-formed textual features {w_1, w_2, ..., w_m} ∈ R^{m×d_w}, where m is the length of the text question and d_w is the dimension of the output encoding. After performing the linear projection, the final textual features Q ∈ R^{m×d} are obtained, so that both modalities share the same dimension d.
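As an illustration of the two projections described above, a minimal PyTorch sketch might be the following; beyond what the paper states (a Conv1D visual projection with dropout 0.1, a linear projection for the DeBERTa outputs, kernel size 7 and hidden size 1024 from the implementation details), the layer choices and names are our assumptions.

```python
import torch.nn as nn


class VisualProjection(nn.Module):
    """Project I3D features (n, d_v) to the shared dimension d with Conv1D + dropout."""

    def __init__(self, d_v=1024, d=1024, kernel_size=7, dropout=0.1):
        super().__init__()
        self.conv = nn.Conv1d(d_v, d, kernel_size=kernel_size, padding=kernel_size // 2)
        self.dropout = nn.Dropout(dropout)

    def forward(self, v):                                   # v: (batch, n, d_v)
        v = self.conv(v.transpose(1, 2)).transpose(1, 2)    # (batch, n, d)
        return self.dropout(v)


# textual side: DeBERTa hidden states (batch, m, d_w) -> shared dimension d
text_projection = nn.Linear(1024, 1024)
```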
3.1.2 Cross-modal Interaction. After obtaining both the visual (V) and textual (Q) features, we perform the Context Query Attention, which is inspired by the work [10]. This module aims to capture the cross-modal interactions through context-to-query (A) and query-to-context (B) process. The attention weights are computed as:
A = S_r · Q ∈ R^{n×d}, B = S_r · S_c^T · V ∈ R^{n×d}
where S ∈ R^{n×m} is the similarity matrix between the visual features V and the textual features Q, and S_r and S_c are the row-wise and column-wise normalizations of S by SoftMax, respectively. Finally, the output of the context-query attention is written as:
Ṽ = FFN([V; A; V ⊙ A; V ⊙ B])    (1)
where FFN is a single feed-forward layer, and ⊙ denotes element-wise multiplication.
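A compact PyTorch sketch of this context-query attention (an unmasked, simplified version following the VSLNet-style formulation [10]; the trilinear similarity and all names are our reading, not released code) is given below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextQueryAttention(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.score = nn.Linear(3 * dim, 1)          # trilinear similarity, simplified
        self.ffn = nn.Linear(4 * dim, dim)          # single feed-forward layer in Eq. (1)

    def forward(self, V, Q):                        # V: (B, n, d), Q: (B, m, d)
        n, m = V.size(1), Q.size(1)
        v = V.unsqueeze(2).expand(-1, -1, m, -1)    # (B, n, m, d)
        q = Q.unsqueeze(1).expand(-1, n, -1, -1)    # (B, n, m, d)
        S = self.score(torch.cat([v, q, v * q], dim=-1)).squeeze(-1)   # (B, n, m)
        S_r = F.softmax(S, dim=2)                   # row-wise normalization
        S_c = F.softmax(S, dim=1)                   # column-wise normalization
        A = torch.bmm(S_r, Q)                                        # context-to-query
        B_ = torch.bmm(torch.bmm(S_r, S_c.transpose(1, 2)), V)       # query-to-context
        return self.ffn(torch.cat([V, A, V * A, V * B_], dim=-1))    # Ṽ in Eq. (1)
```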
Video-text Highlighting
3.2.1 Highlight Module. Inspired by the work [10], we design the visual highlight module, which aims to percept the non-verbal part in the videos. As shown in Figure 3, the ground truth span locates in the verbal part, where the subtitles are contained. However, for an instructional video, the non-verbal part also counts a lot, so the highlight module is designed to enlarge the ground truth of text span. Specifically, we consider the verbal part as the foreground and the rest are the background in the video. The target text span boundaries are enlarged to cover the verbal and the non-verbal information, where the extension ratio is controlled by the hyperparameter . The highlight ground truth time span is calculated as follows
T_highlight = T_e − T_s,    (2)
where T_highlight is the highlight ground truth time span, T_e is the end ground truth time, and T_s is the start ground truth time.
Similar to the work [10], we extend the non-verbal frames in the video as the extend part, which can be calculated as
T_extend = T_highlight · (α + 1)    (3)
where T_extend is the extended highlight ground truth time and α is the extension-ratio hyperparameter.
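For illustration, the frame-level highlight targets implied by Eqs. (2)-(3) could be built as follows; the symmetric placement of the extension around the answer span and all names are our assumptions.

```python
import torch


def highlight_labels(t_s, t_e, num_frames, duration, alpha=0.25):
    """Frame-level 0/1 highlight targets for an answer span (t_s, t_e) extended by ratio alpha."""
    span = t_e - t_s                              # Eq. (2)
    extended = span * (alpha + 1)                 # Eq. (3)
    pad = (extended - span) / 2                   # assumption: extend symmetrically around the span
    lo, hi = max(0.0, t_s - pad), min(duration, t_e + pad)
    times = torch.arange(num_frames, dtype=torch.float32) * duration / num_frames
    return ((times >= lo) & (times <= hi)).float()
```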
The textual features fed into the Highlight Module are denoted as Q = [w_1, w_2, ..., w_m] ∈ R^{m×d}. The self-attention mechanism [43] is performed to obtain the pooled textual feature h_Q ∈ R^{1×d}. Then h_Q is concatenated with each feature in Ṽ to form V̄ = [v̄_1, ..., v̄_n] ∈ R^{n×2d}, where v̄_i = [h_Q; ṽ_i], i ∈ [1, n].
The highlighting score is computed as:
S_h = σ(Conv1D(V̄))
where σ denotes the Sigmoid activation and S_h ∈ R^n.
3.2.2 Highlight Projection. The highlighted features are required to be projected to the same dimension as the textual features, which can be calculated by:
S′_h = f_proj(S_h)    (4)
where f_proj denotes the projection layer.
3.2.3 Highlight Optimization. Accordingly, the highlight loss is computed with the BCE loss function, which is formulated as:
L_highlight = f_BCE(S′_h, Y_h)    (5)
where Y_h denotes the frame-level highlight ground truth derived from the extended span T_extend.
Moreover, the highlight module is trained in an end-to-end manner, and its contribution to the total loss can be written as L_1:
L_1 = L_highlight.    (6)
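Putting Sections 3.2.1-3.2.3 together, a minimal sketch of the highlight scoring and its BCE supervision could look like this; the simple attention pooling, the 1×1 Conv1D, and all names are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoTextHighlight(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.attn = nn.Linear(dim, 1)                       # simple self-attention pooling (assumption)
        self.conv = nn.Conv1d(2 * dim, 1, kernel_size=1)    # scores each concatenated [h_Q ; v_i]

    def forward(self, Q, V_tilde):                          # Q: (B, m, d), V_tilde: (B, n, d)
        w = F.softmax(self.attn(Q), dim=1)                  # (B, m, 1) token weights
        h_Q = (w * Q).sum(dim=1, keepdim=True)              # (B, 1, d) pooled question feature
        fused = torch.cat([h_Q.expand(-1, V_tilde.size(1), -1), V_tilde], dim=-1)  # (B, n, 2d)
        return torch.sigmoid(self.conv(fused.transpose(1, 2))).squeeze(1)          # S_h: (B, n)


def highlight_loss(scores, targets):                        # targets: frame-level 0/1 labels
    return F.binary_cross_entropy(scores, targets)          # Eq. (5)
```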
Algorithm 1 Subtitle Answer Span Selection
Input: Subtitle collection C with timestamps, where each subtitle c ∈ C has its corresponding timestamp (c.start, c.end); start time of the visual answer T_s; end time of the visual answer T_e. Output: subtitle start and end time (T_start, T_end).
d_s ← +∞, d_e ← +∞
for c ∈ C do
  if |c.start − T_s| < d_s then
    d_s ← |c.start − T_s|; T_start ← c.start
  end if
  if |c.end − T_e| ≤ d_e then
    d_e ← |c.end − T_e|; T_end ← c.end
  end if
end for
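Algorithm 1 translates directly into Python; the subtitle objects below are assumed to expose start/end timestamps in seconds.

```python
def select_subtitle_span(subtitles, t_s, t_e):
    """Pick the subtitle start closest to the visual answer start t_s and the
    subtitle end closest to the visual answer end t_e (Algorithm 1)."""
    best_s = best_e = float("inf")
    t_start = t_end = None
    for sub in subtitles:                         # each sub has .start / .end in seconds
        if abs(sub.start - t_s) < best_s:
            best_s, t_start = abs(sub.start - t_s), sub.start
        if abs(sub.end - t_e) <= best_e:
            best_e, t_end = abs(sub.end - t_e), sub.end
    return t_start, t_end
```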
Visual Prompt
3.3.1 Prompt Designing. We use the visual highlight features as the visual token for prompting the pre-trained language model. Specifically, the highlight feature has the same dimension as the input text tokens and is therefore treated as a visual token. On the one hand, the visual prompt covers the non-verbal part that the text tokens may lack. On the other hand, the visual prompt is supervised by the visual frames, so the visual features can provide extra information as knowledge for the pre-trained model during prompt tuning [37].
3.3.2 Prompt Tuning. Prompt tuning is considered to be a wise choice to enhance the pre-trained model with extra knowledge [29,35]. Intuitively, the prompt feature is used as the visual token, which is concatenated with the text query (question) and the video subtitles. The [CLS] token is placed at the head of the input sequence, while [SEP] is used as the separator. After concatenation, each subtitle is segmented by its subtitle span, which is used for text span prediction. Then the embedding module is adopted for learning the textual and visual features jointly in the same vector space.
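As a rough illustration of this input layout, the prompted sequence could be assembled as follows; the exact position where the visual token is spliced into the embedded sequence is our guess, and the helper names are hypothetical.

```python
import torch


def build_prompted_inputs(tokenizer, embedding_layer, question, subtitles, highlight_vec):
    """Assemble [CLS] question [SEP] <visual prompt> [SEP] subtitle_1 [SEP] ... as embeddings.
    `highlight_vec` is the projected highlight feature (hidden size d), used as one visual token."""
    text = f"{tokenizer.cls_token} {question} {tokenizer.sep_token} " \
           + f" {tokenizer.sep_token} ".join(subtitles)
    ids = tokenizer(text, add_special_tokens=False, return_tensors="pt")["input_ids"]
    tok_emb = embedding_layer(ids)                               # (1, L, d)
    q_len = len(tokenizer(question, add_special_tokens=False)["input_ids"]) + 2   # [CLS] + question + [SEP]
    vis = highlight_vec.view(1, 1, -1)                           # (1, 1, d) visual token
    # splice the visual token right after the question segment (placement is an assumption)
    return torch.cat([tok_emb[:, :q_len], vis, tok_emb[:, q_len:]], dim=1)
```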
Text Span Predictor
The text span predictor is designed to predict the subtitle answer span corresponded to its visual answers. In this section, we first elaborate on the subtitle answer span selection algorithm for selecting the most proper subtitle answer span. Then, the subtitle span prediction is introduced for obtaining the final subtitle timeline.
3.4.1 Subtitle Answer Span Selection.
Subtitle answer span selection aims to select the text subtitle span that best matches the visual answer. As a result, we design the aligned subtitle answer span selection for further text span prediction. As shown in Algorithm 1, we use the subtitle collection of the video to locate the subtitle start and end times closest to the visual answer times (T_s, T_e). It is noted that the algorithm has its limitation, as the selected part of the subtitle may be inaccurate; a more precise subtitle timeline location may improve the final prediction performance. We leave these to future work.
3.4.2 Subtitle Span Prediction.
T_start and T_end represent the start and end time of the temporal moment that answers the question. Different from the visual span proposed by works [10,12], we formulate the span timeline prediction as finding the corresponding subtitle timestamps. This problem is formulated as SQuAD-style [44] triples (Context, Question, Answer), so that a more efficient method to locate the subtitle span can be designed. As a result, we design a text span predictor based on a pre-trained language model. The input is appended with the visual prompt, where the textual and visual tokens are learned jointly in the pre-trained model. Specifically, we use DeBERTa for feature encoding. Each token is segmented into a subtitle span and has a probability of being selected as the head or the tail. Therefore, subtitle span-based prediction can be performed by cross-entropy optimization token by token.
As shown in Figure 2, the ground truth visual timeline is (15~19). This frame timeline can be translated into the subtitle span stamps, which are located in spans 8 and 9. The predicted start index shown in Figure 2 is located in span 8, while the predicted end index is located in span 9. So the corresponding aligned subtitle stamp can be used as the final result (14.91~19.21). It is noticed that the token-level segmentation may introduce errors between the subtitle span timeline and the ground truth span timeline. As mentioned in Section 3.4.1, more precise subtitle timeline selection may bring more accuracy to the final results. Next, we will introduce the details of subtitle span prediction.
Let DeBERTa(·) be the pre-trained model, we first obtain the hidden representation with
h = DeBERTa(X) ∈ R^{d_h × |X|}    (7)
where X is the input sequence, |X| is its length, and d_h is the size of the hidden dimension. Then the hidden representation is passed to two separate dense layers followed by softmax functions:
p_1 = softmax(W_1 · h + b_1)    (8)
p_2 = softmax(W_2 · h + b_2)    (9)
where W_1, W_2 ∈ R^{d_h} and b_1, b_2 ∈ R. The softmax is applied along the dimension of the sequence. The output is a span across the positions in X, indicated by two pointers (indexes) s and e computed from p_1 and p_2:
s = arg max(p_1)    (10)
e = arg max(p_2)    (11)
where equation (10) represents the start token of the start span, while the equation (11) shows the end token of the end span.
In the end, the final visual answer span is always aligned with the predicted text span, which is presented as (T_start, T_end). The span prediction loss is optimized by minimizing the following loss:
L_2 = L_text_span    (12)
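Eqs. (7)-(12) amount to a standard extractive-QA head on top of the encoder; a hedged PyTorch sketch (assuming a Hugging Face-style encoder that accepts inputs_embeds and returns last_hidden_state) is shown below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextSpanPredictor(nn.Module):
    def __init__(self, encoder, hidden=1024):
        super().__init__()
        self.encoder = encoder                    # e.g. a DeBERTa model returning hidden states
        self.start_head = nn.Linear(hidden, 1)    # W_1, b_1 in Eq. (8)
        self.end_head = nn.Linear(hidden, 1)      # W_2, b_2 in Eq. (9)

    def forward(self, inputs_embeds, attention_mask, start_pos=None, end_pos=None):
        h = self.encoder(inputs_embeds=inputs_embeds,
                         attention_mask=attention_mask).last_hidden_state   # (B, L, d_h)
        start_logits = self.start_head(h).squeeze(-1)    # (B, L)
        end_logits = self.end_head(h).squeeze(-1)        # (B, L)
        if start_pos is not None:                        # training: token-level cross entropy (Eq. 12)
            return F.cross_entropy(start_logits, start_pos) + F.cross_entropy(end_logits, end_pos)
        return start_logits.argmax(-1), end_logits.argmax(-1)   # Eqs. (10)-(11)
```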
Training and Inference
3.5.1 Training. The total optimizing function is performed in a multi-loss form, which is presented as follows:
L = λ · L_highlight + L_text_span    (13)
where λ is the hyper-parameter for balancing the total loss, the L_highlight part provides the non-verbal information, and the loss L_text_span covers the verbal text information.
3.5.2 Inference. We simply take the visual highlight feature to prompt the pre-trained language model, aiming at covering the non-verbal information for the text span localization. The text span predictor performs prediction after encoding the text tokens and the visual token by the pre-trained language model. The predicted start token locates the start span, while the predicted end token locates the end span.
EXPERIMENTS
In this section, we first introduce the dataset used in the experiments. Then, we elaborate on the evaluation metrics and describe the compared state-of-the-art methods. Finally, we present the implementation details.
Datasets
Medical Video Question Answering (MedVidQA) datasets [6] is the first video question answering (VQA) dataset [45] constructed in natural language video localization (NLVL) [21,46] , which aims to provide medical instructional video with text question query. Three medical informatics experts were asked to formulate the medical and health-related instructional questions by watching the given video. They were required to localize the visual answer to those instructional questions by providing their timestamps in the video. The MedVidQA dataset is composed of 899 videos with 3010 questions and the corresponding visual answers. The mean duration time of these videos is 383.29 seconds. The MedVidQA provides subtitle information of the original video and visual feature information extracted from the 3D ConvNet (I3D) which was pre-trained on the Kinetics dataset [40]. We follow the official data split, where 2710, 145, and 155 questions and visual answers are used for training, validation, and testing respectively.
Evaluation Metrics
Following prior works [6,21,26,47,48], we adopt "R@n, IoU = μ" and "mIoU" as the evaluation metrics, which treat localization of the frames in the video as a span prediction task similar to answer span prediction [49,50] in text-based question answering. The "R@n, IoU = μ" denotes the percentage of language queries having at least one result whose Intersection over Union (IoU) with the ground truth is larger than μ among the top-n retrieved moments. "mIoU" is the average IoU over all testing samples. In our experiments, we use n = 1 and μ ∈ {0.3, 0.5, 0.7}. The calculation equation is shown as follows:
mIoU = (1/N) Σ_{i=1}^{N} |S_p^i ∩ S_g^i| / |S_p^i ∪ S_g^i|    (14)
where S_p^i and S_g^i represent the predicted span and the ground-truth span of the i-th testing sample, and N is the number of testing samples.
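For reference, the IoU-based metrics can be computed with a few lines of Python; this is a straightforward re-implementation, not the official evaluation script.

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) spans in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0


def evaluate(preds, gts, thresholds=(0.3, 0.5, 0.7)):
    ious = [temporal_iou(p, g) for p, g in zip(preds, gts)]
    scores = {f"R@1, IoU={t}": 100.0 * sum(i >= t for i in ious) / len(ious) for t in thresholds}
    scores["mIoU"] = 100.0 * sum(ious) / len(ious)
    return scores
```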
Comparison with State-of-the-Art Methods
We compare our VPTSL with several state-of-the-art (SOTA) methods on the MedVidQA dataset. Notably, we set the same I3D feature [40] and text feature extraction model (i.e., DeBERTa pre-trained language model) as the visual and textual feature extractor respectively for all methods to ensure fairness.
TMLGA [15] is the model with a dynamic filter, which adaptively transfers language information to visual domain attention map. A new loss function is designed to guide the model with the most relevant part of the video, and soft labels are performed to cope with annotation uncertainties.
VSLBase [26] is a standard span-based QA framework. Specifically, visual features are analogous to that of text passage, where the target moment is regarded as the answer span. The VSLBase is trained to predict the start and end times of the visual answer span.
VSLNet [26] introduces a Query-Guided Highlighting (QGH) strategy to further enhance the VSLBase model. The VSLNet regards the target moment and its adjacent contexts as foreground, while the rest as background, i.e., foreground covers a slightly longer span than the answer span.
VSLNet-L [51] incorporates the concepts from multi-paragraph question answering [52] by applying a multi-scale split-and-concatenation strategy to address the performance degradation on a long video. Long videos are segmented into multiple short clips. The hierarchical searching strategy is designed for more accurate moment localization.
ACRM [16] predicts the temporal grounding based on an interaction modeling between vision and language modalities. Specifically, the attention module is introduced to automatically assign hidden features to query text with richer semantic information, which is considered to be more important for finding relevant video content. Moreover, the additional predictor is designed for utilizing the internal frames during training to improve the localization accuracy.
RaNet [53] represent the relation-aware network, which formulates temporal language grounding in the video inspired by reading comprehension [54]. The framework of RaNet is designed to select a grounding moment from the predefined answer collections with the aid of coarse-and-fine choice-query interaction and choice-choice relation construction. The choice-query interactor is proposed to match the visual and textual information simultaneously in sentence-moment and token-moment levels, leading to a coarse-and-fine cross-modal interaction.
Implementation Details
We apply the same multimodal features for all the experiments for all the compared methods for fair comparisons. Specifically, for the textual features, we use the DeBERTa-v3 [42] as the pretrained language model, which originates from DeBERTa 1 model with 24 layers and a hidden size of 1024. It has 304M backbone parameters and a vocabulary containing 128K tokens, introducing 131M parameters in the embedding layer. For visual features, all different methods adopt the I3D features [40] as the visual input. We reproduce the compared method with the Pytorch 2 [55] on three NVIDIA A100 GPUs, where all the implementations use the hugging-face 3 [56] framework. For the re-initiated layers, we set the dimension of all the hidden layers in the model as 1024 while the kernel size of convolution layers [57] is set to 7. The head size of multi-head attention [58] is 32. The text span predictor is initialized with another DeBERTa-v3 pre-trained language model, where the subtitles are essential to the proposed method.
As for the methods adopting the visual span predictor, we also compare their performances with the timestamped subtitles in addition to the original implementations. Specifically, the original implementations use the text question as the query to match the visual answer span for the TAGV task. To make use of the timestamped subtitles in these methods, we concatenate the text question and the subtitles with the [SEP] separator, which are used to query the visual frames for cross-modal modeling. The start and end frames are obtained through the visual span predictor.
We use AdamW [59] as the optimizer, and the learning rate is set to 1e-5 with warm-up [60]. The batch size is 4. Moreover, we set the maximum input length to 1800 and delete the excess part. Linear decay of the learning rate and gradient clipping of 1e-6 are used, and the dropout [61] is set to 0.1 to prevent overfitting. The hyper-parameters of all the compared methods are tuned on the validation set. At the end of each training epoch, we test on the validation set and select the model with the highest score (mainly depending on mIoU) to predict on the test set. All the experimental implementations were repeated three times before reporting on the test set.
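The optimization setup above could be reproduced roughly as follows; model, train_loader and num_epochs are placeholders, and the warm-up proportion is an assumption since the paper does not state it.

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)     # model is a placeholder
total_steps = len(train_loader) * num_epochs                   # train_loader / num_epochs: placeholders
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),                   # warm-up ratio is an assumption
    num_training_steps=total_steps,
)

for epoch in range(num_epochs):
    for batch in train_loader:
        loss = model(**batch)                                  # λ·L_highlight + L_text_span
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
    # after each epoch: evaluate on the validation set and keep the best-mIoU checkpoint
```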
EXPERIMENTAL RESULTS
Main Results
The experimental results compared with the various SOTA methods on the MedVidQA dataset are shown in Table 1. Our method outperforms each compared method on all metrics, including IoU = 0.3, 0.5, 0.7, and mIoU scores. The text span predictor achieves better results than the methods with the visual span predictor by a large margin in the mIoU score (28.36), indicating that the textual span predictor is superior to the visual span predictor in locating the visual answer queried by the text question. The reason may be that the powerful pre-trained language model can leverage stronger semantics from the subtitles given the text question. Moreover, we also add the subtitles to each compared method adopting the visual span predictor, where the text question and the subtitles of the video are concatenated with [SEP] to obtain the text features. It can be found that the final results can be improved with the subtitle augmentation. However, the proposed VPTSL still obtains significant improvements over these modified compared methods, which demonstrates the effectiveness of the visual prompt and the text span predictor.
Ablation Studies
We first investigate the effectiveness of the pre-trained model. The results of whether the classic visual span-based methods VSLBase and VSLNet adopt the pre-trained model are shown in Table 2, where part of the results is in line with the work [6]. As we can see, only using word2vec to initialize the textual embedding layer achieves worse performance. In particular, the PLM for textual feature extraction can improve semantic understanding, which results in improvements of 0.69 and 2.11 in terms of mIoU score for VSLBase and VSLNet, respectively. It convincingly demonstrates that the pre-trained language model for textual feature extraction can improve video understanding for the visual span-based methods in this TAGV problem.
Then we study the ablation of each component of the proposed method, which is shown in Table 3. Specifically, for w/o highlight loss, we remove the highlight loss from the total loss for training. For w/o visual prompt, we use the question and timestamped subtitles to implement the text predictor for text span prediction. For w/o PLM, instead of loading pre-trained language model weights, we use a model of the same size and initialize the model parameters randomly. When there is no highlight supervision, the performance drops slightly. When the prompt module is removed, the performance is reduced by 2.4 mIoU compared to the original model. When the pre-trained language model is removed, the text span prediction ability is greatly impaired. So the performance improvement still comes from the strong understanding ability of the pre-trained model.
Figure 6: Case study of the proposed method compared with the SOTA method (text question: "How to do quad set exercises to treat pain?"), where the embedding weights are visualized to demonstrate the visual prompt's effectiveness. The red box shows the semantic differences between with and without the visual prompt, where the visual prompt can gather more attention weights from the adjacent subtitles containing verbal and non-verbal parts for better video understanding.
We also analyze the performances under different hyperparameters in the experiment. As shown in Sub- Figure 5(a), we show the performances with different values of the extend time hyperparameter on the final result. It can be found from the ablation diagram that when the extend rate is 0.25, the best result is obtained. When becomes larger, it decreases the performance of the final prediction result, which may be due to the non-visual part affecting the understanding of the input text. At the same time, when extend rate is less than 0, the text input range is insufficient, which results in bad performance.
Meanwhile, we also study the weight of the visual prompt loss for the proposed method. From Sub-Figure 5(b), it can be found that the best results are achieved when the weight is 0.1. Compared with no visual supervision, the visual information provides more contextual information for the prediction.
Case Study
We present the case study for the TAGV task in Figure 6. As we can see, the proposed VPTSL performs better than the SOTA method (RaNet), where more precise subtitle spans are predicted by the text span predictor. Moreover, we compare the performance with and without prompt tuning in the prediction. It can be further concluded that the visual prompt can bridge the non-verbal part and the verbal part, bringing improvement to the final prediction.
What's more, we also provide the visualization of the attention features in the embedding layer of the proposed method. Intuitively, these visualizations give insight into the function of the prompt. As shown in Figure 6, comparing the weights of subtitle 3 boxed in red with and without the visual prompt, it receives more attention from its adjacent subtitles, which are all in the answer span, when the visual prompt is used. This is evidence that the visual prompt features can bridge the verbal part and the non-verbal part, which results in more precise visual answer prediction.
CONCLUSION
In this paper, we proposed the visual prompt text span localizing (VPTSL) method to make full use of the complementary information of textual and visual semantic representations for temporal answering grounding in video (TAGV). To this end, we first model the cross-modal information and propose the visual prompt for enhancing the pre-trained language model. The text span predictor is designed to model the textual and visual representations for subtitle span prediction. The main results and ablation studies on the TAGV benchmark significantly demonstrate the effectiveness of VPTSL. In the future, more precise and efficient methods to perform the TAGV task with visual augmentation are yet to be explored.
Figure 2: Overview of the proposed Visual-Prompt Text Span Localization (VPTSL) method.
Figure 3: The illustration of visual highlighting.
Figure 5: Ablation study of the hyperparameters of the proposed VPTSL.
Table 1: Performance comparison of various SOTA methods on the MedVidQA dataset.

Models                               IoU=0.3          IoU=0.5          IoU=0.7          mIoU
Random Mode                          8.38             1.93             1.21             6.89
Random Guess                         7.74             3.22             0.64             5.96
VSLBase [26] (2020)  with subtitle   27.66            14.19            6.99             21.01
                     w/o subtitle    26.12            12.44            6.85             20.84
TMLGA [15] (2020)    with subtitle   26.90            15.86            9.66             20.49
                     w/o subtitle    24.83            16.55            6.21             19.80
VSLNet [26] (2020)   with subtitle   33.10            16.61            8.39             22.61
                     w/o subtitle    30.32            16.55            7.74             22.23
VSLNet-L [51] (2021) with subtitle   31.61            17.41            9.72             24.37
                     w/o subtitle    29.03            16.77            9.03             23.09
ACRM [16] (2021)     with subtitle   26.90            18.06            12.90            23.70
                     w/o subtitle    24.83            16.55            10.96            22.89
RaNet [16] (2021)    with subtitle   35.48            26.45            14.84            29.45
                     w/o subtitle    32.90            20.64            15.48            27.48
VPTSL                                77.42 (+41.94↑)  61.94 (+35.49↑)  44.52 (+29.04↑)  57.81 (+28.36↑)
Table 2: Comparison of whether using the pre-trained language model (PLM) for VSLBase and VSLNet on MedVidQA.

Method    Text Feature   IoU=0.3   IoU=0.5   IoU=0.7   mIoU
VSLBase   Word2vec       21.93     12.25     5.80      20.15
VSLBase   PLM            26.12     12.44     6.85      20.84
VSLNet    Word2vec       25.81     14.20     6.45      20.12
VSLNet    PLM            30.32     16.55     7.74      22.23

Table 3: Ablation study of each component of the proposed VPTSL on MedVidQA.

Experimental Item     IoU=0.3   IoU=0.5   IoU=0.7   mIoU
W/o Highlight Loss    76.13     60.65     43.87     57.44
W/o Visual Prompt     70.97     59.35     43.87     55.41
W/o PLM               20.65     10.97     4.52      18.83
VPTSL                 77.42     61.94     44.52     57.81
1 https://huggingface.co/microsoft/deberta-v3-large
2 https://pytorch.org
3 https://github.com/huggingface/transformers
ACKNOWLEDGEMENT This work is supported by the National Key R&D Program of China (2018YFB1305200), the National Natural Science Fund of China (62171183).
Spoken moments: Learning joint audio-visual representations from video descriptions. Mathew Monfort, Souyoung Jin, Alexander Liu, David Harwath, Rogerio Feris, James Glass, Aude Oliva, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionMathew Monfort, SouYoung Jin, Alexander Liu, David Harwath, Rogerio Feris, James Glass, and Aude Oliva. Spoken moments: Learning joint audio-visual representations from video descriptions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14871-14881, 2021.
Arvind Nitin, Singara Shelke, Singh Kasana, A comprehensive survey on passive techniques for digital video forgery detection. Multimedia Tools and Applications. 80Nitin Arvind Shelke and Singara Singh Kasana. A comprehensive survey on passive techniques for digital video forgery detection. Multimedia Tools and Applications, 80(4):6247-6310, 2021.
Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. Salman Khan, Muzammal Naseer, Munawar Hayat, ACM Computing Surveys. 2021Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fa- had Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. ACM Computing Surveys (CSUR), 2021.
Hierarchical modeling for task recognition and action segmentation in weakly-labeled instructional videos. Reza Ghoddoosian, Saif Sayed, Vassilis Athitsos, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. the IEEE/CVF Winter Conference on Applications of Computer VisionReza Ghoddoosian, Saif Sayed, and Vassilis Athitsos. Hierarchical modeling for task recognition and action segmentation in weakly-labeled instructional videos. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1922-1932, 2022.
Overview of the MedVidQA 2022 Shared Task on Medical Video Question Answering. Deepak Gupta, Dina Demner-Fushman, Proceedings of the 21st. the 21stDeepak Gupta and Dina Demner-Fushman. Overview of the MedVidQA 2022 Shared Task on Medical Video Question Answering. In Proceedings of the 21st
SIGBioMed Workshop on Biomedical Language Processing, ACL-BioNLP 2022. Association for Computational Linguistics. SIGBioMed Workshop on Biomedical Language Processing, ACL-BioNLP 2022. As- sociation for Computational Linguistics, 2022.
Deepak Gupta, Kush Attal, Dina Demner-Fushman, arXiv:2201.12888A Dataset for Medical Instructional Video Classification and Question Answering. arXiv preprintDeepak Gupta, Kush Attal, and Dina Demner-Fushman. A Dataset for Medi- cal Instructional Video Classification and Question Answering. arXiv preprint arXiv:2201.12888, 2022.
A survey of temporal activity localization via language in untrimmed videos. Yulan Yang, Zhaohui Li, Gangyan Zeng, 2020 International Conference on Culture-oriented Science & Technology (ICCST). IEEEYulan Yang, Zhaohui Li, and Gangyan Zeng. A survey of temporal activity localization via language in untrimmed videos. In 2020 International Conference on Culture-oriented Science & Technology (ICCST), pages 596-601. IEEE, 2020.
The elements of temporal sentence grounding in videos: A survey and future directions. Hao Zhang, Aixin Sun, Wei Jing, Joey Tianyi Zhou, arXiv:2201.08071arXiv preprintHao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. The elements of temporal sentence grounding in videos: A survey and future directions. arXiv preprint arXiv:2201.08071, 2022.
Temporally grounding natural sentence in video. Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, Tat-Seng Chua, Proceedings of the 2018 conference on empirical methods in natural language processing. the 2018 conference on empirical methods in natural language processingJingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. Temporally grounding natural sentence in video. In Proceedings of the 2018 conference on empirical methods in natural language processing, pages 162-171, 2018.
Span-based localizing network for natural language video localization. Hao Zhang, Aixin Sun, Wei Jing, Joey Tianyi Zhou, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsHao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Span-based localizing network for natural language video localization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6543-6554, 2020.
Natural language video localization with learnable moment proposals. Shaoning Xiao, Long Chen, Jian Shao, Yueting Zhuang, Jun Xiao, arXiv:2109.10678arXiv preprintShaoning Xiao, Long Chen, Jian Shao, Yueting Zhuang, and Jun Xiao. Natural language video localization with learnable moment proposals. arXiv preprint arXiv:2109.10678, 2021.
Natural language video localization: A revisit in span-based question answering framework. Hao Zhang, Aixin Sun, Wei Jing, Liangli Zhen, Joey Tianyi Zhou, Rick Siow Mong Goh, IEEE Transactions on Pattern Analysis and Machine Intelligence. Hao Zhang, Aixin Sun, Wei Jing, Liangli Zhen, Joey Tianyi Zhou, and Rick Siow Mong Goh. Natural language video localization: A revisit in span-based question answering framework. IEEE Transactions on Pattern Analysis and Ma- chine Intelligence, 2021.
Frame-wise cross-modal matching for video moment retrieval. H Tang, J Zhu, M Liu, Z Gao, Z Cheng, IEEE Transactions on Multimedia. H. Tang, J. Zhu, M. Liu, Z. Gao, and Z. Cheng. Frame-wise cross-modal matching for video moment retrieval. IEEE Transactions on Multimedia, pages 1-1, 2021.
Tvqa: Localized, compositional video question answering. Jie Lei, Licheng Yu, Mohit Bansal, Tamara Berg, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingJie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. Tvqa: Localized, compo- sitional video question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1369-1379, 2018.
Proposal-free temporal moment localization of a natural-language query in video using guided attention. Cristian Rodríguez-Opazo, Edison Marrese-Taylor, Fatemeh Sadat Saleh, Hongdong Li, Stephen Gould, Winter Conference on Applications of Computer VisionCristian Rodríguez-Opazo, Edison Marrese-Taylor, Fatemeh Sadat Saleh, Hong- dong Li, and Stephen Gould. Proposal-free temporal moment localization of a natural-language query in video using guided attention. Winter Conference on Applications of Computer Vision, 2020.
Frame-wise cross-modal matching for video moment retrieval. Haoyu Tang, Jihua Zhu, Meng Liu, Zan Gao, Zhiyong Cheng, IEEE Transactions on Multimedia. Haoyu Tang, Jihua Zhu, Meng Liu, Zan Gao, and Zhiyong Cheng. Frame-wise cross-modal matching for video moment retrieval. IEEE Transactions on Multi- media, pages 1-1, 2021.
Transformers in computational visual media: A survey. Yifan Xu, Huapeng Wei, Minxuan Lin, Yingying Deng, Kekai Sheng, Mengdan Zhang, Fan Tang, Weiming Dong, Feiyue Huang, Changsheng Xu, Computational Visual Media. 81Yifan Xu, Huapeng Wei, Minxuan Lin, Yingying Deng, Kekai Sheng, Mengdan Zhang, Fan Tang, Weiming Dong, Feiyue Huang, and Changsheng Xu. Trans- formers in computational visual media: A survey. Computational Visual Media, 8(1):33-62, 2022.
A survey of temporal activity localization via language in untrimmed videos. Y Yang, Z Li, G Zeng, 2020 International Conference on Culture-oriented Science Technology (ICCST). Y. Yang, Z. Li, and G. Zeng. A survey of temporal activity localization via language in untrimmed videos. In 2020 International Conference on Culture-oriented Science Technology (ICCST), pages 596-601, 2020.
Localizing moments in video with natural language. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionLisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pages 5803-5812, 2017.
Tall: Temporal activity localization via language query. Jiyang Gao, Chen Sun, Zhenheng Yang, Ram Nevatia, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionJiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pages 5267-5275, 2017.
Attentive moment retrieval in videos. Meng Liu, Xiang Wang, Liqiang Nie, Xiangnan He, Baoquan Chen, Tat-Seng Chua, International ACM SIGIR Conference on Research and Development in Information Retrieval. Meng Liu, Xiang Wang, Liqiang Nie, Xiangnan He, Baoquan Chen, and Tat- Seng Chua. Attentive moment retrieval in videos. In International ACM SIGIR Conference on Research and Development in Information Retrieval, 2018.
Temporally grounding natural sentence in video. Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, Tat-Seng Chua, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsJingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. Temporally grounding natural sentence in video. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 162-171, Brussels, Belgium, 2018. Association for Computational Linguistics.
Mac: Mining activity concepts for language-based temporal localization. Runzhou Ge, Jiyang Gao, Kan Chen, Ram Nevatia, 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEERunzhou Ge, Jiyang Gao, Kan Chen, and Ram Nevatia. Mac: Mining activity concepts for language-based temporal localization. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 245-253. IEEE, 2019.
Man: Moment alignment network for natural language moment retrieval via iterative graph adjustment. Da Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang, Larry S Davis, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionDa Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang, and Larry S Davis. Man: Moment alignment network for natural language moment retrieval via iterative graph adjustment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1247-1257, 2019.
Jointly cross-and self-modal graph attention network for query-based moment localization. Daizong Liu, Xiaoye Qu, Xiao-Yang Liu, Jianfeng Dong, Pan Zhou, Zichuan Xu, Proceedings of the 28th ACM International Conference on Multimedia. the 28th ACM International Conference on MultimediaDaizong Liu, Xiaoye Qu, Xiao-Yang Liu, Jianfeng Dong, Pan Zhou, and Zichuan Xu. Jointly cross-and self-modal graph attention network for query-based mo- ment localization. In Proceedings of the 28th ACM International Conference on Multimedia, pages 4070-4078, 2020.
Hao Zhang, Aixin Sun, Wei Jing, Joey Tianyi Zhou, Span-based localizing network for natural language video localization. arXiv: Computation and Language. Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Span-based localiz- ing network for natural language video localization. arXiv: Computation and Language, 2020.
Fine-grained iterative attention network for temporal language localization in videos. Xiaoye Qu, Pengwei Tang, Zhikang Zou, Yu Cheng, Jianfeng Dong, Pan Zhou, Zichuan Xu, Proceedings of the 28th ACM International Conference on Multimedia, MM '20. the 28th ACM International Conference on Multimedia, MM '20New York, NY, USAAssociation for Computing MachineryXiaoye Qu, Pengwei Tang, Zhikang Zou, Yu Cheng, Jianfeng Dong, Pan Zhou, and Zichuan Xu. Fine-grained iterative attention network for temporal language localization in videos. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, page 4280-4288, New York, NY, USA, 2020. Association for Computing Machinery.
Strong: Spatio-temporal reinforcement learning for cross-modal video moment localization. Da Cao, Yawen Zeng, Meng Liu, Xiangnan He, Meng Wang, Zheng Qin, Proceedings of the 28th ACM International Conference on Multimedia, MM '20. the 28th ACM International Conference on Multimedia, MM '20New York, NY, USAAssociation for Computing MachineryDa Cao, Yawen Zeng, Meng Liu, Xiangnan He, Meng Wang, and Zheng Qin. Strong: Spatio-temporal reinforcement learning for cross-modal video moment localization. In Proceedings of the 28th ACM International Conference on Mul- timedia, MM '20, page 4162-4170, New York, NY, USA, 2020. Association for Computing Machinery.
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig, arXiv:2107.13586arXiv preprintPengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586, 2021.
Bert: Pretraining of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre- training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, 2019.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, OpenAI blog. 189Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Prompt programming for large language models: Beyond the few-shot paradigm. Laria Reynolds, Kyle Mcdonell, Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-7, 2021.
Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. Colin Wei, Sang Michael Xie, Tengyu Ma, Advances in Neural Information Processing Systems. 342021Colin Wei, Sang Michael Xie, and Tengyu Ma. Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. Advances in Neural Information Processing Systems, 34, 2021.
The power of scale for parameterefficient prompt tuning. Brian Lester, Rami Al-Rfou, Noah Constant, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingBrian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter- efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, 2021.
Cpt: Colorful prompt tuning for pre-trained vision-language models. Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, arXiv:2109.11797Tat-Seng Chua, and Maosong Sun. arXiv preprintYuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. Cpt: Colorful prompt tuning for pre-trained vision-language models. arXiv preprint arXiv:2109.11797, 2021.
Denseclip: Language-guided dense prediction with context-aware prompting. Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu, arXiv:2112.01518arXiv preprintYongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, and Jiwen Lu. Denseclip: Language-guided dense prediction with context-aware prompting. arXiv preprint arXiv:2112.01518, 2021.
Learning to prompt for vision-language models. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu, arXiv:2109.01134arXiv preprintKaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. arXiv preprint arXiv:2109.01134, 2021.
Xiao Liu, Yansong Da Yin, Dongyan Feng, Zhao, arXiv:2203.08075Things not written in text: Exploring spatial commonsense from visual signals. arXiv preprintXiao Liu, Da Yin, Yansong Feng, and Dongyan Zhao. Things not written in text: Ex- ploring spatial commonsense from visual signals. arXiv preprint arXiv:2203.08075, 2022.
Conditional prompt learning for vision-language models. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu, arXiv:2203.05557arXiv preprintKaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. arXiv preprint arXiv:2203.05557, 2022.
Quo vadis, action recognition? a new model and the kinetics dataset. Joao Carreira, Andrew Zisserman, Computer Vision and Pattern Recognition. Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Computer Vision and Pattern Recognition, 2017.
End-to-end audio-visual speech recognition with conformers. Pingchuan Ma, Stavros Petridis, Maja Pantic, ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEEPingchuan Ma, Stavros Petridis, and Maja Pantic. End-to-end audio-visual speech recognition with conformers. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7613-7617. IEEE, 2021.
Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing. Pengcheng He, Jianfeng Gao, Weizhu Chen, Pengcheng He, Jianfeng Gao, and Weizhu Chen. Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing, 2021.
Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine trans- lation by jointly learning to align and translate. In International Conference on Learning Representations, 2015.
Squad: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, 2016.
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao, Vinvl: Revisiting visual representations in vision-language models. arXiv: Computer Vision and Pattern Recognition. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. arXiv: Computer Vision and Pattern Recognition, 2021.
Localizing moments in video with natural language. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell, International Conference on Computer Vision. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In International Conference on Computer Vision, 2017.
Tall: Temporal activity localization via language query. Jiyang Gao, Chen Sun, Zhenheng Yang, Ram Nevatia, International Conference on Computer Vision. Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In International Conference on Computer Vision, 2017.
To find where you talk: Temporal sentence localization in video with attention based location regression. arXiv: Computer Vision and Pattern Recognition. Yitian Yuan, Tao Mei, Wenwu Zhu, Yitian Yuan, Tao Mei, and Wenwu Zhu. To find where you talk: Temporal sentence localization in video with attention based location regression. arXiv: Computer Vision and Pattern Recognition, 2018.
Gated self-matching networks for reading comprehension and question answering. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, Ming Zhou, Meeting of the Association for Computational Linguistics. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. Gated self-matching networks for reading comprehension and question answering. In Meeting of the Association for Computational Linguistics, 2017.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi, Bidirectional attention flow for machine comprehension. arXiv: Computation and Language. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidi- rectional attention flow for machine comprehension. arXiv: Computation and Language, 2016.
Natural language video localization: A revisit in span-based question answering framework. Hao Zhang, Aixin Sun, Wei Jing, Liangli Zhen, Joey Tianyi Zhou, Rick Siow Mong Goh, IEEE Transactions on Pattern Analysis and Machine Intelligence. Hao Zhang, Aixin Sun, Wei Jing, Liangli Zhen, Joey Tianyi Zhou, and Rick Siow Mong Goh. Natural language video localization: A revisit in span-based question answering framework. IEEE Transactions on Pattern Analysis and Ma- chine Intelligence, pages 1-1, 2021.
Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. Wei Wang, Ming Yan, Chen Wu, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Wei Wang, Ming Yan, and Chen Wu. Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1705-1714, 2018.
Relationaware video reading comprehension for temporal language grounding. Jialin Gao, Xin Sun, Mengmeng Xu, Xi Zhou, Bernard Ghanem, arXiv:2110.05717arXiv preprintJialin Gao, Xin Sun, Mengmeng Xu, Xi Zhou, and Bernard Ghanem. Relation- aware video reading comprehension for temporal language grounding. arXiv preprint arXiv:2110.05717, 2021.
Learning to ask: Neural question generation for reading comprehension. Xinya Du, Junru Shao, Claire Cardie, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Xinya Du, Junru Shao, and Claire Cardie. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342-1352, 2017.
Pytorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary Devito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala, Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. GarnettCurran Associates, Inc32Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gre- gory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander M Lhoest, Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnlineAssociation for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement De- langue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics.
Convolutional neural networks for sentence classification. Yoon Kim, Empirical Methods in Natural Language Processing. Yoon Kim. Convolutional neural networks for sentence classification. In Empirical Methods in Natural Language Processing, 2014.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17. the 31st International Conference on Neural Information Processing Systems, NIPS'17Red Hook, NY, USACurran Associates IncAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA, 2017. Curran Associates Inc.
Decoupled weight decay regularization. Ilya Loshchilov, Frank Hutter, International Conference on Learning Representations. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018.
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
Dropout: a simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, Journal of Machine Learning Research. 15Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Rus- lan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
| [
"https://github.com/huggingface/transformers"
] |
[
"Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning",
"Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning"
] | [
"Zezhong Wang zzwang@se.cuhk.edu.hk \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Hongru Wang \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Wai Kwan \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Jia Chung \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Gabriel Zhu \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Cheong Pui \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Kam-Fai Fung \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n",
"Wong kfwong@se.cuhk.edu.hk \nDepartment of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n\n"
] | [
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n",
"Department of Systems Engineering and Engineering Management\nThe Chinese University of Hong Kong\n"
] | [] | Few-shot slot tagging is an emerging research topic in the field of Natural Language Understanding (NLU). With sufficient annotated data from source domains, the key challenge is how to train and adapt the model to another target domain which only has few labels. Conventional few-shot approaches use all the data from the source domains without considering inter-domain relations and implicitly assume each sample in the domain contributes equally. However, our experiments show that the data distribution bias among different domains will significantly affect the adaption performance. Moreover, transferring knowledge from dissimilar domains will even introduce some extra noises so that affect the performance of models. To tackle this problem, we propose an effective similarity-based method to select data from the source domains. In addition, we propose a Shared-Private Network (SP-Net) for the few-shot slot tagging task. The words from the same class would have some shared features. We extract those shared features from the limited annotated data on the target domain and merge them together as the label embedding to help us predict other unlabelled data on the target domain. The experiment shows that our method outperforms the state-of-the-art approaches with fewer source data. The result also proves that some training data from dissimilar sources are redundant and even negative for the adaption. | null | [
"https://arxiv.org/pdf/2109.05234v1.pdf"
] | 237,491,880 | 2109.05234 | 6d66b4d3513ac018d3d6bdaee1e616160599d5fa |
Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning
Zezhong Wang zzwang@se.cuhk.edu.hk
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Hongru Wang
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Wai Kwan
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Jia Chung
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Gabriel Zhu
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Cheong Pui
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Kam-Fai Fung
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Wong kfwong@se.cuhk.edu.hk
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
Prior Omission of Dissimilar Source Domain(s) for Cost-Effective Few-Shot Learning
Few-shot slot tagging is an emerging research topic in the field of Natural Language Understanding (NLU). With sufficient annotated data from source domains, the key challenge is how to train and adapt the model to another target domain which only has few labels. Conventional few-shot approaches use all the data from the source domains without considering inter-domain relations and implicitly assume that each sample in a domain contributes equally. However, our experiments show that the data distribution bias among different domains significantly affects the adaptation performance. Moreover, transferring knowledge from dissimilar domains even introduces extra noise that degrades the performance of models. To tackle this problem, we propose an effective similarity-based method to select data from the source domains. In addition, we propose a Shared-Private Network (SP-Net) for the few-shot slot tagging task. Words from the same class share some common features. We extract those shared features from the limited annotated data on the target domain and merge them together as the label embedding to help us predict other unlabelled data on the target domain. The experiments show that our method outperforms the state-of-the-art approaches with fewer source data. The results also prove that some training data from dissimilar sources are redundant and even harmful for the adaptation.
Introduction
Slot tagging (Tur and De Mori 2011), one of the crucial problems in Natural Language Understanding (NLU), aims to recognize pre-defined semantic slots from sentences and is usually regarded as a sequence labeling problem (Sarikaya et al. 2016). For example, given the sentence "Book a ticket to London", the word "London" should be recognized as the slot "CITY" by the NLU model.
Currently, most methods for the slot tagging task have a notorious limitation: they require a lot of annotated data. However, there are almost infinitely many long-tail domains in real scenarios (Zhu, Anguelov, and Ramanan 2014), so it is nearly impossible to annotate sufficient data for each domain. Therefore, few-shot learning methods (Ravi and Larochelle 2016) have received attention as they can transfer the knowledge learned from existing domains to new domains quickly with limited data.
1 Equal contributions.
Figure 1: The difference between training with (a) all data and (b) data selection. The dashed line represents the distance among different domains in the parameter space with the centroid (Φ). With data selection, we remove the dissimilar domains D_4 and D_5 from training, and the centroid becomes closer to the target domain D'.
Current works (Yoon, Seo, and Moon 2019; Liu et al. 2020; Wang et al. 2021) proposed various methods to improve the performance of few-shot slot tagging, but most of them focus on "how" to transfer rather than "what" should be transferred. Knowledge from irrelevant source domains can hardly help the model identify the slots in the new domain. Further, such knowledge is redundant and can sometimes be regarded as noise that even deteriorates the performance. We observe this phenomenon and prove the existence of negative transfer in the experiments. To this end, we propose a similarity-based method to evaluate the inter-domain relation and indicate which domains should be selected for training. Specifically, we calculate three different similarities, including target vocabulary covered (TVC), TF-IDF similarity (TIS), and label overlap (LO), between domains and combine them with different weights. The combined similarity function selects data at both the corpus level and the label level, which is more comprehensive. In this way, dissimilar sources are rejected and the initial parameters of the model will naturally be closer to the local optimum of the target domain. A high-level intuition of the difference between training with all data and training with data selection is shown in Figure 1.
After selecting proper data, we also propose a solution for "how" to transfer knowledge for the few-shot slot tagging task. Specifically, we build a Shared-Private Network to capture stable label representations under the few-shot setting. Many works (Hou et al. 2020; Zhu et al. 2020; Liu et al. 2020) try to enhance the accuracy of slot identification through label representation engineering. They assign each label a semantic vector (Snell, Swersky, and Zemel 2017; Hou et al. 2020; Zhu et al. 2020; Yoon, Seo, and Moon 2019) rather than a simple one-hot encoding. However, the quality of the label representations highly depends on the volume of training samples and is unstable under the few-shot setting due to the extremely biased data distribution. Hence, we propose the Shared-Private Network to separate shared features and private features from the limited samples. Words with the same label share common information, which is extracted and saved as shared features. The other parts are regarded as detailed information related to the individual words and are saved as private features. After filtering the detailed information out, the label representation generated from the shared features is more robust against the annotation shortage in the few-shot setting.
The contributions of this work are as follows:
• We propose a similarity-based method to measure the relation among domains to guide data selection and to avoid negative knowledge transfer in few-shot learning.
• We propose the Shared-Private Network to extract more stable label representations with limited annotations.
• We prove the existence of negative transfer via experiments and give explanations about this phenomenon via visualization.
Related Work
Conventional studies in slot tagging mainly focus on proposing and utilizing deep neural networks to recognize the semantic slots in given contexts (Shi et al. 2016; Kim, Lee, and Sarikaya 2017). However, most of these models need a large amount of annotated data, which is quite scarce in the real world, especially for minority domains. Recent works (Bapna et al. 2017; Shah et al. 2019; Rastogi et al. 2019; Liu et al. 2020) propose several few-shot learning methods for slot tagging and develop domain-specific models with limited annotated data. Hou et al. (2020) introduced a collapsed dependency transfer mechanism into the conditional random field (CRF) and proposed the Label-enhanced Task-Adaptive Projection Network (L-TapNet), which builds a strong few-shot baseline for slot tagging. Based on the work of Hou et al. (2020), Zhu et al. (2020) then introduced a vector projection network for few-shot slot tagging. It is worth noting that, due to the lack of annotation on the target domain, both approaches paid attention to label representation engineering rather than using conventional one-hot encoding directly. But building label representations with limited annotations is still a challenge. To stabilize the effectiveness of the label representation, we propose a Shared-Private network to learn representations from the shared information of words.
Besides that, negative transfer, i.e., that transferring knowledge from the source can have a negative impact on the target, has been found in many tasks (Chen et al. 2019; Gui et al. 2018). Because of this phenomenon, methods for analyzing the relation between source and target domains have been proposed recently. Gururangan et al. (2020) use vocabulary overlap as the similarity between two datasets and emphasize the significant impact of domain-adaptive pre-training. Dai et al. (2019) study different similarity methods, including target vocabulary covered (TVC), language model perplexity (PPL), and word vector variance (WVV), to select data for pre-training tasks. However, a single similarity function does not work well in the few-shot setting. Different similarity methods always give diverse data selection strategies and are hardly consistent. To this end, we propose a comprehensive indicator that combines three similarity functions to guide data selection in the few-shot setting.
Problem Definition
We follow the same task definition as Hou et al. (2020). Given a sentence x = (x_1, x_2, ..., x_n) as a sequence of words, the slot tagging task aims to assign the corresponding label sequence y = (y_1, y_2, ..., y_n) to indicate which classes the words belong to. A domain D = {(x^(i), y^(i))}_{i=1}^{N_D} is a set of (x, y) pairs from the same scenario, where N_D is the number of sentences in domain D.
In the few-shot setting, models are trained on source domains {D_1, D_2, ...} and are applied to target domains {D'_1, D'_2, ...} which are new to the models. It is worth noting that there are only a few labeled samples, which make up the support set S = {(x^(i), y^(i))}_{i=1}^{N_S}, in each target domain D'_j. For each of the N unique labels (N-way) in the support set S, there are K annotated samples (K-shot). Besides that, the remaining samples in the target domain D'_j are unlabeled.
Thus, the few-shot slot tagging task is defined as follows: given a K-shot support set S and a query sentence x = (x_1, x_2, ..., x_n), determine the corresponding label sequence y*:

y* = (y*_1, y*_2, ..., y*_n) = argmax_y p(y | x, S)    (1)
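To make the setting concrete, the snippet below sketches one possible layout of a 1-shot episode. The sentences and slot names are invented for illustration and are not taken from any real dataset.

# A hypothetical 1-shot support set for a "weather" target domain (BIO labels).
support_set = [
    (["will", "it", "rain", "in", "london", "tomorrow"],
     ["O", "O", "B-condition", "O", "B-city", "B-date"]),
]
# An unlabeled query sentence from the same domain; the model must tag it
# using only the support set above, e.g. ["O", "O", "B-condition", "O", "B-city"].
query_sentence = ["is", "it", "sunny", "in", "paris"]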
Data Selection
In this section, we first show the existence of negative knowledge transfer among domains, a phenomenon that demonstrates the necessity of data selection. Then we introduce our similarity-based data selection strategy that can be used to avoid negative knowledge transfer and improve performance in few-shot slot tagging.
Negative Knowledge Transfer
Due to negative knowledge transfer, some knowledge the model learned before is useless and may affect the judgment of the model on new domains, which will degrade the performance. In a preliminary study, we train the model with all different combinations of source domains and record their performance. The relation between the number of source domains and the corresponding performance is shown in Figure 2. Overall, with more training domains, the performance tends to be better. However, comparing the maximum values, it is obvious that training with 3 source domains outperforms training with 4. This phenomenon indicates that more source domains may even decrease the performance and proves the existence of negative knowledge transfer. It also suggests that the model can achieve a better result with proper data selection.
Selection Strategy
To avoid negative knowledge transfer, an indicator is needed to select data or source domains before training. Given a group of data from the source domains and the data of the target domain, the indicator should output a score which reflects how suitable these source data are for transferring knowledge to the target. Ideally, the indicator score behaves linearly with the performance, so that a higher indicator score leads to better performance. In this way, the group of source data with the highest indicator score can be selected as the best choice for training. The data that can be leveraged include the source domains {D_1, ..., D_M} with sufficient labels, the support set S_j with labels in the target domain D'_j, and the query set Q_j without labels. Notice that the data in the support set S_j are much fewer than in the query set Q_j. Considering the attributes mentioned above and the data we can use, we investigate three similarity functions as indicators for data selection.
Target Vocabulary Covered (TVC) is a significant corpus level feature that represents the overlap of vocabulary between source domain(s) and a target domain and is defined as:
TVC(D_i, D'_j) = |V_{D_i} ∩ V_{D'_j}| / |V_{D'_j}|    (2)
where V_{D_i} and V_{D'_j} are the vocabularies (sets of unique tokens) of the source domain D_i and the target domain D'_j respectively, and |·| denotes the size of a set. Intuitively, if most of the words in the target domain have already appeared in the sources, the word embeddings should have been well trained, which improves the performance. TF-IDF Similarity (TIS) is another corpus-level feature (Bao et al. 2020). We treat each domain as a document and calculate its tf-idf features (Salton and Buckley 1988; Wu et al. 2008). Cosine similarity is used to evaluate the correlation between the sources and the target. Compared with TVC, TIS assigns each word a weight according to the term frequency and inverse document frequency, which takes fine-grained corpus features into account. The details are shown below:
tf_{i,j} = n_{i,j} / Σ_k n_{k,j}    (3)
where n_{i,j} is the number of times word t_i appears in domain D_j.
idf_i = lg( M / |{j : t_i ∈ D_j}| )    (4)
where M is the total number of domains. The tf-idf feature is the product of tf and idf:

tf-idf_{i,j} = tf_{i,j} · idf_i    (5)

Figure 2: The relationship between performance (y-axis), specifically the F1 score, and the number of source domains (x-axis).
tf-idf j can be regarded as the word distribution feature of the domain j and cosine similarity is used to evaluate the correlation between two domains:
TIS(D_i, D'_j) = (tfidf_{D_i} · tfidf_{D'_j}) / (‖tfidf_{D_i}‖_2 · ‖tfidf_{D'_j}‖_2)    (6)
where ‖·‖_2 is the Euclidean norm. Label Overlap (LO) is a label-level feature that represents the overlap of labels between the source domains and the target domain. Although labels are quite scarce in the target domain under the few-shot setting, the types of labels are not. Every label in the target domain appears at least K times (K-shot) in the support set S, so the set of label types is complete. Hence, label overlap is also a good choice as a data selection indicator:
LO(Y_i, Y'_j) = |Y_i ∩ Y'_j| / |Y'_j|    (7)
where Y_i and Y'_j stand for the unique label sets of the source domain D_i and the target domain D'_j, respectively. Each similarity function only focuses on a single aspect, i.e., the corpus-level or the label-level information. Therefore, it inevitably introduces bias when we select data with a single one of them. Naturally, we come up with a strategy that combines all three similarity scores as the indicator to give more stable guidance for data selection. Assume that one of the combinations, i.e., C_{θ1,θ2,θ3}(TVC_i, TIS_i, LO_i) = θ_1 TVC_i + θ_2 TIS_i + θ_3 LO_i, is linear with the performance; our goal is to find the best values of θ_1, θ_2, and θ_3. For a better reading experience, C_{θ1,θ2,θ3}(TVC_i, TIS_i, LO_i) is abbreviated to C_i.
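A minimal sketch of the three similarity functions and the combined score is given below. It follows Eqs. (2)-(7) directly, treating lg as the base-10 logarithm; the helper names are our own and the code is illustrative rather than the authors' implementation.

from collections import Counter
from math import log10, sqrt

def tvc(source_tokens, target_tokens):
    # Eq. (2): fraction of the target vocabulary covered by the source vocabulary.
    vs, vt = set(source_tokens), set(target_tokens)
    return len(vs & vt) / len(vt)

def tfidf_vectors(domains):
    # domains: list of token lists, one per domain (Eqs. 3-5).
    m = len(domains)
    df = Counter()
    for toks in domains:
        df.update(set(toks))
    vocab = sorted(df)
    idf = {t: log10(m / df[t]) for t in vocab}
    vecs = []
    for toks in domains:
        counts = Counter(toks)
        total = sum(counts.values())
        vecs.append([counts[t] / total * idf[t] for t in vocab])
    return vecs

def cosine(u, v):
    # Eq. (6): cosine similarity between two tf-idf vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def label_overlap(source_labels, target_labels):
    # Eq. (7): fraction of target label types that also appear in the source.
    ys, yt = set(source_labels), set(target_labels)
    return len(ys & yt) / len(yt)

def combined_score(tvc_score, tis_score, lo_score, thetas):
    # C = theta1*TVC + theta2*TIS + theta3*LO.
    return thetas[0] * tvc_score + thetas[1] * tis_score + thetas[2] * lo_score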
Following the least squares method (Merriman 1877), we design the objective function as follows:
argmin_{θ1,θ2,θ3,w,b}  (1/N_E) Σ_{i=1}^{N_E} ( [w C_i + b] − p̂_i )^2    s.t.  w > 0,  b ≥ 0    (8)
where w and b are respectively the weight and bias of the linear function used to simulate the linear relation between the indicator score and the performance, N_E is the number of experiments, and p̂_i is the true performance of experiment i. TVC_i, TIS_i, and LO_i are the TVC, TIS, and LO scores between the source domains and the target domain in experiment i.

Algorithm 1 (excerpt):
5: D_training ← Merge(combination)
6: TVC = TVC(D_training, D')
7: TIS = TIS(D_training, D')
8: LO = LO(D_training, D')
9: train F(D_training) until the loss converges
10: p̂_i = eval(F(D'))
11: end for
12: end for
To solve the problem in equation (8), we design a scheme to generate samples with combinations of source domains. In general, we pre-define the number of source domains and enumerate all combinations. The three similarity scores between each combination of source domains and the target domain are calculated and recorded. Then we train the model with the combination and record the final performance on the target domain. In this way, we get sufficient tuples (TVC, TIS, LO, p̂) to figure out the optimum θ_1, θ_2, and θ_3 (see Algorithm 1).
With sufficient samples, we fit them with the linear function in equation (8) and optimize w, b, θ_1, θ_2, and θ_3 via SGD (Curry 1944). Due to the data distribution bias of different domains, we finally assign a different w_j and b_j to each target domain D'_j to acquire a better linear relation. The combination weights θ_1, θ_2, and θ_3 are kept the same for different target domains. Further, we still have the following points to declare:
• The parameters w and b are learnable but not necessary for data selection. They are not a part of the indicator and are only used to observe the linear relation between the combination similarity scores and the corresponding performance.
• Due to the cross-validation setting in the real dataset (e.g., SNIPS), to avoid data leakage of the target domain, we obtain θ_1, θ_2, and θ_3 according to the validation domain for each target. The combination from the validation domain still works well on the target, which demonstrates the generality of this strategy.
• Although training with combinations of source domains is time-consuming, once the optimum combination weights have been found, they can be adapted to different domains.
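The fitting of equation (8) described above can be sketched as follows. This is our own PyTorch illustration; in particular, enforcing w > 0 and b ≥ 0 with softplus is an assumption, since the paper does not specify how the constraints are handled.

import torch

def fit_combination_weights(tvc_scores, tis_scores, lo_scores, perf, steps=2000, lr=0.01):
    # tvc_scores, tis_scores, lo_scores, perf: 1-D float tensors, one entry per experiment.
    feats = torch.stack([tvc_scores, tis_scores, lo_scores], dim=1)   # (N_E, 3)
    theta = torch.zeros(3, requires_grad=True)
    w_raw = torch.zeros(1, requires_grad=True)
    b_raw = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([theta, w_raw, b_raw], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        w = torch.nn.functional.softplus(w_raw)        # keeps w > 0 (our choice)
        b = torch.nn.functional.softplus(b_raw)        # keeps b >= 0 (our choice)
        combined = feats @ theta                       # C_i for every experiment
        loss = ((w * combined + b - perf) ** 2).mean() # least-squares objective of Eq. (8)
        loss.backward()
        opt.step()
    w = torch.nn.functional.softplus(w_raw)
    b = torch.nn.functional.softplus(b_raw)
    return theta.detach(), w.item(), b.item()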
After that, we can select domains according to the optimum w*, b*, θ*_1, θ*_2, and θ*_3. The domains which achieve a higher combined similarity score are expected to lead to better performance, and this can be formulated as:

argmax_i  w* (θ*_1 TVC_i + θ*_2 TIS_i + θ*_3 LO_i) + b*    (9)

And since w* > 0, equation (9) is equivalent to:

argmax_i  θ*_1 TVC_i + θ*_2 TIS_i + θ*_3 LO_i    (10)

In this way, the domain-specific w and b are eliminated.
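Putting the pieces together, the selection of equation (10) can be sketched as follows, reusing the similarity helpers from the earlier sketch. Note that, for brevity, the tf-idf vectors here are computed over only the merged source and the target rather than over all domains as in equation (4).

from itertools import combinations

def select_sources(source_domains, target, thetas, k):
    # source_domains: dict mapping domain name -> (tokens, labels).
    # target: (tokens, labels) built from the target-domain support set.
    # thetas: the learned (theta1*, theta2*, theta3*). Returns the best k-domain combination.
    best_combo, best_score = None, float("-inf")
    for combo in combinations(source_domains, k):
        tokens = [t for name in combo for t in source_domains[name][0]]
        labels = [l for name in combo for l in source_domains[name][1]]
        tvc_score = tvc(tokens, target[0])
        tis_score = cosine(*tfidf_vectors([tokens, target[0]]))
        lo_score = label_overlap(labels, target[1])
        score = combined_score(tvc_score, tis_score, lo_score, thetas)
        if score > best_score:
            best_combo, best_score = combo, score
    return best_combo, best_score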
Shared-Private Network
Based on the Prototypical Network (Snell, Swersky, and Zemel 2017), we propose the Shared-Private Network (SP-Net) to obtain more representative label embeddings. The workflow is divided into two stages. In the first stage, SP-Net extracts label embeddings for each class from the support set. In the second stage, SP-Net makes predictions on each query sentence according to the label embeddings extracted in stage one. Figure 3 illustrates this process.
(a) Encode Firstly, sentences are encoded into word embeddings via BERT (Devlin et al. 2019). Given a sentence x = (x_1, x_2, ..., x_n) as a sequence of words, BERT generates the corresponding contextual word embeddings E = (E_1, E_2, ..., E_n), where E_i ∈ R^h and h is the hidden size of the word embeddings.
(b) Extract shared features Although the words are different, there is common information among the words from the same class. Intuitively, words in the same class always appear in similar contexts with similar syntax. In some cases, they can even be replaced with each other without any grammatical mistakes. For example, even though we replace the phrase "Hong Kong" with "New York" in Figure 3, the sentence still makes sense. Common information can help us generate scalable label embeddings that can represent most of the words in a class. The shared layer in the framework is designed for this. In this work, we simply implement the shared layer with a residual linear function, and the shared feature of a word is calculated as follows:

E^s_i = E_i + RELU(E_i W_s + b_s)    (11)

where W_s ∈ R^{h×h} and b_s ∈ R^h are the weight and bias of the shared layer, respectively, and RELU is the rectified linear unit function (Maas, Hannun, and Ng 2013).
(c) Extract private features Besides the shared information, each word still has its own specific information. Recall the phrase-replacing case mentioned in Figure 3: although the sentence contains no grammatical mistakes after the replacement, its meaning has changed. This is due to the private information carried by the word. The private information is ineffective and can be harmful to label embeddings as it lacks generality. Less private information leads to better-quality label embeddings, and therefore the private layer is designed to extract the private information from the word embeddings. The private layer is also implemented with a residual linear function, and the private feature of a word is calculated as follows:

E^p_i = E_i + RELU(E_i W_p + b_p)    (12)

where W_p ∈ R^{h×h} and b_p ∈ R^h are the weight and bias of the private layer, respectively. So far, the shared layer and the private layer are symmetrical and share the same design.
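A minimal PyTorch sketch of the two residual layers in Eqs. (11)-(12) is given below; the module and variable names are ours.

import torch
import torch.nn as nn

class SharedPrivateLayers(nn.Module):
    # Residual linear projections of Eqs. (11) and (12).
    def __init__(self, hidden_size):
        super().__init__()
        self.shared = nn.Linear(hidden_size, hidden_size)
        self.private = nn.Linear(hidden_size, hidden_size)

    def forward(self, word_embeddings):
        # word_embeddings: (num_words, hidden) contextual embeddings from BERT.
        shared = word_embeddings + torch.relu(self.shared(word_embeddings))
        private = word_embeddings + torch.relu(self.private(word_embeddings))
        return shared, private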
(d) Orthogonality constraint To ensure the shared features and private features are separated completely, we introduce the following constraints:
• The shared features of the words in the same class should be close to each other.
• The private features of words should be diverse even though they belong to the same class.
• The shared feature and the private feature of a word should not overlap.
For the first requirement, Chen et al. (2020) proposed a contrastive loss that pulls samples of the same class together and pushes samples of different classes apart. The similarity between samples is defined as:
$$\mathrm{sim}(E_i^s, E_j^s) = \frac{E_i^s \cdot E_j^s}{\lVert E_i^s \rVert \, \lVert E_j^s \rVert} \quad (13)$$
The loss in the first requirement is defined as:
$$L_1 = \mathbb{E}_c\left[-\log \frac{\sum_{\{i;\, y_i=c\}} \sum_{\{j;\, y_j=c\}} \exp(\mathrm{sim}(E_i^s, E_j^s)/\tau)}{\sum_{\{i;\, i\in S\}} \sum_{\{j;\, j\in S\}} \exp(\mathrm{sim}(E_i^s, E_j^s)/\tau)}\right] \quad (14)$$
where τ is the temperature parameter and c is the class. The numerator is the sum of the similarity scores of pairs whose class is c, and the denominator is the sum of all similarity scores. When embeddings in the same class exhibit high similarity, the numerator is large and the loss decreases.
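A minimal PyTorch sketch of the class-level contrastive objective in Equation (14) is given below; it assumes the shared embeddings and class labels of all support-set words are available as tensors, and the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(shared: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """Class-level contrastive loss over shared features, following Eq. (14).

    shared: (n, h) shared embeddings of all words in the support set
    labels: (n,) class index of each word
    """
    normed = F.normalize(shared, dim=-1)
    sim = normed @ normed.t()                  # pairwise cosine similarities (Eq. 13)
    exp_sim = torch.exp(sim / tau)
    denom = exp_sim.sum()                      # all pairs in the support set
    losses = []
    for c in labels.unique():
        mask = (labels == c).float()
        numer = (exp_sim * mask.unsqueeze(0) * mask.unsqueeze(1)).sum()  # same-class pairs
        losses.append(-torch.log(numer / denom))
    return torch.stack(losses).mean()          # expectation over classes

# Illustrative usage.
loss1 = contrastive_loss(torch.randn(10, 768), torch.randint(0, 3, (10,)))
```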
For the second requirement, following the covariance of two variables, we define the divergence between two embeddings as:
$$D(E_i^p, E_j^p) = (E_i^p - \bar{E}^p)^{\top}(E_j^p - \bar{E}^p) \quad (15)$$
where $\bar{E}^p$ is the mean vector of all private embeddings in the set. The loss for the second requirement is:
$$L_2 = -\frac{1}{|S|^2} \sum_{i \in S} \sum_{j \in S} \log D(E_i^p, E_j^p) \quad (16)$$
where |S| is the size of the support set, i.e., the number of words. Higher divergence among the private embeddings leads to a lower loss. We also apply an L2-norm penalty to restrain the growth of the parameters. The third requirement refines the shared features further: we introduce the orthogonality constraints (Liu, Qiu, and Huang 2017) to force the shared embedding to be independent of the private embedding:
$$L_3 = \frac{1}{|S|} \sum_{i \in S} \left\lVert E_i^{s\,\top} E_i^p \right\rVert_2^2 \quad (17)$$
where $\lVert \cdot \rVert_2$ is the Euclidean norm.
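The second and third constraints can be sketched as follows. Clamping the divergence before taking the logarithm is our own practical guard, since the quantity in Equation (15) is not guaranteed to be positive; it is an assumption rather than part of the original formulation.

```python
import torch

def divergence_loss(private: torch.Tensor):
    """Diversity loss over private features (Eqs. 15-16); private: (n, h)."""
    centered = private - private.mean(dim=0, keepdim=True)
    div = centered @ centered.t()                 # pairwise covariance-style divergence
    # log of a non-positive value is undefined, so we clamp as a practical guard
    return -torch.log(div.clamp(min=1e-6)).mean()

def orthogonality_loss(shared: torch.Tensor, private: torch.Tensor):
    """Keep shared and private features of each word from overlapping (Eq. 17)."""
    return (shared * private).sum(dim=-1).pow(2).mean()

# Illustrative usage.
E_s, E_p = torch.randn(10, 768), torch.randn(10, 768)
loss2 = divergence_loss(E_p)
loss3 = orthogonality_loss(E_s, E_p)
```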
(e) Extract label embeddings Label embeddings are extracted from shared embeddings for each class. We take the mean vector of the shared embeddings which belong to class c as the label embedding:
$$E_c = \frac{1}{|\{y_i = c\}|} \sum_{\{y_i = c\}} E_i^s \quad (18)$$
where E c is the label embedding of the class c.
(f) Predict We calculate the similarity between the shared embeddings of the query sentence and the label embeddings. We provide various options; here we take cosine similarity as an example:
$$p_i^c = \frac{E_i^s \cdot E_c}{\lVert E_i^s \rVert \, \lVert E_c \rVert} \quad (19)$$
where $p_i^c$ is the similarity between word i and class c, which can also be regarded as the confidence that the word belongs to this class. The class with the highest similarity is taken as the prediction for the word. We use the binary cross-entropy loss to measure the error in each class:
$$L_4 = -\frac{1}{|Q|} \sum_i^{|Q|} \sum_c^{C} y_i \log p_i^c + (1 - y_i)\log(1 - p_i^c) \quad (20)$$
where C is the number of unique labels in the query set and |Q| is the number of words in the query set. Finally, we combine L_1, L_2, L_3, and L_4 with different weights as the cost function:
$$L = \alpha L_1 + \beta L_2 + \gamma L_3 + \delta L_4 \quad (21)$$
where α, β, γ, and δ are hyperparameters determined by the experiments.
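Putting the prediction stage together, the sketch below extracts label embeddings (Equation 18), scores query words by cosine similarity (Equation 19), and combines the four losses (Equations 20-21). Clamping the cosine scores into (0, 1) before the binary cross-entropy is our own simplification; the loss weights are the values reported later in the training details.

```python
import torch
import torch.nn.functional as F

def label_embeddings(shared: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Mean shared embedding per class (Eq. 18); assumes every class occurs in the support set."""
    return torch.stack([shared[labels == c].mean(dim=0) for c in range(num_classes)])

def predict(query_shared: torch.Tensor, protos: torch.Tensor):
    """Cosine similarity between query words and label embeddings (Eq. 19)."""
    q = F.normalize(query_shared, dim=-1)
    p = F.normalize(protos, dim=-1)
    return q @ p.t()  # (num_query_words, num_classes)

def total_loss(l1, l2, l3, scores, query_labels, alpha=0.2, beta=0.1, gamma=0.2, delta=0.5):
    """Weighted combination of the four losses (Eqs. 20-21)."""
    targets = F.one_hot(query_labels, num_classes=scores.size(-1)).float()
    probs = scores.clamp(1e-6, 1 - 1e-6)      # cosine scores used as per-class confidences
    l4 = F.binary_cross_entropy(probs, targets)
    return alpha * l1 + beta * l2 + gamma * l3 + delta * l4
```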
Experiments
Dataset
We evaluate the proposed method following the same experimental setting provided by Hou et al. (2020) on SNIPS (Coucke et al. 2018) and the NER dataset (Zhu et al. 2020). SNIPS contains 7 domains, including Weather (We), Music (Mu), PlayList (Pl), Book (Bo), Search Screen (Se), Restaurant (Re), and Creative Work (Cr), and its sentences are annotated with token-level BIO labels for slot tagging. Each domain is tested in turn following a cross-validation strategy: in each turn, 5 domains are used for training and 1 for evaluation. In each domain, the data are split into 100 episodes (Ren et al. 2018). For the sake of fair comparison, the selection of the evaluation domain and the episode construction are kept the same as in Hou et al. (2020). The NER dataset contains 4 domains: News, Wiki, Social, and Mixed. In addition, because the number of domains in the NER dataset is small, we randomly split domains into pieces and select those pieces via the combined similarity function. More training details can be found in the appendix.
Baselines
SimBERT assigns a label to each word according to the cosine similarity of word embeddings from a fixed BERT. For each word x_i, SimBERT finds the most similar word x_k in the support set and assigns the label of x_k to x_i. TransferBERT directly transfers knowledge from the source domain to the target domain by parameter sharing. L-TapNet+CDT+PWE (Hou et al. 2020) is a strong baseline for few-shot slot tagging that combines label name representations with a special CRF framework. L-ProtoNet+CDT+VPB (Zhu et al. 2020) investigates different distance functions and utilizes the powerful VPB distance function to boost the performance of the model. SP-Net is the model proposed in this work, which utilizes the shared-private layers to capture common features and generate a more stable label representation. SP-Net + Domain Selection is SP-Net trained on the data selected according to the data selection strategy we proposed.
Main Results
Table 1 shows the 1-shot and 5-shot results on the SNIPS dataset. Generally speaking, SP-Net achieves the best performance in the 1-shot setting and comparable performance in the 5-shot setting (0.14% adrift of SOTA). The data selection strategy greatly enhances performance in both the 1-shot and 5-shot settings; with data selection, the performance of SP-Net is far beyond the other baselines.
The results on the NER dataset also prove the effectiveness of our method (see Table 2). Note that, due to the shortage of data, the combined similarity selects all data on most domains except Wiki in the 5-shot task; therefore, the results of SP-Origin and SP-Domain Selection are nearly the same. The effect of the Shared-Private Network is more remarkable when the number of support samples is smaller. SP-Net outperforms all baselines in the 1-shot setting, while in the 5-shot setting it achieves comparable performance. The Shared-Private Network essentially corrects the bias between the label embedding and the center of the class. This bias is more serious when the support set is smaller; as the number of supports increases, the bias can be suppressed to some extent (see Figure 4). Some other methods, such as label descriptions (Hou et al. 2020), can also correct this kind of bias if enough supports are given. But when supports are extremely scarce, the Shared-Private Network performs best.
Analysis
We further visualize the relation between the performance and the similarity function, and compare the combined similarity with TVC in Figure 5. We first sample some combinations of source domains and train the model. Then we calculate their similarity with the target domain and record the performance. From the left part of Figure 5, the performance generally has a positive correlation with TVC. However, its precision is poor, so it cannot be used as an indicator on its own. Points around the green line have similar TVC scores, but their performance is quite diverse, i.e., the performance of the green points ranges from 20% to 70%. A similar conclusion can be drawn from the horizontal direction: blue points around the blue line have similar performance, but their TVC scores range from 36% to 87%. Therefore, data selection with TVC suffers from serious performance fluctuation. By comparison, there is an apparent positive linear correlation between the combined similarity and the performance on the target domain (see the right part of Figure 5).
In order to prove the advantage of the combined similarity function, we compare it with its components TVC, TIS, and LO. The result is shown in Figure 6. The performance of our combined similarity function (the green line) outperforms the others on both 1-shot and 5-shot. Besides that, the LO similarity (blue line) performs equally well across different test domains, which makes it more stable than TVC and TIS. By contrast, the performance of TVC and TIS has huge variance across test domains: sometimes they surpass LO and sometimes their performance is even lower than 20%. This is because the three similarity functions have their own pros and cons, and their combination is more effective and stable (see the Appendix for more analysis of inter-domain relations).
Figure 5: The relation between performance (y-axis) and the similarity function (x-axis). Different target domains are shown in different colors.
Conclusions and Future Work
In this paper, we demonstrate the existence of negative knowledge transfer in few-shot learning and propose a similarity-based method to select proper data before training. We also propose the Shared-Private Network (SP-Net) for the few-shot slot tagging task. We show the effectiveness and advantages of both the data selection method and SP-Net through experiments. In the future, we will investigate the relations among domains and improve our data selection method to select episodes or samples rather than whole domains. We will also analyze and explain SP-Net from the latent-space perspective to figure out what exactly it corrects in the label embeddings.
We further study the inter-domain relations, which give strong evidence for the importance of data selection. We rely on a key assumption in this part: if a source domain and a target domain have a strong relation, then (1) removing the source domain from training will decrease the performance on the target domain, or (2) training with the single source domain will yield better performance than training with an unrelated domain. Following these two assumptions, we conduct two experiments: (1) for every test domain, remove each domain from the 5 training domains in turn, train SP-Net, and record the performance; (2) for every test domain, select each domain from the 5 training domains in turn, train SP-Net, and record the performance. Figure 1 shows the results, from which we draw two findings. First, the choice of source domains has a significant influence on the final performance. For example, in Figure 1 (a), if the source domain mu is removed from training, a 35.06% performance decrease is observed on the target (test) domain se. By comparison, for the same test domain se, removing the domain pl causes only a 1.91% decrease, which is much slighter. Similarly, in Figure 1 (d), training only with the domain re achieves 50.29% on the target domain we, whereas training only with the domain cr achieves just 11.49%. Different source domains thus bring huge variance in performance, which shows the need for data selection. Second, some negative values appear in Figure 1 (a) and (c), which means that after removing a domain the performance actually improves. For instance, in Figure 1 (c), removing the domain se leads to a 0.83% increase (a -0.83% decrease). This phenomenon gives another strong piece of evidence of negative knowledge transfer.
Training Details
Hyperparameters The BERT in SP-Net is the pre-trained uncased BERT-Base (?). We use ADAM (?) to train the model with a learning rate of 2e-5 and a weight decay of 5e-5, and we set VPB (?) as the similarity function for prediction. For the weights assigned to each loss, we set α, β, γ, and δ to 0.2, 0.1, 0.2, and 0.5, respectively. The hyperparameters mentioned above are derived from the best implementation in our experiments. To limit the impact of randomness, we run each experiment 10 times with different random seeds and report the average results.
Data Selection Due to cross-validation, each domain is used in turn as a test domain, so a domain used for training may be used for testing in the next turn. Therefore, if we set a single group of global similarity combination weights θ1, θ2, and θ3 according to all experimental results, it would lead to test data leakage, which would be unfair for the comparison. To this end, we set θ1, θ2, and θ3 separately for each test domain. θ1, θ2, and θ3 are obtained by minimizing Equation (8) on the training domains and the evaluation domain. In addition, if θ1, θ2, and θ3 obtained from the evaluation domain work well on the test domain, this demonstrates the generality of the data selection method. In practice, the combination weights only need to be calculated once. In this work, we set a domain as the minimum selection unit: if a domain is selected for training, all episodes in this domain are selected. The domain selection follows Equation (10).
Figure 3: This is the workflow of SP-Net. In this case, the support set contains 2 sentences, and the query set contains 1. The details of the processes (a) encode, (b) extract shared features, (c) extract private features, (d) orthogonality constraint, (e) extract label embeddings, and (f) predict are introduced in the main body.
Figure 4: This diagram shows the automatic correction of the distribution bias as the number of supports increases. Circles are samples in the support set, and triangles are the inferred centers (i.e., label embeddings) according to the supports. Stars are the true centers of the classes.
Figure 6: The performance of training with domains selected by the 4 similarity functions.
Figure 1: The heat maps show the inter-domain relations. The y-axis is the target domain and the x-axis is the source domain. Panels (a) and (b) show the results of the 1-shot setting; panels (c) and (d) show the results of the 5-shot setting. Panels (a) and (c) illustrate the performance decrease on the target domain when a source domain is removed from training. Panels (b) and (d) illustrate the performance on the target domain when the model is trained with a single source domain.
Algorithm 1: Training with combinations of source domains
Require: Set of source domains {D1, ..., DM}; Target domain D; Model F
1: for 1 ≤ i ≤ M do
2:   all_combinations = combination({D1, ..., DM}, i)  // Select i domain(s) from M for training
3:   for 1 ≤ j ≤ |all_combinations| − 1 do
4:     combination = all_combinations[j]  // e.g. combination = [D1, D3]
5:     ...
Table 2: F1 scores of few-shot slot tagging on the NER dataset.
Appendix
Inter-domain relations
References
Bao, Y.; Wu, M.; Chang, S.; and Barzilay, R. 2020. Few-shot Text Classification with Distributional Signatures. arXiv:1908.06039.
Bapna, A.; Tur, G.; Hakkani-Tur, D.; and Heck, L. 2017. Towards Zero-Shot Frame Semantic Parsing for Domain Scaling. arXiv:1707.02363.
Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A Simple Framework for Contrastive Learning of Visual Representations. arXiv:2002.05709.
Chen, X.; Wang, S.; Fu, B.; Long, M.; and Wang, J. 2019. Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning.
Coucke, A.; Saade, A.; Ball, A.; Bluche, T.; Caulier, A.; Leroy, D.; Doumouro, C.; Gisselbrecht, T.; Caltagirone, F.; Lavril, T.; Primet, M.; and Dureau, J. 2018. Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces. arXiv:1805.10190.
Curry, H. B. 1944. The method of steepest descent for nonlinear minimization problems. Quarterly of Applied Mathematics, 2(3): 258-261.
Dai, X.; Karimi, S.; Hachey, B.; and Paris, C. 2019. Using Similarity Measures to Select Pretraining Data for NER. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 1460-1470. Minneapolis, Minnesota: Association for Computational Linguistics.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805.
Gui, L.; Xu, R.; Lu, Q.; Du, J.; and Zhou, Y. 2018. Negative transfer detection in transductive transfer learning. International Journal of Machine Learning and Cybernetics, 9(2): 185-197.
Gururangan, S.; Marasović, A.; Swayamdipta, S.; Lo, K.; Beltagy, I.; Downey, D.; and Smith, N. A. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. arXiv:2004.10964.
Hou, Y.; Che, W.; Lai, Y.; Zhou, Z.; Liu, Y.; Liu, H.; and Liu, T. 2020. Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network. arXiv:2006.05702.
Kim, Y.-B.; Lee, S.; and Sarikaya, R. 2017. Speaker-sensitive dual memory networks for multi-turn slot tagging. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 541-546. IEEE.
Liu, P.; Qiu, X.; and Huang, X. 2017. Adversarial Multi-task Learning for Text Classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1-10. Vancouver, Canada: Association for Computational Linguistics.
Liu, Z.; Winata, G. I.; Xu, P.; and Fung, P. 2020. Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling. arXiv:2004.11727.
Maas, A. L.; Hannun, A. Y.; and Ng, A. Y. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 3. Citeseer.
Merriman, M. 1877. A List of Writings Relating to the Method of Least Squares: With Historical and Critical Notes, volume 4. Academy.
Rastogi, A.; Zang, X.; Sunkara, S.; Gupta, R.; and Khaitan, P. 2019. Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset. arXiv:1909.05855.
Ravi, S.; and Larochelle, H. 2016. Optimization as a model for few-shot learning.
Ren, M.; Triantafillou, E.; Ravi, S.; Snell, J.; Swersky, K.; Tenenbaum, J. B.; Larochelle, H.; and Zemel, R. S. 2018. Meta-learning for semi-supervised few-shot classification. arXiv:1803.00676.
Salton, G.; and Buckley, C. 1988. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5): 513-523.
Sarikaya, R.; Crook, P. A.; Marin, A.; Jeong, M.; Robichaud, J.-P.; Celikyilmaz, A.; Kim, Y.-B.; Rochette, A.; Khan, O. Z.; Liu, X.; et al. 2016. An overview of end-to-end language understanding and dialog management for personal digital assistants. In 2016 IEEE Spoken Language Technology Workshop (SLT), 391-397. IEEE.
Shah, D. J.; Gupta, R.; Fayazi, A. A.; and Hakkani-Tur, D. 2019. Robust Zero-Shot Cross-Domain Slot Filling with Example Values. arXiv:1906.06870.
Shi, Y.; Yao, K.; Chen, H.; Yu, D.; Pan, Y.-C.; and Hwang, M.-Y. 2016. Recurrent support vector machines for slot tagging in spoken language understanding. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 393-399.
Snell, J.; Swersky, K.; and Zemel, R. S. 2017. Prototypical networks for few-shot learning. arXiv:1703.05175.
Tur, G.; and De Mori, R. 2011. Spoken language understanding: Systems for extracting semantic information from speech. John Wiley & Sons.
Wang, H.; Wang, Z.; Fung, G. P. C.; and Wong, K.-F. 2021. MCML: A Novel Memory-based Contrastive Meta-Learning Method for Few Shot Slot Tagging. arXiv:2108.11635.
Wang, Z.; Dai, Z.; Póczos, B.; and Carbonell, J. 2019. Characterizing and Avoiding Negative Transfer. arXiv:1811.09751.
Wu, H. C.; Luk, R. W. P.; Wong, K. F.; and Kwok, K. L. 2008. Interpreting tf-idf term weights as making relevance decisions. ACM Transactions on Information Systems (TOIS), 26(3): 1-37.
Yoon, S. W.; Seo, J.; and Moon, J. 2019. TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning. In Chaudhuri, K.; and Salakhutdinov, R., eds., Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, 7115-7123. PMLR.
Zhu, S.; Cao, R.; Chen, L.; and Yu, K. 2020. Vector Projection Network for Few-shot Slot Tagging in Natural Language Understanding. arXiv:2009.09568.
Zhu, X.; Anguelov, D.; and Ramanan, D. 2014. Capturing Long-Tail Distributions of Object Subcategories. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, 915-922.
| [] |
[
"Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks",
"Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks"
] | [
"Denis Emelin d.emelin@sms.ed.ac.uk \nUniversity of Edinburgh\nScotland\n",
"Ivan Titov ititov@inf.ed.ac.uksennrich@cl.uzh.ch \nUniversity of Edinburgh\nScotland\n\nUniversity of Amsterdam\nNetherlands\n",
"Rico Sennrich \nUniversity of Edinburgh\nScotland\n\nUniversity of Zurich\nSwitzerland\n"
] | [
"University of Edinburgh\nScotland",
"University of Edinburgh\nScotland",
"University of Amsterdam\nNetherlands",
"University of Edinburgh\nScotland",
"University of Zurich\nSwitzerland"
] | [] | Word sense disambiguation is a well-known source of translation errors in NMT. We posit that some of the incorrect disambiguation choices are due to models' over-reliance on dataset artifacts found in training data, specifically superficial word co-occurrences, rather than a deeper understanding of the source text. We introduce a method for the prediction of disambiguation errors based on statistical data properties, demonstrating its effectiveness across several domains and model types. Moreover, we develop a simple adversarial attack strategy that minimally perturbs sentences in order to elicit disambiguation errors to further probe the robustness of translation models. Our findings indicate that disambiguation robustness varies substantially between domains and that different models trained on the same data are vulnerable to different attacks. 1 . 2020. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1-41. | 10.18653/v1/2020.emnlp-main.616 | [
"https://arxiv.org/pdf/2011.01846v1.pdf"
] | 226,237,429 | 2011.01846 | d1ae3f76832bc165d13da5d4e025ab7218fc7d8a |
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks
Denis Emelin d.emelin@sms.ed.ac.uk
University of Edinburgh
Scotland
Ivan Titov ititov@inf.ed.ac.uksennrich@cl.uzh.ch
University of Edinburgh
Scotland
University of Amsterdam
Netherlands
Rico Sennrich
University of Edinburgh
Scotland
University of Zurich
Switzerland
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks
Word sense disambiguation is a well-known source of translation errors in NMT. We posit that some of the incorrect disambiguation choices are due to models' over-reliance on dataset artifacts found in training data, specifically superficial word co-occurrences, rather than a deeper understanding of the source text. We introduce a method for the prediction of disambiguation errors based on statistical data properties, demonstrating its effectiveness across several domains and model types. Moreover, we develop a simple adversarial attack strategy that minimally perturbs sentences in order to elicit disambiguation errors to further probe the robustness of translation models. Our findings indicate that disambiguation robustness varies substantially between domains and that different models trained on the same data are vulnerable to different attacks. 1 . 2020. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1-41.
Introduction
Consider the sentence John met his wife in the hot spring of 1988. In this context, the polysemous term spring unambiguously refers to the season of a specific year. Its appropriate translation into German would therefore be Frühling (the season), rather than one of its alternative senses, such as Quelle (the source of a stream). To contemporary machine translation systems, however, this sentence presents a non-trivial challenge, with Google Translate (GT) producing the following translation: John traf seine Frau in der heißen Quelle von 1988.
Prior studies have indicated that neural machine translation (NMT) models rely heavily on source sentence information when resolving lexical ambiguity (Tang et al., 2019). This suggests that the combined source contexts in which a specific sense of an ambiguous term occurs in the training data greatly inform the models' disambiguation decisions. Thus, a stronger correlation between the English collocation hot spring and the German translation Quelle, as opposed to Frühling, in the training corpus may explain this disambiguation error. Indeed, John met his wife in the spring of 1988 is translated correctly by GT.
We propose that our motivating example is representative of a systematic pathology NMT systems have yet to overcome when performing word sense disambiguation (WSD). Specifically, we hypothesize that translation models learn to disproportionately rely on lexical correlations observed in the training data when resolving word sense ambiguity. As a result, disambiguation errors are likely to arise when an ambiguous word co-occurs with words that are strongly correlated in the training corpus with a sense that differs from the reference.
To test our hypothesis, we evaluate whether dataset artifacts are predictive of disambiguation decisions made in NMT. First, given an ambiguous term, we define a strategy for quantifying how much its context biases NMT models towards its different target senses, based on statistical patterns in the training data. We validate our approach by examining correlations between this bias measure and WSD errors made by baseline models. Furthermore, we investigate whether such biases can be exploited for the generation of minimally-perturbed adversarial samples that trigger disambiguation errors. Our method does not require access to gradient information nor the score distribution of the decoder, generates samples that do not significantly diverge from the training domain, and comes with a clearly-defined notion of attack success and failure.
The main contributions of this study are:
1. We present evidence for the over-reliance of NMT systems on inappropriate lexical correlations when translating polysemous words.
2. We propose a method for quantifying WSD biases that can predict disambiguation errors.
3. We leverage data artifacts for the creation of adversarial samples that facilitate WSD errors.
2 Can WSD errors be predicted?
To evaluate whether WSD errors can be effectively predicted, we first propose a method for measuring the bias of sentence contexts towards different senses of polysemous words, based on lexical cooccurrence statistics of the training distribution. We restrict our investigation to English→German, although the presented findings can be assumed to be language-agnostic. To bolster the robustness of our results, we conduct experiments in two domains -movie subtitles characterized by casual language use, and the more formal news domain. For the former, we use the OpenSubtitles2018 (OS18) (Lison et al., 2019) corpus 2 , whereas the latter is represented by data made available for the news translation task of the Fourth Conference on Machine Translation (WMT19) 3 (Barrault et al., 2019). Appendix A.1 reports detailed corpus statistics.
Quantifying disambiguation biases
An evaluation of cross-lingual WSD errors presupposes the availability of certain resources, including a list of ambiguous words, a lexicon containing their possible translations, and a set of parallel sentences serving as a disambiguation benchmark.
Resource collection
Since lexical ambiguity is a pervasive feature of natural language, we limit our study to homographs -polysemous words that share their written form but have multiple, unrelated meanings. We further restrict the set of English homographs to nouns that are translated as distinct German nouns, so as to confidently identify disambiguation errors, while minimizing the models' ability to disambiguate based on syntactic cues. English homographs are collected from web resources 4 , excluding those that do not satisfy the above criteria. Refer to appendix A.2 for the full homograph list. We next compile a parallel lexicon of homograph translations, prioritizing a high coverage of all possible senses. Similar to (Raganato et al., 2019), we obtain sense-specific translations from crosslingual BabelNet (Navigli and Ponzetto, 2010) synsets. Since BabelNet entries vary in their granularity, we iteratively merge related synsets as long as they have at least three German translations in common or share at least one definition. 5 This leaves us with multiple sense clusters of semantically related German translations per homograph. To further improve the quality of the lexicon, we manually clean and extend each homograph entry to address the noise inherent in BabelNet and its incomplete coverage. 6 Appendix A.7 provides examples of the final sense clusters.
In order to identify sentence contexts specific to each homograph sense, parallel sentences containing known homographs are extracted from the training corpora in both domains. We lemmatize homographs, their senses, and all sentence pairs using spaCy (Honnibal and Montani, 2017) to improve the extraction recall. Homographs are further required to be aligned with their target senses according to alignments learned with fast align (Dyer et al., 2013). Each extracted pair is assigned to one homograph sense cluster based on its reference homograph translation. Pairs containing homograph senses assigned to multiple clusters are ignored, as disambiguation errors are impossible to detect in such cases.
Bias measures
It can be reasonably assumed that context words cooccurring with homographs in a corpus of natural text are more strongly associated with some of their senses than others. Words that are strongly correlated with a specific sense may therefore bias models towards the corresponding translation at test time. We refer to any source word that co-occurs with a homograph as an attractor associated with the sense cluster of the homograph's translation. Similarly, we denote the degree of an attractor's association with a sense cluster as its disambiguation bias towards that cluster. Table 1 lists the most frequent attractors identified for the different senses of the homograph spring in the OS18 training set.
[Table 1: most frequent attractors identified for the sense clusters of the homograph "spring" in the OS18 training set; column headers are season, water source, and device, with attractors including summer, hot, like, winter, water, back, come, find, and thing.]
Intuitively, if an NMT model disproportionately relies on simple surface-level correlations when resolving lexical ambiguity, it is more likely to make WSD errors when translating sentences that contain strong attractors towards a wrong sense. To test this, we collect attractors from the extracted parallel sentences, quantifying their disambiguation bias (DB) using two metrics: raw co-occurrence frequency (FREQ) and positive point-wise mutual information (PPMI) between attractors and homograph senses. FREQ is defined in Eqn.1, while Eqn.2 describes PPMI, with w ∈ V denoting an attractor term in the source vocabulary 7 , and sc ∈ SC denoting a sense cluster in the set of sense clusters assigned to a homograph. For PPMI, P(w_i, sc_j), P(w_i), and P(sc_j) are estimated via relative frequencies of (co-)occurrences in training pairs.
$$\mathrm{FREQ}(w_i, sc_j) = \mathrm{Count}(w_i, sc_j) \quad (1)$$
$$\mathrm{PPMI}(w_i, sc_j) = \max\left(\log\frac{P(w_i, sc_j)}{P(w_i)\,P(sc_j)},\; 0\right) \quad (2)$$
The disambiguation bias associated with the entire context of a homograph is obtained by averaging sense-specific bias values DB(w i , sc j ) of all attractors in the source sentence S = {w 1 , w 2 , ..., w |S| }, as formalized in Eqn.3. Context words that are not known attractors of sc j are assigned a disambiguation bias value of 0.
$$DB(S, sc_j) = \frac{1}{|S|}\sum_{i=1}^{|S|} DB(w_i, sc_j) \quad (3)$$
As a result, sentences containing a greater number of strong attractors are assigned a higher bias score.
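As an illustration of how these statistics can be gathered, the sketch below computes FREQ, PPMI, and the sentence-level bias of Equation (3) from extracted (source tokens, sense cluster) pairs. It assumes PPMI in its standard log-ratio form and is not the authors' released code.

```python
import math
from collections import Counter

def collect_counts(pairs):
    """pairs: iterable of (source_tokens, sense_cluster) extracted from the training data."""
    cooc, word_counts, cluster_counts = Counter(), Counter(), Counter()
    for tokens, cluster in pairs:
        cluster_counts[cluster] += 1
        for w in set(tokens):
            cooc[(w, cluster)] += 1   # FREQ(w, sc), Eq. (1)
            word_counts[w] += 1
    return cooc, word_counts, cluster_counts

def ppmi(word, cluster, cooc, word_counts, cluster_counts, total_pairs):
    """Positive PMI between an attractor and a sense cluster (Eq. 2)."""
    if cooc[(word, cluster)] == 0:
        return 0.0
    p_wc = cooc[(word, cluster)] / total_pairs
    p_w = word_counts[word] / total_pairs
    p_c = cluster_counts[cluster] / total_pairs
    return max(math.log(p_wc / (p_w * p_c)), 0.0)

def sentence_bias(tokens, cluster, bias_fn):
    """Average attractor bias of a source sentence towards one sense cluster (Eq. 3)."""
    return sum(bias_fn(w, cluster) for w in tokens) / len(tokens)
```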
Probing NMT models
To evaluate the extent to which sentence-level disambiguation bias is predictive of WSD errors made by NMT systems, we train baseline translation models for both domains. Test sets for WSD error prediction are constructed by extracting parallel sentences from heldout, development, and test data (see appendix A.1 for details). The process is identical to that described in section 2.1, with the added exclusion of source sentences shorter than 10 tokens, as they may not provide enough context. For each source sentence, disambiguation bias values are computed according to equation 3, with sc j corresponding to either the correct sense cluster (DB ) or the incorrect sense cluster with the strongest bias (DB ). Additionally, we consider the difference DB DIFF between DB and DB which can be interpreted as the overall statistical bias in a source sentence towards an incorrect homograph translation. All bias scores are computed either using FREQ or PPMI.
We examine correlations between the proposed bias measures and WSD errors produced by the in-domain baseline models. Translations are considered to contain WSD errors if the target homograph sense does not belong to the same sense cluster as its reference translation. We check this by looking up target words aligned with source homographs according to fast align. To estimate correlation strength we employ the ranked biserial correlation (RBC) metric 8 (Cureton, 1956) and measure statistical significance using the Mann-Whitney U (MWU) test (Mann and Whitney, 1947).
In order to compute the RBC values, test sentences are divided into two groups -one containing correctly translated source sentences and another comprised of source sentences with incorrect homograph translations. Next, all possible pairs are constructed between the two groups, pairing together each source sentence from one group with all source sentences from the other. Finally, the proportion of pairs f where the DB score of the incorrectly translated sentence is greater than that of the correctly translated sentence is computed, as well as the proportion of pairs u where the opposite relation holds. The RBC value is then obtained according to Eqn.4.
$$\mathrm{RBC} = f - u \quad (4)$$
Statistical significance, on the other hand, is estimated by ranking all sentences in the test set according to their DB score in ascending order while resolving ties, and computing the U-value according to Eqns. 5-7, where R_1 denotes the sum of ranks of sentences with incorrectly translated homographs and n_1 their total count, while R_2 denotes the sum of ranks of correctly translated sentences and n_2 their respective total count.
$$U = \min(U_1, U_2) \quad (5)$$
$$U_1 = R_1 - \frac{n_1(n_1+1)}{2} \quad (6)$$
$$U_2 = R_2 - \frac{n_2(n_2+1)}{2} \quad (7)$$
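For concreteness, the rank-biserial correlation of Equation (4) and the MWU significance test can be sketched as follows. The paper reports using the pingouin library; this sketch substitutes SciPy's mannwhitneyu, which applies tie correction and the normal approximation internally, purely for brevity.

```python
from scipy.stats import mannwhitneyu

def rank_biserial(error_scores, correct_scores):
    """RBC = f - u over all (erroneous, correct) pairs of sentence bias scores (Eq. 4)."""
    pairs = [(e, c) for e in error_scores for c in correct_scores]
    f = sum(e > c for e, c in pairs) / len(pairs)
    u = sum(e < c for e, c in pairs) / len(pairs)
    return f - u

def mwu_pvalue(error_scores, correct_scores):
    """Two-sided Mann-Whitney U test between the two groups (Eqs. 5-7)."""
    _, p = mannwhitneyu(error_scores, correct_scores, alternative="two-sided")
    return p
```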
To obtain the p-values, U-values are subjected to tie correction and normal approximation. 9 Table 3 summarizes the results 10 , including correlation estimates between WSD errors and source sentence length, as a proxy for disambiguation context size. Statistically significant correlations are discovered for all bias estimates based on attractors (p < 1e-5, two-sided). Moreover, the observed correlations exhibit a strong effect size (McGrath and Meyer, 2006). See appendix A.5 for the model-specific effect size interpretation thresholds. For all models and domains the strongest correlations are observed for DB DIFF derived from simple co-occurrence counts.
9 We use Python implementations of RBC and MWU provided by the pingouin library (Vallat, 2018).
10 Positive values denote a positive correlation between bias measures and the presence of disambiguation errors in model translations, whereas negative values denote negative correlations. The magnitude of the values, meanwhile, indicates the correlations' effect size.
Challenge set evaluation
To establish the predictive power of the uncovered correlations, a challenge set of 3000 test pairs with the highest FREQ DIFF score is subsampled from the full WSD test pair pool in both domains. In addition, we create secondary sets of equal size by randomly selecting pairs from each pool. As Figure 1 illustrates, our translation models exhibit a significantly higher WSD error rate -by a factor of up to 6.1 -on the challenge sets as compared to the randomly chosen pairs. While WSD performance is up to 96% on randomly chosen sentences, performance drops to 77-82% for the best-performing model (Transformer). This suggests that lexical association artifacts, from which the proposed disambiguation bias measure is derived, can be an effective predictor of lexical ambiguity resolution errors across model architectures and domains. The observed efficacy of attractor co-occurrence counts for WSD error prediction may be partially due to sense frequency effects, since more frequent senses occur in more sentence pairs, yielding more frequent attractors. NMT models are known to underperform on low-frequency senses of ambiguous terms (Rios et al., 2017), prompting us to investigate if disambiguation biases capture the same information. For this purpose, another challenge set of 3000 pairs is constructed by prioritizing pairs assigned to the rarest among each homograph's sense sets. We find that the new challenge set has a 72.63% overlap with the disambiguation bias challenge set in the OS18 domain and 64.4% overlap in the WMT19 domain. Thus, disambiguation biases appear to indeed capture some sense frequency effects, which themselves represent a dataset artifact, but also introduce novel information.
Our experimental findings indicate that translation models leverage undesirable surface-level correlations when resolving lexical ambiguity and are prone to disambiguation errors in cases where learned statistical patterns are violated. Next, we use these insights for the construction of adversarial samples that cause disambiguation errors by minimally perturbing source sentences.
Adversarial WSD attacks on NMT
Adversarial attacks probe model robustness by attempting to elicit incorrect predictions with perturbed inputs (Zhang et al., 2020). By crafting adversarial samples that explicitly target WSD capabilities of NMT models, we seek to provide further evidence for their susceptibility to dataset artifacts.
Generating adversarial WSD samples
Our proposed attack strategy is based on the assumption that introducing an attractor into a sentence can flip its inherent disambiguation bias towards the attractor's sense cluster. Thus, translations of the so perturbed sentence will be more likely to contain WSD errors. The corresponding sample generation strategy consists of four stages:
1. Select seed sentences containing homographs to be adversarially perturbed.
2. Identify attractors that are likely to yield fluent and natural samples.
3. Apply perturbations by introducing attractors into seed sentences.
4. Predict effective adversarial samples based on attractor properties.
The targeted attack is deemed successful if a victim model accurately translates the homograph in the seed sentence, but fails to correctly disambiguate it in the adversarially perturbed sample, instead translating it as one of the senses belonging to the attractor's sense cluster. This is a significantly more challenging attack success criterion than the general reduction in test BLEU typically employed for evaluating adversarial attacks on NMT systems (Cheng et al., 2019). Samples are generated using homographs and attractors collected in section 2.1, while all test sentence pairs extracted in section 2.2 form the domain-specific seed sentence pools. Attack success is evaluated on the same baseline translation models as used throughout section 2.
Seed sentence selection
In order to generate informative and interesting adversarial samples, we focus on seed sentences that are likely to be unambiguous. We thus apply three filtering heuristics to seed sentence pairs:
• Sentences have to be at least 10 tokens long.
• We mask out the correct homograph sense in the reference translation and use a pre-trained German BERT model (Devlin et al., 2019) 11 to predict it (a minimal sketch of this check is given after this list). Pairs are rejected if the most probable sense does not belong to the correct sense cluster, which suggests that the sentence context may be insufficient for correctly disambiguating the homograph. As a result, WSD errors observed in model-generated translations of the constructed adversarial samples are more likely to be due to the applied adversarial perturbations.
• 10% of pairs with the highest disambiguation bias towards incorrect sense clusters are removed from the seed pool.
Setting the rejection threshold above 10% can further reduce WSD errors in seed sentences. At the same time, it would likely render minimal perturbations ineffective, due to the sentences' strong bias towards the correct homograph sense. Thus, we aim for a working compromise.
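A minimal sketch of the second filtering heuristic, using the Hugging Face fill-mask pipeline with a German BERT model, is shown below. The model identifier, example sentence, and sense cluster contents are illustrative assumptions rather than the authors' exact setup.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-german-cased")

def reference_is_unambiguous(reference: str, target_sense: str, correct_cluster: set) -> bool:
    """Mask the reference homograph translation and check that the top prediction
    of the masked LM stays within the correct sense cluster."""
    masked = reference.replace(target_sense, fill_mask.tokenizer.mask_token, 1)
    top_prediction = fill_mask(masked)[0]["token_str"].strip()
    return top_prediction in correct_cluster

# Example: keep the pair only if BERT recovers a word from the "season" cluster.
keep = reference_is_unambiguous(
    "John traf seine Frau im Frühling von 1988.",
    target_sense="Frühling",
    correct_cluster={"Frühling", "Frühjahr"},
)
```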
Table 4: Example adversarial samples for each perturbation strategy.
IH: During this first spring, he planted another tree that looked the same.
RH: A hot new spring will conquer the dark nights of winter.
InH: Come the spring, I will be invading the whole country called Frankia.
RnH: After a long, eternal fallow winter, spring has come again to Fredericks Manor.
Perturbation types
Naively introducing new words into sentences is expected to yield disfluent, unnatural samples. To counteract this, we constrain candidate attractors to adjectives, since they can usually be placed in front of English nouns without violating grammatical constraints. We consider four perturbation types:
• Insertion of the attractor adjective in front of the homograph (IH)
• Replacement of a seed adjective modifying the homograph (RH)
• Insertion of the attractor adjective in front of a non-homograph noun (InH)
• Replacement of a seed adjective modifying a non-homograph noun (RnH)
Replacement strategies require seed sentences to contain adjectives, but can potentially have a greater impact on the sentence's disambiguation bias by replacing attractors belonging to the correct sense cluster. Examples for each generation strategy are given in Table 4, with homographs highlighted in blue and added attractors in red.
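The two homograph-adjacent perturbations (IH and RH) can be sketched with spaCy as below. This simplified version uses whitespace detokenization and omits the authors' additional selectional-preference, compound-noun, and adjective-order constraints.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def insert_attractor(sentence: str, homograph: str, attractor: str) -> str:
    """IH perturbation: place an attractor adjective directly before the homograph noun."""
    doc = nlp(sentence)
    tokens = [t.text for t in doc]
    for tok in doc:
        if tok.lemma_ == homograph and tok.pos_ == "NOUN":
            tokens.insert(tok.i, attractor)
            break
    return " ".join(tokens)

def replace_modifier(sentence: str, homograph: str, attractor: str) -> str:
    """RH perturbation: swap an adjective modifying the homograph for the attractor."""
    doc = nlp(sentence)
    tokens = [t.text for t in doc]
    for tok in doc:
        if tok.dep_ == "amod" and tok.head.lemma_ == homograph:
            tokens[tok.i] = attractor
            break
    return " ".join(tokens)

print(insert_attractor("John met his wife in the spring of 1988 .", "spring", "hot"))
```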
Attractor selection
Since adjectives are subject to selectional preferences of homograph senses, not every attractor will yield a semantically coherent adversarial sample. For instance, inserting the attractor flying in front of the homograph bat in a sentence about baseball will likely produce a nonsensical expression, whereas an attractor like huge would be more acceptable. We attempt to control for this type of disfluency by only considering attractors that had been previously observed to modify the homograph in its seed sentence sense. For non-homograph perturbations, attractors must have been observed modifying the non-homograph noun. This is ensured by obtaining a dependency parse for each sentence in the English half of the training data and maintaining a list of modifier adjectives for each known target homograph sense set and source noun. 12 Lastly, to facilitate the fluency and naturalness of adversarial samples, the generation process incorporates a series of constraints:
• Comparative and superlative adjective forms are excluded from the attractor pool.
• Attractors may not modify compound nouns due to less transparent selectional preferences.
• Attractors are not allowed next to other adjectives modifying the noun, to avoid violating the canonical English adjective order.
As all heuristics rely on POS taggers or dependency parsers, 13 they are not free of noise, occasionally yielding disfluent or unnatural samples. We restrict the number of insertions or replacements to one, so as to maintain a high degree of semantic similarity between adversarial samples and seed sentences. A single seed sentence usually yields several samples, even after applying the aforementioned constraints. Importantly, we generate samples using all retained attractors at this stage, without selecting for expected attack success.
Post-generation filtering
To further ensure the naturalness of generated samples, sentence-level perplexity is computed for each seed sentence and adversarial sample using a pretrained English GPT2 (Radford et al., 2019) language model. 14 Samples are rejected if their perplexity exceeds that of their corresponding seed sentence by more than 20%. In total, we obtain a pool of ∼500K samples for the OS18 domain and ∼3.9M samples for the WMT19 domain. Each sample is translated by all in-domain models.
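The perplexity-based filter can be sketched with the Hugging Face transformers library as follows; using the base gpt2 checkpoint and the 20% tolerance mirrors the description above, while the function names are our own.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence: str) -> float:
    """Sentence-level perplexity of an autoregressive LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def keep_sample(seed: str, sample: str, tolerance: float = 0.2) -> bool:
    """Reject adversarial samples whose perplexity exceeds the seed's by more than 20%."""
    return perplexity(sample) <= (1.0 + tolerance) * perplexity(seed)
```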
Identifying effective attractors
The success of the proposed attack strategy relies on the selection of attractors that are highly likely to flip the homograph translation from the correct seed sense towards an adversarial sense belonging to the attractors' own sense set. To identify such attractors, we examine correlations between attractors' disambiguation biases and the effectiveness of adversarial samples containing them. The attractors' bias values are based either on co-occurrence frequencies (Eqn.1) or PPMI scores (Eqn.2) with the homographs' sense clusters. In particular, we examine the predictive power of an attractor's bias towards the adversarial sense cluster (DB ) as well as the difference between its adversarial and seed bias values (DB DIFF ). As before, RBC and MWU measures are used to estimate correlation strength, with Table 5 summarizing the results.
Similarly to the findings reported in section 2.2, all uncovered correlations are strong and statistically significant with p < 1e-5 (see appendix A.5 for effect size thresholds). Importantly, FREQ DIFF exhibits the strongest correlation in all cases.
We are furthermore interested in establishing which of the proposed perturbation methods yields most effective attacks. For this purpose, we examine the percentage of attack successes per perturbation strategy in Figure 2, finding perturbations proximate to the homograph to be most effective.
Challenge set evaluation
Having thus identified a strategy for selecting attractors that are likely to yield successful attacks, we construct a challenge set of 10000 adversarial samples with the highest attractor FREQ DIFF scores that had been obtained via the IH or RH perturbations. To enforce sample diversity, we limit the number of samples to at most 1000 per homograph. Additionally, we create equally-sized, secondary challenge sets by drawing samples at random from each domain's sample pool. Figure 3 illustrates the attack success rate for both categories, while Table 6 shows some of the successful attacks on the OS18 transformer. Further successful samples are reported in Appendix A.7. The success rates are modest, ranging from 4.62% to 24.39%, but nonetheless showcase the capacity of targeted, minimal perturbations for flipping correct homograph translations towards a specific sense set. Since our attacks do not require access to model gradients or predictive score distributions, fall within the same domain as the models' training data, and have a strict notion of success, direct comparisons with previous work are difficult.
Crucially, compared with a random sample selection strategy, subsampling informed by attractors' disambiguation bias is up to 4.25 times more successful at identifying effective adversarial samples. While the relative improvement in attack success rate over the random baseline is comparable in both domains, the OS18 models are more susceptible to attacks in absolute terms. This may be due to their lower quality, or the properties of the training data, which can suffer from noisiness (Lison et al., 2019). Interestingly, the relative robustness of individual model architectures to WSD attacks also differs between domains, despite similar quality in terms of BLEU (see Table 2). A more thorough investigation of architecture-specific WSD vulnerabilities is left for future work.
Sample quality analysis
To examine whether our adversarial samples would appear trivial and innocuous to human translators, automatic and human evaluation of samples included in the challenge set is conducted. Following (Morris et al., 2020), we use a grammar checker 15 (http://languagetool.org) to evaluate the number of cases in which adversarial perturbations introduce grammatical errors. In the OS18 domain, only 1.04% of samples are less grammatical than their respective seed sentences, whereas this is the case for 2.04% of WMT19 samples, indicating a minimal degradation.
We additionally present two bilingual judges with 1000 samples picked at random from adversarial challenge sets in both domains and 1000 regular sentences from challenge sets constructed in section 2.2. For each adversarial source sentence, annotators were asked to choose whether the homograph's translation belongs to the correct or adversarial seed cluster. For each regular sentence, the choice was between the correct and randomly selected clusters. Across both domains, annotator error rate was 11.23% in the adversarial setting and 11.45% for regular sentences. As such, the generated samples display a similar degree of ambiguity to natural sentences that are likely to elicit WSD errors in NMT models. Annotator agreement was substantial (Cohen's kappa = 0.7).
The same judges were also asked to rate the naturalness of each sentence on a Likert scale from 1 to 5. Perturbed sentences were assigned a mean score of 3.94, whereas regular sentences scored higher at 4.18. However, annotator agreement was low (weighted Kappa = 0.17). The observed drop in naturalness is likely due to the selection of attractors that are not fully consistent with the selectional preferences of homograph senses during sample generation. We attribute this to WSD errors in reference translations. For instance, we find that the attractor vampire is occasionally applied to seed sentences containing the homograph bat in its sporting equipment sense, which can only occur if the attractor has been observed to modify this sense cluster in the training data (see 3.1). Appendix A.6 replicates annotator instructions for both tasks.
Transferability of adversarial samples
An interesting question to consider is whether translation models trained on the same data are vulnerable to the same adversarial samples. We evaluate this by computing the Jaccard similarity index between successful attacks on each baseline model from the entire pool of adversarial samples described in section 3.2. We find the similarity to be low, ranging between 10.1% and 18.2% for OS18 and between 5.7% and 9.1% for WMT19 samples, which suggests that different model architectures appear to be sensitive to different corpus artifacts, possibly due to differences in their inductive biases.
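The overlap measurement reduces to a Jaccard index over the sets of sample identifiers that successfully fool each model, as in the small sketch below (the identifiers are illustrative).

```python
def jaccard(successful_a: set, successful_b: set) -> float:
    """Overlap between the sets of adversarial samples that fool two victim models."""
    union = successful_a | successful_b
    if not union:
        return 0.0
    return len(successful_a & successful_b) / len(union)

# Illustrative usage with sample identifiers.
overlap = jaccard({"s1", "s2", "s7"}, {"s2", "s7", "s9", "s12"})
```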
Considering the observed discrepancy in vulnerabilities between architectures, a natural follow-up question is whether two different instances of the same architecture are susceptible to the same set of attacks. We investigate this by training a second transformer model for each domain, keeping all settings constant with the initial models, but choosing a different seed for the random initialization. While the similarity between sets of successful adversarial samples is greater for two models of the same type, at 25.2% in the OS18 domain and 12.4% in the WMT19 domain, it is still remarkably low. To our knowledge, no study so far has examined the interaction between training data artifacts and WSD performance in detail.
Literature review
Dataset artifacts, on the other hand, have previously been shown to enable models to make correct predictions based on incorrect or insufficient information. In work on adversarial attacks against NLP models, the focus so far has been on strategies requiring direct access to the victim model's loss gradient or output distribution. Recent surveys suggested that state-of-the-art attacks often yield ungrammatical and meaning-destroying samples, thus diminishing their usefulness for the evaluation of model robustness (Michel et al., 2019; Morris et al., 2020). Targeted attacks on WSD abilities of translation models have so far remained unexplored.
Conclusion
We conducted an initial investigation into leveraging data artifacts for the prediction of WSD errors in machine translation and proposed a simple adversarial attack strategy based on the presented insights. Our results show that WSD is not yet a solved problem in NMT, and while the general performance of popular model architectures is high, we can identify or create sentences where models are more likely to fail due to data biases.
The effectiveness of our methods stems from neural models struggling to accurately distinguish meaningful lexical correlations from superficial ones. As such, the presented approach is expected to transfer to other language pairs and translation directions, assuming that the employed translation models share this underlying weakness. Given the model-agnostic nature of our findings, this is likely to be the case.
As a continuation of this work, we intend to evaluate whether multilingual translation models are more resilient to lexical disambiguation biases and, as a consequence, less susceptible to adversarial attacks that exploit source-side homography. Extending model-agnostic attack strategies to incorporate other types of dataset biases and to target natural language processing tasks other than machine translation is likewise a promising avenue for future research. Lastly, the targeted development of models that are resistant to dataset artifacts is another promising direction, likely to aid generalization across linguistically diverse domains.
A.2 Homograph list
The full list of homographs used in our experiments is as follows: anchor, arm, band, bank, balance, bar, barrel, bark, bass, bat, battery, beam, board, bolt, boot, bow, brace, break, bug, butt, cabinet, capital, case, cast, chair, change, charge, chest, chip, clip, club, cock, counter, crane, cycle, date, deck, drill, drop, fall, fan, file, film, flat, fly, gum, hoe, hood, jam, jumper, lap, lead, letter, lock, mail, match, mine, mint, mold, mole, mortar, move, nail, note, offense, organ, pack, palm, pick, pitch, pitcher, plaster, plate, plot, pot, present, punch, quarter, race, racket, record, ruler, seal, sewer, scale, snare, spirit, spot, spring, staff, stock, subject, tank, tear, term, tie, toast, trunk, tube, vacuum, watch.
A.3 Sense cluster examples
Table 11 lists some of the identified sense clusters for several homographs. All homographs used in our experiments have at least two sense clusters associated with them.
A.4 Baseline models
Table 12 provides implementation and training details for each architecture. The same settings are used for training identical model types in different domains. We use standard fairseq (Ott et al., 2019) implementations for all model types and train them on NVIDIA 1080ti or NVIDIA 2080ti GPUs. Model translations are obtained by averaging the final 5 model checkpoints and decoding using beam search with beam size 5.
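The checkpoint averaging mentioned above can be sketched in plain PyTorch as below; this is a generic illustration rather than the fairseq utility that was actually used, it assumes each file stores a raw parameter state dict, and the checkpoint paths are placeholders.

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several saved model checkpoints."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Hypothetical paths to the final five checkpoints.
averaged_state = average_checkpoints([f"checkpoint{i}.pt" for i in range(46, 51)])
```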
A.5 Base-rate adjusted effect size thresholds
Whether the effect size of correlations between dichotomous and quantitative variables can be considered strong depends on the size ratio between the two groups denoted by the dichotomous variable, i.e. its base rate. As the standard formulation of RBC is sensitive to the base rate, the estimated effect size decreases as the base rate becomes more extreme (see McGrath and Meyer (2006) for details). Applied to our experimental setting, this means that the observed correlation values are sensitive to the number of sentences containing disambiguation errors relative to the number of those that do not. This is an undesirable property, as we are only interested in the predictive power of our quantitative variables, regardless of how often disambiguation errors are observed. Thus, we adjust the thresholds for the interpretation of correlation strength to account for WSD errors being less frequent than WSD successes overall, in analogy to McGrath and Meyer (2006). Doing so enables the direct comparison of correlation strength between domains and model types, as each combination of the two factors exhibits a different disambiguation success base rate.
A common practice for interpreting effect size strength that does not account for base rate inequalities is the adoption of Cohen's benchmark (Cohen, 2013), which posits that the effect size d is large if d >= 0.8, medium if d >= 0.5, and small if d >= 0.2. To adjust these threshold values for the observed base rates, they are converted according to Eqn. 8, where p1 and p2 represent the proportions of the groups described by the dichotomous variable, with p2 = 1 - p1:

threshold_{p1, p2} = d / sqrt(d^2 + 1 / (p1 * p2))    (8)

The adjusted effect size interpretation thresholds for the WSD error correlation values given in Table 3 are provided in Table 7. Adjusted thresholds for the attack success correlations given in Table 5 are summarized in Table 8.
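The conversion in Eqn. 8 is straightforward to compute; the sketch below applies it to Cohen's small/medium/large benchmarks for an illustrative base rate of 5%, which is a made-up value rather than one of the base rates observed in the experiments.

```python
import math

def adjusted_threshold(d, p1):
    """Convert a Cohen's d benchmark into a base-rate adjusted correlation
    threshold for a dichotomous variable with group proportion p1 (p2 = 1 - p1)."""
    p2 = 1.0 - p1
    return d / math.sqrt(d ** 2 + 1.0 / (p1 * p2))

# Cohen's benchmarks: small, medium, large.
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, round(adjusted_threshold(d, p1=0.05), 4))
```

With more extreme base rates (smaller p1), the resulting thresholds shrink further, which is why each model/domain combination in Tables 7 and 8 has its own set of values.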
A.6 Annotator instructions
The judges were presented with the following instructions for the described annotation tasks: Your first task is to judge whether the meaning of the homograph as used in the given sentence is best described by the terms in the SENSE 1 cell or by those in the SENSE 2 cell. Please use the drop-down menu in the WHICH SENSE IS CORRECT? column to make your choice. If you think that neither sense captures the homograph's meaning, please select NONE from the options in the drop-down menu. If you think that the homograph as used in the given sentence can be interpreted equally well as SENSE 1 or SENSE 2, please select BOTH.
We're also asking you to give us your subjective judgment whether the sentence you've been evaluating makes sense to you, i.e. whether it's grammatical, whether it can be easily understood, and whether it sounds acceptable to you as a whole. Typos and spelling mistakes, on the other hand, can be ignored. Specifically, we would like you to assign each sentence a naturalness score, ranging from 1 to 5, according to the following scale:
• 1 = Completely unnatural (i.e. sentence is clearly ungrammatical, highly implausible, or meaningless / incoherent)
• 2 = Somewhat unnatural (i.e. sentence is not outright incoherent, but sounds very strange)
• 3 = Unsure (i.e. sentence is difficult to judge either way)
• 4 = Mostly natural (i.e. sentence sounds good for the most part)
• 5 = Completely natural (i.e. a well-formed English sentence)
For instance a sentence like "John ate ten pancakes for breakfast." may get a ranking between 4 and 5, as it satisfies all of the above criteria. A sentence like "John ate green pancakes for breakfast." is grammatical but somewhat unusual and may therefore get a score between 3 and 4. "John ate late pancakes for breakfast.", on the other hand, does not sound very natural since pancakes cannot be "late" and may therefore be rated as 1 or 2. For this judgment we ask you to pay special attention to words in the neighborhood of the homograph. To submit your judgment please select the appropriate score from the drop-down menu in the DOES THE SENTENCE MAKE SENSE? column.
A.7 Examples of successful adversarial samples
Tables 13 to 18 list further examples of successful adversarial samples for each baseline model in both domains.
Figure 1: WSD errors in subsampled challenge sets.
Figure 2: Successful attacks per perturbation.
Figure 3: Successful challenge set attacks.
Table 1: Examples of attractors for spring.
Model               FREQ     PPMI     FREQ    PPMI    FREQ DIFF   PPMI DIFF   Length
OS18 Transformer    -0.532   -0.578   0.327   0.474   0.708       0.674       0.018
OS18 LSTM           -0.468   -0.504   0.386   0.486   0.690       0.630       0.008
OS18 ConvS2S        -0.477   -0.514   0.391   0.492   0.723       0.658       0.021
WMT19 Transformer   -0.610   -0.668   0.415   0.579   0.687       0.677       -0.004
WMT19 LSTM          -0.661   -0.698   0.376   0.574   0.725       0.708       -0.009
WMT19 ConvS2S       -0.648   -0.678   0.408   0.599   0.731       0.710       0.000

Table 3: Rank biserial correlation between disambiguation bias measures and lexical disambiguation errors.
Table 4: Perturbation examples; seed sense: season, adversarial sense: water source. Insertion/replacement in red.
Model               FREQ    PPMI    FREQ DIFF   PPMI DIFF
OS18 Transformer    0.307   0.367   0.438       0.306
OS18 LSTM           0.258   0.261   0.375       0.227
OS18 ConvS2S        0.228   0.174   0.325       0.165
WMT19 Transformer   0.241   0.241   0.264       0.224
WMT19 LSTM          0.278   0.256   0.316       0.231
WMT19 ConvS2S       0.304   0.270   0.328       0.216

Table 5: Rank biserial correlation between attractors' disambiguation bias and attack success.

S: We played the songs again until we felt they sounded right, worked out all the (nasty) bugs.
O: Wir spielten die Lieder wieder, bis sie sich richtig anhörten und alle Fehler ausarbeiteten.
P: Wir spielten die Lieder wieder, bis sie sich richtig anhörten und alle bösen Käfer ausarbeiteten.
Seed sense: error | Adv. sense: insect

S: The driver gets out, opens the (large) boot, takes some flowers out to deliver.
O: Der Fahrer steigt aus, öffnet den Kofferraum, nimmt ein paar Blumen zum Ausliefern mit.
P: Der Fahrer steigt aus, öffnet den großen Stiefel, nimmt ein paar Blumen zum Ausliefern mit.
Seed sense: trunk | Adv. sense: shoe

S: The doctor somehow got that wig mixed up with the newspapers and (different) letters.
O: Der Arzt verwechselte die Perücke mit den Zeitungen und Briefen.
P: Der Arzt verwechselte die Perücke mit den Zeitungen und anderen Buchstaben.
Seed sense: message | Adv. sense: character

S: And he will not cease until every last race of the Four Lands is destroyed.
O: Und er wird nicht aufgeben, bis jede Rasse der Vier Länder ausgelöscht ist.
P: Und er wird nicht aufhören, bis jedes letzte Rennen der Vier Länder zerstört ist.
Seed sense: ethnic group | Adv. sense: contest

Table 6: Examples of successful attacks on the OS18 transformer (S: source input, O: original output, P: perturbed output). Homographs are blue, attractors are red.
Loïc Barrault, Ondřej Bojar, Marta R Costa-Jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, et al. 2019. Findings of the 2019 conference on machine translation (wmt19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61.
Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Sasha Rush. 2019. On adversarial removal of hypothesis-only bias in natural language inference. In Proceedings of the Joint Conference on Lexical and Computational Semantics.
Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. arXiv preprint arXiv:1803.01128.
Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4324-4333.
Jacob Cohen. 2013. Statistical power analysis for the behavioral sciences. Academic press.
Edward E Cureton. 1956. Rank-biserial correlation. Psychometrika, 21(3):287-290.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648.
Jonas Gehring, Michael Auli, David Grangier, and Yann Dauphin. 2017. A convolutional encoder model for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 123-135.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112.
Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear, 7(1).
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177-180.
Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. arXiv, pages arXiv-2002.
Yi Li and Nuno Vasconcelos. 2019. Repair: Removing representation bias by dataset resampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9572-9581.
Pierre Lison, Jörg Tiedemann, Milen Kouylekov, et al. 2019. Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018, Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA).
Frederick Liu, Han Lu, and Graham Neubig. 2018. Handling homographs in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1336-1345.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.
Henry B Mann and Donald R Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The annals of mathematical statistics, pages 50-60.
Rebecca Marvin and Philipp Koehn. 2018. Exploring word sense disambiguation abilities of neural machine translation systems (non-archival extended abstract). In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 125-131.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448.
Robert E McGrath and Gregory J Meyer. 2006. When effect sizes disagree: the case of r and d. Psychological methods, 11(4):386.
Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In Proceedings of NAACL-HLT, pages 3103-3114.
John X Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020. Reevaluating adversarial examples in natural language. arXiv preprint arXiv:2004.14174.
Roberto Navigli and Simone Paolo Ponzetto. 2010. Babelnet: Building a very large multilingual semantic network. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 216-225. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
Matt Post. 2018. A call for clarity in reporting bleu scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Alessandro Raganato, Yves Scherrer, and Jörg Tiedemann. 2019. The mucow test suite at wmt 2019: Automatically harvested multilingual contrastive word sense disambiguation test sets for machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 470-480.
Annette Rios, Laura Mascarell, and Rico Sennrich. 2017. Improving word sense disambiguation in neural machine translation with sense embeddings. In Proceedings of the Second Conference on Machine Translation, pages 11-19.
John Ruscio. 2008. A probability-based measure of effect size: Robustness to base rates and other factors. Psychological methods, 13(1):19.
Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.
Gabriel Stanovsky, Noah A Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684.
Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2019. Encoders help you disambiguate word senses in neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1429-1435.
A Supplementary material
A.1 Data properties
The WMT19 data is obtained by concatenating the Europarl v9, Common Crawl, and News Commentary v14 parallel corpora. Basic data cleaning is performed for both domains, which includes removal of pairs containing sentences classified by langid (http://github.com/saffsd/langid.py) as neither German nor English, and of pairs with a source-to-target sentence length ratio exceeding 2. We create development and test splits for the OS18 domain by removing 10K sentence pairs from the full, shuffled corpus in each case. For each domain, we additionally hold out 20% of pairs to be used for the extraction of test pairs containing homographs, as described in section 2.2. Final statistics are reported in Table 9 for the OS18 domain and in Table 10 for the WMT19 domain. Each dataset is subsequently tokenized and truecased using Moses (Koehn et al., 2007) scripts (http://github.com/moses-smt/mosesdecoder). For model training and evaluation, we additionally learn and apply BPE codes (Sennrich et al., 2016) to the data using the subword-NMT implementation (http://github.com/rsennrich/subword-nmt), with 32k merge operations and the vocabulary threshold set to 50.
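As a rough illustration of the cleaning criteria described above, the sketch below filters sentence pairs with langid and a length-ratio cap of 2; it is a simplified stand-in rather than the exact pipeline behind the reported statistics, and it assumes an English source and a German target.

```python
import langid

def keep_pair(src, tgt, max_ratio=2.0):
    """Keep an EN-DE sentence pair only if both sides are identified as the
    expected language and the length ratio does not exceed max_ratio."""
    if langid.classify(src)[0] != "en" or langid.classify(tgt)[0] != "de":
        return False
    src_len, tgt_len = len(src.split()), len(tgt.split())
    ratio = max(src_len, tgt_len) / max(min(src_len, tgt_len), 1)
    return ratio <= max_ratio

pairs = [("The bat flew out of the cave.", "Die Fledermaus flog aus der Höhle.")]
cleaned = [p for p in pairs if keep_pair(*p)]
```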
Table 7: Base-rate adjusted thresholds for the interpretation of WSD error prediction correlations.

Model               small    medium   large
OS18 Transformer    0.0339   0.0846   0.1345
OS18 LSTM           0.0338   0.0842   0.1340
OS18 ConvS2S        0.0328   0.0817   0.1301
WMT19 Transformer   0.0166   0.0414   0.0661
WMT19 LSTM          0.0178   0.0446   0.0712
WMT19 ConvS2S       0.0219   0.0548   0.0874

Table 8: Base-rate adjusted thresholds for the interpretation of attack success correlations.

Table 9: Corpus statistics for the OS18 domain.

Statistic               train         dev      test     held-out
# sentences             14,993,062    10,000   10,000   3,751,765
# words (EN)            106,873,835   71,719   71,332   26,763,351
# words/sentence (EN)   7.13          7.17     7.13     7.13
# words (DE)            100,248,893   67,185   66,799   25,094,166
# words/sentence (DE)   6.69          6.71     6.68     6.69

Table 10: Corpus statistics for the WMT19 domain.

Statistic               train         dev (test18)   test14   test19   held-out
# sentences             4,861,743     2,998          3,003    1,997    1,215,435
# words (EN)            100,271,426   58,628         59,325   42,034   25,057,036
# words/sentence (EN)   20.62         19.56          19.76    21.05    20.62
# words (DE)            93,900,343    54,933         54,865   42,087   23,467,086
# words/sentence (DE)   19.31         18.32          18.27    21.08    19.31

Table 11: Identified sense clusters for selected homographs (columns: Homograph, Sense 1, Sense 2, Sense 3).
Table 12 :
12Training settings and model hyperparameters. input / Original output / Perturbed output Seed sense Adv. sense S: The Penguin was beating him with an (old) bat, but it was Gordon that pulled the trigger. O: Der Pinguin hat ihn mit einem Schläger geschlagen, aber Gordon hat abgedrückt. P: Der Pinguin hat ihn mit einer alten Fledermaus geschlagen , aber Gordon hat abgedrückt. club animal S: I'm not going to relax until that thing its back in its (simple) case. O: Ich werde mich nicht entspannen, bis dieses Ding nicht seinen Rücken in seinem Koffer hat. P: Ich werde mich nicht entspannen, bis das Ding nicht seinen Rücken in seinem einfachen Fall hat. container instance S: "They rest in their mother's (hot) lap, enjoying the ultimate bliss" O: "Sie ruhen im Schoß ihrer Mutter, genießen das ultimative Glück" P: "Sie ruhen in der heißen Runde ihrer Mutter, genießen das ultimative Glück" body part circuit S: That's mighty neighbourly, but I got to play the (big) organ for the parson tonight. O: Das ist mächtig nachbarschaftlich, aber ich muss heute Abend Orgel für den Pfarrer spielen. P: Das ist mächtig nachbarschaftlich, aber ich muss heute Abend das Organ für den Pfarrer spielen. instrument body part S: I'm just gonna write a (high) note, and then we'll go. O: Ich schreibe nur einen Zettel und dann gehen wir. P: Ich schreibe einen hohen Ton und dann gehen wir. writing toneSource
Table 13 :
13Additional examples of successful attacks on the OS18 transformer. Homographs are blue, attractors are red. Source input / Original output / Perturbed output Seed sense Adv. sense S: I only sell (good) arms to people who fight clean wars! sure! O: Ich verkaufe nur Waffen an Leute, die saubere Kriege bekämpfen. P: Ich verkaufe nur gute Arme an Leute, die saubere Kriege bekämpfen. weapon body part S: We've heard they're trying to raise (new) capital to rebuild their armies. O: Wir haben gehört, sie wollen Kapital sammeln, um ihre Armeen aufzubauen. P: Wir haben gehört, dass sie eine neue Hauptstadt aufziehen wollen, um ihre Armeen aufzubauen. money city S: Did you charge the Donellys for five (closed) cases of vodka? O: Haben Sie die Donellys für fünf Kisten Wodka berechnet? P: Haben Sie die Donellys für fünf geschlossene Fälle Wodka berechnet? container court case S: All units, repeat. that is a battered yellow van, no (separate) plates. O: An alle Einheiten, das ist ein gegrillter gelben Van, keine Nummernschilder . P: An alle Einheiten, das ist ein gegrillter gelben Van, keine getrennten Teller . number plate dish S: Um, (old) seals tell the truth, but a sea lion's always lyin'? O: Robben sagen die Wahrheit, aber ein Seelöwen lügt immer ? P: Alte Siegel sagen die Wahrheit, aber ein Seelöwen lügt immer? animal emblem
Table 14 :
14Examples of successful attacks on the OS18 LSTM. Homographs are blue, attractors are red. Source input / Original output / Perturbed output Seed sense Adv. sense S: -Oh, well, keep the (small) change and have a drink on me. O: Behalten Sie den Rest und trinken Sie auf mich. P: Oh, nun, behalte die kleine Veränderung und trink einen auf mich. coins development S: Do you know how that (specific) date went, by any chance? O: Wissen Sie, wie das Date gelaufen ist? P: Wissen Sie, wie das Datum gelaufen ist? meeting calendar date S: Goal! (public address) An amazing last-minute third goal that takes Greenock into the (strong) lead. O: Ein erstaunliches drittes drittes Ziel, das Greenock in die Führung führt. P: Ein erstaunliches drittes Ziel, das Greenock in die starke Spur führt. first place clue S: I mean, you seem like someone who plots out every (fucking) move. O: Ich meine, Sie scheinen jemand zu sein, der jeden Schritt aussticht. P: Ich meine, Sie scheinen jemand zu sein, der jede verdammte Bewegung ausschüttet. action movement S: You know, if we get hungry, we eat some chips, have some (crazy) punch ... O: Weißt du, wenn wir hungrig werden, essen wir ein paar Chips, haben etwas Punsch ... P: Weißt du, wenn wir hungrig werden, essen wir ein paar Chips, haben einen verrückten Schlag ... drink hit Table 15: Examples of successful attacks on the OS18 ConvS2S. Homographs are blue, attractors are red. Copenhagen -Copenhagen, Denmark's (financial) capital, wants to be the world's first CO2-neutral city by 2025. O: Kopenhagen -Kopenhagen, die Hauptstadt Dänemarks, will bis 2025 die erste CO2-neutrale Stadt der Welt sein. P: Kopenhagen -Kopenhagen, das Finanzkapital Dänemarks, will bis 2025 die erste CO2-neutrale Stadt der Welt sein. city money S: This is done by pricking the earlobe with a small lancet and taking a (real) drop of blood. O: Dies geschieht, indem der Ohrwurm mit einem kleinen Lancet geprickt wird und ein Tropfen Blut eingenommen wird. P: Dies geschieht, indem der Ohrwurm mit einem kleinen Lancet geprickt wird und ein richtiger Blutabfall entsteht. drop of liquid decrease S: One (small positive) note was from the Republic of Ireland, which saw its PMI grow to 57.3, its highest level since the end of 1999. O: Eine positive Anmerkung war die aus der Republik Irland, wo das PMI auf 57,3 anstieg, das höchste Niveau seit Ende 1999. P: Ein kleiner Schein stammt aus der Republik Irland, wo das PMI auf 57,3 anstieg, das höchste Niveau seit Ende 1999. remark paper money S: His epoch-making (full) record "Free Jazz" was released by Atlantic Records at the dawn of that decade. O: Seine epochale Platte "Free Jazz" wurde zu Beginn des Jahrzehnts von Atlantic Records veröffentlicht. P: Seine epochale Aufzeichnung "Free Jazz" wurde zu Beginn des Jahrzehnts von Atlantic Records veröffentlicht. musical medium document S: After winter delivered an early dose of (natural) spring last week, temperatures dropped again on Monday to a high of just 15.8C in the city. O: Nachdem der Winter vergangene Woche eine frühe Frühjahrsdosis geliefert hatte, fielen die Temperaturen am Montag wieder auf einen Höchstwert von nur 15,8C in der Stadt. P: Nachdem der Winter letzte Woche eine frühe Dosis Naturquelle lieferte, fielen die Temperaturen am Montag wieder auf einen Höchstwert von nur 15,8C in der Stadt.Source input / Original output / Perturbed output
Table 16 :
16Examples of successful attacks on the WMT19 transformer. Homographs are blue, attractors are red. input / Original output / Perturbed output Seed sense Adv. sense S: A Thousand Splendid Suns is a story of two women's lives in Afghanistan, where women are equal, as a table or the (last) chair. O: Ein Thousand Splendid Seine ist eine Geschichte von zwei Frauen in Afghanistan, wo Frauen gleich sind, als Tisch oder Stuhl . P: Ein Thousand Splendid Seine ist eine Geschichte von zwei Frauen in Afghanistan, wo Frauen gleich sind, als Tisch oder als letzter Vorsitzender . furniture chairperson S: See a (small rapid) drop in your CO level once you stop smoking. O: Sehen Sie sich einen schnellen Rückgang Ihrer CO-Ebene an, sobald Sie das Rauchen einstellen. P: Sehen Sie einen kleinen Tropfen auf Ihrem CO-Niveau, sobald Sie aufhören, Rauchen zu beenden. decrease drop of liquid S: And moreover -each of our guests will get a (different small) present! O: Und darüber hinaus wird jeder unserer Gäste ein kleines Geschenk bekommen! P: Und darüber hinaus wird jeder unserer Gäste eine andere Gegenwart bekommen! gift current time S: A (new) record of every transaction made is kept, allowing for a complete audit if necessary. O: Ein Datensatz jeder Transaktion wird gehalten, so dass erforderlichenfalls eine vollständige Prüfung möglich ist. P: Ein neuer Rekord jeder Transaktion wird gehalten, so dass erforderlichenfalls eine vollständige Prüfung möglich ist. document achievement S: Britain's new trade deals with non-EU countries would also probably involve (political worse) terms. O: Die neuen Handelsvereinbarungen Großbritanniens mit Nicht-EU-Ländern würden wahrscheinlich auch schlechtere Bedingungen beinhalten. P: Großbritanniens neue Handelsabkommen mit Nicht-EU-Ländern würden wahrscheinlich auch politische Begriffe beinhalten. demand expressionSource
Table 17 :
17Examples of successful attacks on the WMT19 LSTM. Homographs are blue, attractors are red. input / Original output / Perturbed output Seed sense Adv. sense S: Not to mention (non) uniform loading and soring fingers, contaminated with (common) lead. O: Ganz zu schweigen von (nicht) einheitlichen Lade-und Sortierfingern, die mit Blei kontaminiert sind. P: Ganz zu schweigen von (nicht) einheitlichen Lade-und Sortierfingern, die mit einer gemeinsamen Führung kontaminiert sind. metal first place S: If the symbol ">" is displayed, keep entering (greek) letters until predictive options are displayed. O: Wenn das Symbol ">" angezeigt wird, erhalten Sie die Eingabe von Buchstaben , bis prognostizierte Optionen angezeigt werden. P: Wenn das Symbol ">" angezeigt wird, erhalten Sie immer wieder Grußbriefe , bis prognostizierte Optionen angezeigt werden. character message S: This film is not about dialogue or a (little stringent) plot, but all about atmospherea feverish dream that has become a film. O: In diesem Film geht es nicht um einen Dialog oder um eine strenge Handlung , sondern um die Atmosphäre -ein feverser Traum, der zu einem Film geworden ist. P: In diesem Film geht es nicht um Dialog oder ein wenig Grundstück , sondern allesüber die Atmosphäre -ein feverser Traum, der zu einem Film geworden ist. story tract of land S: Manufacture of products from silicone and rubber, Production of springs, Manufacturing of springs, Winding of (small) springs. O: Herstellung von Produkten aus Silikon-und Gummi, Herstellung von Quellen, Herstellung von Quellen, Federn . P: Herstellung von Produkten aus Silikon-und Gummi, Herstellung von Quellen, Herstellung von Quellen, Winding von kleinen Quellen . device water source S: In 1980, financial assets -(large) stocks, bonds, and bank deposits -totaled around 100% of GDP in the advanced economies. O; Im Jahr 1980 belief sich das Finanzvermögen -Aktien , Anleihen und Bankeinlagen -in den hochentwickelten Volkswirtschaften rund 100% des BIP. P: Im Jahr 1980 belief sich das Finanzvermögen -große Bestände , Anleihen und Bankeinlagen -in den hochentwickelten Volkswirtschaften rund 100% des BIP. investment inventorySource
Table 18: Examples of successful attacks on the WMT19 ConvS2S. Homographs are blue, attractors are red.
Experimental codebase available at http://github.com/demelin/detecting_wsd_biases_for_nmt
http://opus.nlpl.eu
3 http://statmt.org/wmt19
4 http://7esl.com/homographs
http://en.wikipedia.org/wiki/List_of_English_homographs
A manual inspection found the clusters to be meaningful.
6 The lexicon is released as part of our experimental code: http://github.com/demelin/detecting_wsd_biases_for_nmt.
We consider any word that co-occurs with a homograph in the training corpus as an attractor of the homograph's specific sense cluster, except for the homograph itself which is not regarded as an attractor for any of its known sense clusters.
We additionally used the non-parametric generalization of the Common Language Effect Size (Ruscio, 2008) for correlation size estimation, but couldn't detect any advantages over RBC in our experimental setting.
We use the implementation provided by the Hugging Face Transformers library (Wolf et al., 2019). We do not fine-tune BERT, as our use case corresponds to its original masked language modeling objective.
This assumes correctness of homograph reference translations, which is unfortunately not always guaranteed.
13 We use spaCy in all cases.
14 As implemented in the Transformers library.
Acknowledgements
We thank Sabine Weber and Tom Pelsmaeker for valuable discussions throughout the development of this work, as well as the anonymous reviewers for their constructive feedback. Rico Sennrich has received funding from the Swiss National Science Foundation (project MUTAMUR; no. 176727).
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
| [
"http://github.com/saffsd/langid.py",
"http://github.com/moses-smt/",
"http://github.com/rsennrich/",
"http://github.com/demelin/detecting_"
] |
[
"Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens",
"Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens"
] | [
"Nitish Joshi nitish@nyu.edu \nDepartment of Computer Science\nNew York University\n\n",
"Xiang Pan xiangpan@nyu.edu \nDepartment of Computer Science\nNew York University\n\n",
"He He \nDepartment of Computer Science\nNew York University\n\n\nCenter for Data Science\nNew York University\n\n"
] | [
"Department of Computer Science\nNew York University\n",
"Department of Computer Science\nNew York University\n",
"Department of Computer Science\nNew York University\n",
"Center for Data Science\nNew York University\n"
] | [
"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing"
] | The term 'spurious correlations' has been used in NLP to informally denote any undesirable feature-label correlations. However, a correlation can be undesirable because (i) the feature is irrelevant to the label (e.g. punctuation in a review), or (ii) the feature's effect on the label depends on the context (e.g. negation words in a review), which is ubiquitous in language tasks. In case (i), we want the model to be invariant to the feature, which is neither necessary nor sufficient for prediction. But in case (ii), even an ideal model (e.g. humans) must rely on the feature, since it is necessary (but not sufficient) for prediction. Therefore, a more fine-grained treatment of spurious features is needed to specify the desired model behavior. We formalize this distinction using a causal model and probabilities of necessity and sufficiency, which delineates the causal relations between a feature and a label. We then show that this distinction helps explain results of existing debiasing methods on different spurious features, and demystifies surprising results such as the encoding of spurious features in model representations after debiasing. | 10.48550/arxiv.2210.14011 | [
"https://www.aclanthology.org/2022.emnlp-main.666.pdf"
] | 253,107,207 | 2210.14011 | af299c407af44b568b73382dbaf6cd177b2a0c7f |
Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
December 7-11, 2022
Nitish Joshi nitish@nyu.edu
Department of Computer Science
New York University
Xiang Pan xiangpan@nyu.edu
Department of Computer Science
New York University
He He
Department of Computer Science
New York University
Center for Data Science
New York University
Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
the 2022 Conference on Empirical Methods in Natural Language ProcessingDecember 7-11, 2022
The term 'spurious correlations' has been used in NLP to informally denote any undesirable feature-label correlations. However, a correlation can be undesirable because (i) the feature is irrelevant to the label (e.g. punctuation in a review), or (ii) the feature's effect on the label depends on the context (e.g. negation words in a review), which is ubiquitous in language tasks. In case (i), we want the model to be invariant to the feature, which is neither necessary nor sufficient for prediction. But in case (ii), even an ideal model (e.g. humans) must rely on the feature, since it is necessary (but not sufficient) for prediction. Therefore, a more fine-grained treatment of spurious features is needed to specify the desired model behavior. We formalize this distinction using a causal model and probabilities of necessity and sufficiency, which delineates the causal relations between a feature and a label. We then show that this distinction helps explain results of existing debiasing methods on different spurious features, and demystifies surprising results such as the encoding of spurious features in model representations after debiasing.
Introduction
Advancements in pre-trained language models (Devlin et al., 2019; Radford et al., 2019) and large datasets (Rajpurkar et al., 2016; Wang et al., 2018) have enabled tremendous progress on natural language understanding (NLU). This progress has been accompanied by the concern of models relying on superficial features such as negation words and lexical overlap (Poliak et al., 2018; Gururangan et al., 2018; McCoy et al., 2019). Despite the progress in building models robust to spurious features (Clark et al., 2019; He et al., 2019; Veitch et al., 2021; Puli et al., 2022), the term has been used to denote any feature that
* equal contribution
Necessary features
The differential compounds to a hefty sum over time.
The differential will not grow −→ Contradiction
The differential will grow −→ ?
Table 1: Difference between two spurious features: (a) the director name can be replaced without affecting the sentiment prediction; (b) the negation word is necessary as it is not possible to determine the label without it.
the model should not rely on, as judged by domain experts.
Our key observation is that a feature can be considered spurious for different reasons. Compare two such features studied in the literature (Table 1): (a) director names (such as 'Spielberg') in sentiment analysis (Wang and Culotta, 2020); (b) negation words in natural language inference (Gururangan et al., 2018). We do not want the model to rely on the director name because removing or changing it does not affect the sentiment. In contrast, while models should not solely rely on the negation word, they are still necessary for prediction-it is impossible to determine the label without knowing its presence.
In this work, we argue that many spurious features studied in NLP are of the second type where the feature is necessary (although not sufficient) for prediction, which is more complex to deal with than completely irrelevant features in the first case. Current methods do not treat the two types of feature separately, and we show that this can lead to misleading interpretation of the results.
To formalize the distinction illustrated in Table 1, we borrow notions from causality (Wang and Jordan, 2021;Pearl, 1999), and use probability of necessity (PN) and probability of sufficiency (PS) to describe the relation between a feature and a label. Intuitively, high PN means that changing the feature is likely to change the label (e.g. remov-ing "not" will flip the label); high PS means that adding the feature to an example would produce the label (e.g. adding "the movie is brilliant" to a neutral review is likely to make it positive). Under this framework, we define two types of spurious features (Section 2): irrelevant features (e.g. the director name) that have low PN and low PS, and necessary features (e.g. the negation word) that have high PN despite low PS.
Next, we describe the challenges in evaluating and improving robustness to necessary spurious features (Section 4). First, necessary features compose with other features in the context to influence the label. Thus, evaluating whether the model relies solely on the necessary feature requires perturbing the context. This process often introduces new features and leads to inconsistent results depending on how the context is perturbed.
Second, we analyze the effectiveness of two classes of methods-data balancing and representation debiasing-on the two types of spurious features. Data balancing breaks the correlation between the label and the spurious feature (e.g. ); representation debiasing directly removes the spurious feature from the learned representation (e.g. Ravfogel et al. (2020)). Although they are effective for irrelevant features, we show that for necessary spurious features, (i) data balancing does not lead to invariant performance with respect to the spurious feature (Section 5.1); and (ii) removing the spurious feature from the representation significantly hurts performance (Section 5.2).
In sum, this work provides a formal characterization of spurious features in natural language. We highlight that many common spurious features in NLU are necessary (despite being not sufficient) to predict the label, which introduces new challenges to both evaluation and learning.
Categorization of Spurious Features
Causal Models
We describe a structural causal model for text classification to illustrate the relation between different spurious features and the label. Let X = (X 1 , X 2 , .., X n ) denote a sequence of input words/features 1 and Y the output label. We assume a data generating model shown in Figure 1a. There is a common cause C of the input (e.g. a review writer, a PCFG or a semantic representation of the sentence), conditioned on which the words are independent to each other. Each word X i may causally affect the label Y .
Under this model, the dependence between Y and a feature X i can be induced by two processes. The type 1 dependence is induced by a confounder (in this case C) influencing both Y and X i due to biases in data collection, e.g. search engines return positive reviews for famous movies; we denote this non-causal association by the red path in Figure 1b. The type 2 dependence is induced by input words that causally affect Y (the red path in Figure 1c), e.g. negating an adjective affects the sentiment. Importantly, the two processes can and often do happen simultaneously. For example, in NLI datasets, the association between negation words and the label is also induced by crowdworkers' inclination of negating the premise to create a contradiction example.
A type 1 dependence ("Titanic"-sentiment) is clearly spurious because the feature and Y are associated through C while having no causal relationship. 2 In contrast, a type 2 dependence ("not"sentiment) is not spurious per se-even a human needs to rely on negation words to predict the label. Now, how do we measure and differentiate the two types of feature-label dependence? In the following, we describe fine-grained notions of the relationship between a feature and a label, which will allow us to define the spuriousness of a feature.
Sufficiency and Necessity of a Feature
We borrow notions from causality to describe whether a feature is a necessary or sufficient cause of a label (Pearl, 1999;Wang and Jordan, 2021). Table 1: intuitively, "not" is necessary for the contradiction label because in the absence of it (e.g. removing or replacing it by other syntactically correct words) the example would no longer be contradiction; in contrast, "the movie is brilliant" is sufficient to produce the positive label because adding the sentence to a negative review is likely to increase its sentiment score. Thus, the feature's effect on the label relies on counterfactual outcomes. (a) C is the common cause of words in the input. Each word X i may be causally influence Y . (b) Y (sentiment label) and X i ("Titanic") are dependent because of the confounder C (indicated by the red path). (c) Y (sentiment label) and X i ("not") are dependent because of a causal relation.
Consider the examples in
We use Y (X i = x i ) to denote the counterfactual label of an example had we set X i to the specific value x i . 3 Definition 1 (Probability of necessity). The probability of necessity (PN) of a feature X i = x i for the label Y = y conditioned on context
X_{-i} = x_{-i} is
PN(X_i = x_i, Y = y | X_{-i} = x_{-i}) ≜ p( Y(X_i ≠ x_i) ≠ y | X_i = x_i, X_{-i} = x_{-i}, Y = y ).
Given an example (x, y), PN(x i , y | x −i ) 4 is the probability that the label y would change had we set X i to a value different from x i . The distribution of the counterfactual label Y (X i ̸ = x i ) is defined to be p(Y (X i ))p(X i | X i ̸ = x i ) dX i . This corresponds to the label distribution when we replace the word x i with a random word that fits in the context (e.g. "Titanic" to "Ip Man"). In practice, we can simulate the intervention X i ̸ = x i by text infilling using masked language models (Devlin et al., 2019). Definition 2 (Probability of sufficiency). The probability of sufficiency (PS) of a feature X i = x i for the label Y = y conditioned on the context
X_{-i} = x_{-i} is
PS(X_i = x_i, Y = y | X_{-i} = x_{-i}) ≜ p( Y(X_i = x_i) = y | X_i ≠ x_i, X_{-i} = x_{-i}, Y ≠ y ).
Similarly, PS(x i , y | x −i ) is the probability that setting X i to x i would produce the label y on an example where x i is absent. For example, PS of "not" for the negative sentiment measures the probability that a positive review will become negative had we added "not" to the input.
We note that both PN and PS are contextdependent-they measure the counterfactual outcome of individual data points. For example, while "not" has high PN for contradiction in the example in Table 1 6 To consider the average effect of a feature, we marginalize over the context X −i :
PN(x_i, y) ≜ ∫ PN(x_i, y | X_{-i}) p(X_{-i} | x_i, y) dX_{-i},
and similarly for PS.
Definition 3 (Spuriousness of a feature). The spuriousness of a feature X i = x i for a label Y = y is 1 − PS(x i , y). We say a feature is spurious to the label if its spuriousness is positive.
Our definition of the spuriousness of a feature follows directly from the definition of PS, which measures the extent to which a feature is a sufficient cause of the label (marginalized over the context X −i ). Following this definition, a feature is non-spurious only if it is sufficient in any context. Admittedly, this definition may be too strict for NLP tasks as arguably the effect of any feature can be modulated by context, making all features spurious. Therefore, practically we may consider a feature non-spurious if it has low spuriousness (i.e. high PS).
Feature categorization. The above definitions provide a framework for categorizing features by their necessity and sufficiency to the label as shown in Figure 2.
Figure 2: Feature categorization by necessity and sufficiency, with example quadrant labels Incomplete ("It's not good.") and Robust ("A great movie!").
Estimating PN and PS. Calculating PN and PS of a feature requires knowing how the label would change when the feature is removed or added to an instance. Sometimes we can reason about it with domain knowledge. Consider the feature "Titanic" in Figure 1b: it has zero PN and PS since removing or adding it would not change the label.
In more complex cases, we might need to estimate the probabilities using an experiment. For example, consider the lexical overlap between the premise and hypothesis in NLI. Given an entailment example with high word overlap, changing the overlapped words is likely to cause label change (H1-3) unless it is replaced by a synonym (H4): P: The doctor was paid by the actor. H0: The actor paid the doctor.
L0: Entailment H1: The teacher paid the doctor. L1: Neutral H2: The actor liked the doctor.
L2: Neutral H3: The actor paid the guard.
L3: Neutral H4: An actor paid the doctor.
L4: Entailment
Since a non-synonym is more likely to be sampled during intervention thus causing a label change, we conclude that word overlap has high PN to entailment. On the other hand, it is not a completely sufficient feature (i.e. spuriousness > 0) since there are plenty of examples with high lexical overlap but non-entailment labels (McCoy et al., 2019). We can partially automate this process by intervening examples using masked language models and then collecting labels for the perturbed examples. We discuss this method in more detail and provide preliminary results in Appendix A. However, we note that while PN/PS can be estimated through careful intervention, as a conceptual framework, domain knowledge often suffices to judge whether a feature has high or low PN/PS.
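When such interventions are automated, the perturbation step can be sketched with a masked language model; the snippet below is a hypothetical illustration of generating in-context replacements for a feature word (it is not the exact procedure behind the preliminary results mentioned above), and the counterfactual labels of the perturbed sentences still have to be collected afterwards.

```python
from transformers import pipeline

# Mask the candidate feature word and sample in-context replacements,
# i.e. simulate the intervention X_i != x_i from Definition 1.
fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The differential will not grow."
feature = "not"
masked = sentence.replace(feature, fill.tokenizer.mask_token, 1)

candidates = fill(masked, top_k=10)
perturbed = [c["sequence"] for c in candidates
             if c["token_str"].strip().lower() != feature]
# Comparing the (human or model) labels of `perturbed` against the original
# label then gives an estimate of the feature's probability of necessity.
```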
Experiment Setup
Before diving into the implications of our categorization of spurious features, we explain the common setup of experiments that we use to support our arguments.
Spurious features. Typical features considered in the literature (such as word overlap and negation words) fall into the high PN and low PS category. Therefore, in the following discussion, we will focus on two types of spurious features: low PN features that are irrelevant to prediction, and high PN features that are necessary but need additional context to decide the label.
Datasets. We use the following datasets that contain the spurious features. Models. For all our experiments, unless otherwise stated, we use RoBERTa-large (Liu et al., 2019) from Huggingface (Wolf et al., 2019) as the backbone model.
Training methods. Our baseline algorithm finetunes the pretrained model on the original dataset with cross-entropy loss. We also experiment with debiasing methods including Subsampling tion. For example, in Figure 1b the movie name has no causal effect on the label; if intervening on it (e.g. changing "Titanic") nevertheless incurs a prediction change, we say the model is not robust.
Is relying on spurious features always bad?
Prior work has suggested that if the model prediction relies on a single feature in any way, it is undesired (Gardner et al., 2021). However, for a high PN feature, the label and the model output should depend on it ( Figure 1c). Such dependency only becomes undesirable when other necessary features are ignored by the model (e.g. predicting negative sentiment whenever "not" is present). This can be caused by two reasons: first, the model may overly rely on a spurious feature X i due to confounding between Y and X i in the training data (e.g. "not" appears in all negative examples but not positive examples); second, even without confounding, the model may fail to learn how X i interacts with other features to affect the label (e.g. not understanding double negation).
How to evaluate models' robustness? A typical way to test models' robustness is to construct a "challenge set" that tests if perturbations of the input cause model predictions to change in an expected way. The challenge here is that the expected behavior of a model depends on the type of the spurious feature. For low PN spurious features, we can simply perturb them directly and check if the model prediction is invariant, e.g. replacing named entities with another entity of the same type (Balasubramanian et al., 2020). Performance drop on this test set then implies that the model is non-robust. However, intervention on the spurious feature only tells us if the feature is necessary, thus it cannot be used to evaluate robustness to high PN spurious features, where the model prediction is likely (and expected) to flip if we perturb the feature (e.g. replacing "not" with "also" in Figure 1c). Figure 1c. With the spurious feature ("not") fixed, to change Y we must change other features (e.g. "good" → "bad") that affect the label by interacting with "not". To make a correct prediction, the model must learn the composite feature formed by the spurious feature and the newly introduced features. As a result, its performance depends not only on the spurious feature but also on the features introduced during the perturbation.
Inconsistent results on different challenge sets.
To illustrate this problem, we evaluate models' robustness to lexical overlap on two challenge sets constructed differently: (a) HANS; (b) subsets of high lexical overlap examples in the MNLI dev set (where > 0.8 fraction of words in the hypothesis are also in the premise). Compared to (b), HANS non-entailment examples require linguistic knowledge such as understanding passive voice (e.g. "The senators were helped by the managers" does not imply "the senators helped the managers") or adverbs of probability (e.g. "Probably the artists saw the authors" does not imply "the artists saw the authors"), which are rare in MNLI.
We fine-tune pre-trained models on MNLI and report their results in Table 2. While models perform poorly on high overlap non-entailment examples from HANS, their performance is much higher on such examples from MNLI (56.2% vs 93.6%), leading to inconsistent conclusions. 8 Thus, we should be careful when interpreting the magnitude of the problem on challenge sets, as the performance drop could also be attributed to unseen features introduced during dataset construction.
Implications on Learning Methods
In this section, we discuss two common classes of methods to train robust models and their effectiveness for spurious features with high/low PN.
Decorrelating the Spurious Feature and the Label
A straightforward idea to remove undesirable correlation between the label Y and a spurious feature X i due to confounding is to balance the training data such that Y and X i are independent (Japkowicz, 2000;Austin, 2011;Li and Vasconcelos, 2019). In practice, this amounts to subsampling the dataset to balance the classes conditioned on the spurious feature (e.g. "Titanic is good/bad" are equally likely) , or upweighting examples where the spurious feature is not predictive for the label (Karimi Mahabadi et al., 2020). While these methods have shown promise for spurious features with both high and low PN, there is a key difference between the underlying mechanisms. For a low PN spurious feature, the dependence between model prediction and the feature arises from a confounder that affects both Y and X i . As shown in Figure 1b, assuming independence between the spurious feature and other features that affect the label (i.e. there is no path from X i to Y through C), 9 X i and Y are independent without confounding. Thus, enforcing the independence through data balancing matches the independence condition on the data generating distribution. As a result, the model prediction will be independent of X i and we expect its performance to be invariant across examples grouped by X i values (e.g. similar accuracy on reviews about famous vs. non-famous movies).
On the other hand, for high PN spurious features, even without confounding, X i is not independent of Y on the data generating distribution ( Figure 1c). Then why do these methods work for high PN features? Note that X i is not sufficient to decide Y alone but forms a composite feature with other features that affect the label together (e.g. a double negation construction). Therefore, within the same class, examples with different X i are likely to form different composite features. In real data, certain combinations of X i and Y (e.g. positive examples with negation) often correlate with composite features that are difficult to learn (e.g. double negation or comparison). By balancing the (X i , Y ) groups, we allow the model to learn the minority examples more effectively. However, the model performance is not necessarily invariant across groups because 9 While this is not true in general due to the complex grammar constraints in natural language, we use a simplified model for our analysis. Results. In Figure 3, we observe that for the punctuation feature (low PN), there is no large variance in performance across groups. But models have very different performances between the low and high overlap groups. Specifically, models trained on high overlap examples perform poorly on low overlap examples, in particular the entailment class, despite seeing no correlation between lexical overlap and label during training. This could happen because entailment examples within the high and low overlap groups require different features, such as lexical semantics in the high overlap group ("It stayed cold for the whole week" implies 'It stayed cold for the entire week"), and world knowledge in the low overlap group ("He lives in the northern part of Canada" implies "He stays in a cold place") (Joshi et al., 2020). The result highlights that for high PN spurious features, balancing the dataset might not be enough-we additionally need more examples (or larger models (Tu et al., 2020)) to learn the minority patterns.
Removing Spurious Features from the Representation
A different class of methods focuses on removing the spurious feature from the learned representations, e.g. iterative null-space projection (Ravfogel et al., 2020, INLP) and adversarial learning (Zhang et al., 2018). As argued in the previous section, high PN spurious features form composite features with other necessary features. Therefore, removing them also leads to the removal of the composite features, which ends up hurting performance.
Removing high PN spurious features from the representation hurts performance.
Experiments. We test our hypothesis by removing two spurious features (lexical overlap and punctuation) using INLP, a debiasing method that removes linearly encoded spurious features by iteratively projecting the learned representation. We fine-tune RoBERTa-large on subsampled datasets where the label and the spurious feature are independent. Over iterations of INLP, we measure the extractability of the spurious feature by its probing accuracy and measure the model performance by its task accuracy, where both are from linear classifiers trained on the debiased representations. Following Mendelson and Belinkov (2021), the linear classifiers are also trained and evaluated on balanced datasets. For task accuracy, we report results on the minority group (e.g. high lexical overlap examples with non-entailment label) since we find that this group is most affected by debiasing. 10
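The snippet below is a rough, illustrative sketch of the iterative nullspace-projection idea behind INLP (Ravfogel et al., 2020); the official implementation differs in several details (e.g. how probes and projections are accumulated), so the helper names and exact loop are assumptions, not the code used for these experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp_debias(X, z, n_iters=300):
    """Iteratively remove a linearly encoded feature z from representations X.

    X: (n_examples, dim) frozen representations; z: binary labels of the
    spurious feature. At every iteration we fit a linear probe for z and
    project X onto the nullspace of the probe's weight vector, so the
    direction the probe relied on is erased from the representation.
    """
    X = X.copy()
    dim = X.shape[1]
    P = np.eye(dim)                                    # accumulated projection
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(X, z)
        w = probe.coef_ / np.linalg.norm(probe.coef_)  # shape (1, dim)
        P_null = np.eye(dim) - w.T @ w                 # rank-1 nullspace projection
        P = P_null @ P
        X = X @ P_null                                 # P_null is symmetric
    return X, P
```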
Results. Figure 4 shows the results for the two spurious features. We observe that for the high PN feature (lexical overlap), when the probing accuracy drops significantly around 300 iterations (i.e. the feature is largely removed from the representation), there is a significant drop in task accuracy. In contrast, removing the low PN feature does not affect task accuracy significantly.

10 Full results are in Appendix C.
What Features does the Model Learn with Data Balancing?
We have seen that directly removing spurious features from the representation may hurt performance, whereas data balancing generally helps. Then what features do models learn from balanced data? Mendelson and Belinkov (2021) recently found that, quite counter-intuitively, it is easier to extract the spurious feature from the representation of models trained on balanced data. We argue that this occurs for high PN spurious features because they form composite features with other features, which a probe can rely on (e.g. from "not good" we can still predict the existence of "not"). In contrast, a low PN spurious feature that is not useful for prediction may become less extractable in the representation.
Data balancing does not remove high PN spurious features from the representation.
To understand the relation between a feature's correlation with the label (in the training set) and its prominence in the learned representation, we first conduct experiments on a synthetic dataset where we can control the strength of feature-label correlations precisely. Synthetic data results. We create a binary sequence classification task similar to Lovering et al. (2020), where each input is of length 10 from a vocabulary V of integers (|V | = 1k). We create spurious features with low and high PN as follows. In the first task, the label is 1 if the first two characters are identical; the spurious feature is the presence of the symbol 2 in the sequence, which has zero PN. In the second task, the label is 1 if the first two characters are identical XOR 2 is present; the spurious feature is again the presence of 2, but in this case it has high PN (since removing 2 will flip the label).
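A small sketch of how such synthetic examples can be generated is shown below; the sampling procedure is simplified (in particular, controlling the bias strength between the spurious symbol and the label is omitted), and the helper name is ours.

```python
import random

SPURIOUS = 2                       # the "spurious" symbol
VOCAB = list(range(3, 1000))       # remaining vocabulary of integers

def make_example(high_pn, rng, length=10, p_spurious=0.5):
    """Generate one sequence for the synthetic tasks described above.

    Low-PN task (high_pn=False): label = 1 iff the first two symbols are
    identical; the presence of SPURIOUS never changes the label.
    High-PN task (high_pn=True): label = 1 iff (first two symbols identical)
    XOR (SPURIOUS present), so removing the spurious symbol flips the label.
    """
    first_two_same = rng.random() < 0.5
    a = rng.choice(VOCAB)
    b = a if first_two_same else rng.choice([v for v in VOCAB if v != a])
    rest = [rng.choice(VOCAB) for _ in range(length - 2)]
    has_spurious = rng.random() < p_spurious
    if has_spurious:
        rest[rng.randrange(len(rest))] = SPURIOUS
    label = (first_two_same ^ has_spurious) if high_pn else first_two_same
    return [a, b] + rest, int(label)

# Example: 10k training sequences for the high-PN task.
rng = random.Random(0)
train = [make_example(high_pn=True, rng=rng) for _ in range(10000)]
```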
We generate a sequence of synthetic datasets with increasing bias strength by varying the correlation between the label and the spurious feature. We then train LSTM models (an embedding layer, a 1-layer LSTM, and an MLP with 1 hidden layer with tanh activation) on each dataset and measure the extractability of the spurious feature from the model's representation. Following Mendelson and Belinkov (2021), we train linear probes on balanced datasets to predict the feature from the last layer embeddings of each model. We then measure extractability using two metrics: probing accuracy and compression C based on minimum description length (Voita and Titov, 2020).11

Figure 5 plots the extractability of the spurious feature (measured by compression C) as a function of the bias strength. We observe that the extractability of the high PN spurious feature remains high across varying bias strengths, including when the spurious feature and the label are independent (bias strength = 0.5). In contrast, for the low PN spurious feature, its extractability decreases as the bias strength decreases. In other words, it becomes less prominent in the representation as its correlation with the label drops.

11 For both metrics, a higher value indicates higher extractability. See Appendix B for more details about training.
Real data results. Next, we study the effect of debiasing algorithms on the extractability of spurious features in real datasets. We evaluate the following methods: Subsampling , Product-of-Expert (POE) and Debiased Focal Loss (DFL) (Karimi Mahabadi et al., 2020), all of which explicitly or implicitly break the feature-label correlation during training. We also train using ERM on the original biased dataset as a baseline. All methods use RoBERTa-large as the backbone model. We test on the low PN spurious feature (punctuation, '!!') and the high PN spurious feature (lexical overlap) in Table 3. 12 We observe that the high PN feature, lexical overlap, is still easily extractable after debiasing. In contrast, for the low PN feature, punctuation, although its probing accuracy is high, its compression is larger in the baseline models, i.e. the feature becomes harder to extract after debiasing, which is consistent with what we observe in the synthetic case.
In sum, we show that breaking the correlation between a feature and the label (e.g. through data balancing) does not necessarily remove the feature from the learned representation. The high PN features can still be detected from the composite features on which the label depends.
Related Work
While there is a large body of work on improving model robustness to spurious correlations, the question of what spurious features are in natural language is less studied. Veitch et al. (2021) formalize spurious correlations from a causal perspective and argue that the right objective is counterfactual invariance (CI): the prediction of a model should be invariant to perturbations of the spurious feature. They also make a distinction between purely spurious and non-purely spurious correlations, which are similar to the type 1 and type 2 dependencies we defined. However, their main approach and results assume purely spurious correlations. Here, we argue that high PN features, or non-purely spurious correlations, are more common in NLP tasks, and the label is not invariant to these features. Gardner et al. (2021) consider all single features/words that correlate with the label as spurious. Under this definition, the learning algorithm should enforce a uniform distribution of the prediction conditioned on any feature, i.e. Y | X i = x i should follow a uniform distribution (termed uninformative input features, or UIF, by Eisenstein (2022)). To connect PN/PS (counterfactual quantities) with the conditional probability (an observational quantity), we must marginalize over the context. If the feature has zero PN and PS (i.e. it has no effect on the label in any context), p(Y | X i = x i ) is uniform for all x i . However, we cannot say the same for features with non-zero PN/PS.
Recently, Eisenstein (2022) used a toy example to demonstrate the disconnect between UIF and CI, showing that neither objective implies the other. Along similar lines, Schwartz and Stanovsky (2022) argued that UIF is hard to achieve in practice; further, enforcing a uniform label distribution for one feature may skew the label distribution for other features. Our work complements the two by adding more clarity to the relation between a feature and the label in NLP tasks. Additionally, we highlight that neither the CI nor the UIF principle holds for high PN spurious features, which the label depends on in the true data generating distribution.
Finally, formal notions of necessity and sufficiency from causality have also been used in the context of explanations. Mothilal et al. (2021) and Galhotra et al. (2021) use a causal framework and counterfactual examples to estimate necessity and sufficiency of explanations. Wang and Jordan (2021) used the notions to formalize the desired properties of representations-they should be nonspurious (capturing sufficient features) and efficient (every feature should be necessary). We use notions of probability of causation to formalize two different types of spurious features present in natural language.
Conclusion
In this work, we showed that all spurious features in natural language are not alike-many spurious features in NLU are necessary but not sufficient to predict the label. We further showed how this distinction makes it challenging to evaluate model robustness and to learn robust models. In particular, unlike low PN spurious features that are irrelevant to prediction, high PN features interact with other features to influence the label. Therefore, they do not have a clean relationship with the label that allows us to enforce independence or invariance during training.
Perhaps a pessimistic takeaway is that there is not much we can do about high PN spurious features. The key problem is that the model fails to learn the rare or unseen compositions of the necessary spurious feature and other features (e.g. different constructions that involve negation). That said, we believe large language models suggest promising solutions because 1) they have good representations of various constructions in natural language; 2) they can bypass the problem of dataset bias in supervised learning through few-shot in-context learning; 3) they can take additional inductive bias for the task through natural language prompting (e.g. chain-of-thought). We hope that our result will spur future work on training and evaluating spurious correlations that are more suited for spurious features arising in natural language.
Limitations
While our definition helps put spurious features into perspective, it has some limitations:
1. Our definition relies on counterfactual quantities which are not observed. Thus, actually computing PN and PS is expensive and needs a human to, at the very least, go through the perturbed examples.
2. While the definitions and categorization help interpret experiment results, they do not directly tell us what training & evaluation methods are suitable for the high PN spurious features in particular. One straightforward idea is to enforce models to match the PN and PS of features in the data generating distribution. This would require collecting counterfactual examples with control for a specific feature (as opposed to generic counterfactuals as in Kaushik et al. (2020)).
We believe that more research is needed to understand how to train models robust to spurious correlations. Both our work and Schwartz and Stanovsky (2022) argue that subsampling training data to ensure the independence between the spurious feature and the label might not work. Nevertheless, we believe that our definitions are important to put the results in perspective and make progress.
A Measuring PN
To provide a more concrete method for measuring PN of any feature, we use the following method: We use masked language models (MLMs) (Devlin et al., 2019) to intervene on the feature X i by masking and in-filling while ensuring that x ′ i ̸ = x i i.e. the replaced word is different from the original one. We can then annotate these examples (either using experts or through crowdsourcing) to check if the new label is the same. We use this method to compute PN over a small set of randomly sampled examples (20) which were annotated by the authors. We used RoBERTa-large for mask in-filling. Using this method, the estimated PN for negation features is 0.8, for lexical overlap it is 0.7 and for punctuation bias in NLI it is 0. This shows that, as expected, lexical overlap and negation features have much higher PN than punctuation. We note that while such a method is useful to estimate PN/PS, as a conceptual framework, domain knowledge often suffices to judge whether a feature has high or low PN/PS.
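To make the mask-and-in-fill step concrete, the snippet below sketches how a candidate perturbation x' i ≠ x i could be produced with a masked language model; we use HuggingFace's fill-mask pipeline as an assumed interface, and the judgment of whether the label changes is still left to human annotators.

```python
from transformers import pipeline

# Mask the feature occurrence and let the MLM propose a different in-fill.
fill = pipeline("fill-mask", model="roberta-large")

def perturb_feature(text, feature_span):
    """Return `text` with `feature_span` replaced by an MLM in-fill that differs
    from the original span, or None if no such candidate is proposed."""
    masked = text.replace(feature_span, fill.tokenizer.mask_token, 1)
    for cand in fill(masked):
        token = cand["token_str"].strip()
        if token.lower() != feature_span.lower():   # enforce x'_i != x_i
            return masked.replace(fill.tokenizer.mask_token, token)
    return None
```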
B Experimental Details
In all the experiments, the model is trained for 3 epochs, with a maximum sequence length of 128 tokens. We use a learning rate of 1e-5 with the Adam Optimizer (Kingma and Ba, 2015) with a batch size of 32. All experiments were run on a single RTX8000 GPU with run-time of < 12 hours for each experiment. We use the default train/dev split in MNLI dataset.
Probing Experiments (Section 5.2): We use a setting similar to Mendelson and Belinkov (2021), where we train linear probes on subsampled datasets in which the probing label is balanced. The probe is trained with a batch size of 64 for 50 epochs with a learning rate of 1e-3 using the Adam optimizer.
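For concreteness, a linear probe matching the hyperparameters listed above could be trained roughly as follows; the tensor layout and function name are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_probe(reps, labels, n_classes=2, epochs=50, lr=1e-3, batch_size=64):
    """Train a linear probe on frozen representations.

    reps: float tensor (n_examples, dim); labels: long tensor (n_examples,).
    The dataset is assumed to already be subsampled so the probed feature
    (the probing label) is balanced.
    """
    probe = nn.Linear(reps.size(1), n_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(reps, labels), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(probe(x), y).backward()
            opt.step()
    return probe
```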
C INLP: Extended Results
Training Details. For INLP, we use the 1024-dimensional representation of the first token from RoBERTa-Large as the representation of the input. The linear model is trained and evaluated on subsets of the dataset where the probing label is balanced.
In Figure 4 we observed that for the lexical overlap spurious correlation, the performance for the main task drops significantly on the minority examples. Here, we show that we also observe a decrease in the average performance, albeit less than that for the minority group. One potential explanation for why we observe a larger drop on the minority examples is that learning an invariant representation leads the model to solve the easier examples in the majority group (e.g. high lexical overlap examples with entailment label) at the cost of the minority examples. The performance for the main task on all dev examples for lexical overlap is shown in Figure 6. We additionally compare to the negation spurious correlation, which also has a type 2 dependency, in Figure 7: we observe that the main task accuracy remains much higher than that for lexical overlap, but eventually drops down suddenly.

Figure 6: Extractability of the spurious feature (probing accuracy) and the main task accuracy (task accuracy) as a function of iterations in INLP. The high PN feature (word-overlap) is more difficult to remove (noisier probing accuracy), and its removal is accompanied by a drop in the task accuracy.

Figure 7: Extractability of the spurious feature (probing accuracy) and the main task accuracy (task accuracy) as a function of iterations in INLP. The high PN feature (negation) is more difficult to remove (noisier probing accuracy), and its removal is accompanied by a drop in the task accuracy.
D Encoding of Spurious Feature: Extended Results
In addition to the results reported for lexical overlap and synthetic bias in NLI, we also verify the hypothesis for negation spurious correlation and evaluate Group-DRO on all spurious correlations in Table 4.
Irrelevant features: "Spielberg's new film is brilliant." → Positive; "___'s new film is brilliant." → Positive.

Figure 1: Causal models for text classification.

Figure 2: Categorization of features based on their PN and PS. Spurious features have low PS. Among them, the high PN ones are part of the features needed for prediction but they alone are not sufficient; and the low PN ones are irrelevant to prediction.

(i) Low PN spurious features: we inject synthetic bias to MNLI examples by associating a punctuation ('!!') with the neutral label. Following Dranker et al. (2021), we set bias prevalence (i.e. examples where '!!' is present) to 25% and set bias strength (i.e. percentage of examples with '!!' and the neutral label) to 90%. The dataset is created by modifying MNLI examples through adding/deleting the feature at the end of the hypothesis. (ii) High PN spurious features: we consider the negation bias (Poliak et al., 2018) and the lexical overlap bias in MNLI (Williams et al., 2018), for which we use the HANS challenge set (McCoy et al., 2019) during evaluation.

Figure 3: Model performance across groups. Left: train on high overlap examples. Right: train on examples with punctuation. We show both the in-distribution performance where the model is tested on the same group as training, and the out-of-distribution performance where the model is tested on the unseen group. Performance is invariant to groups if the feature has low PN (right) but has large variation if the feature has high PN (left).

Figure 4: Extractability (probing accuracy) of the spurious feature (shown in dashed lines) and the task accuracy (shown in solid lines) as a function of iterations in INLP. For high PN features (word-overlap), its removal (decreasing probing accuracy) is accompanied by large drop in the task accuracy.

Figure 5: Extractability (compression) of the spurious feature as a function of bias strength on the synthetic data. The high PN feature is easily extractable regardless of its correlation with the label, whereas the low PN feature becomes less extractable when the bias strength drops.
, there are examples where it has low PN.5 Similarly, there can be examples where the word "Titanic" has high PN.

, Product-of-Expert (POE) and Debiased Focal Loss (DFL) (Karimi Mahabadi et al., 2020) for comparison. Hyperparameters and training details can be found in Appendix B.7

4 Implications on Model Robustness

Under the causal framework, we say a model is non-robust if it fails on the interventional distribution.
Models | HANS (Ent/Non-ent) | ∆ | MNLI subsets (Ent/Non-ent) | ∆
BERT-base | 99.2/12.9 | 86.3 | 96.4/82.5 | 13.9
RoBERTa-large | 99.9/56.2 | 43.7 | 97.1/93.6 | 3.5

Table 2: Results on two challenge sets for lexical overlap. Both indicate significantly different extent to which the models rely on the spurious correlation.
For high PN spurious features like negation words, we instead want to test if they are sufficient for the model prediction. An alternate method is to create two sets of examples with the same spurious feature but different labels. For example, HANS (McCoy et al., 2019) consists of entailment and non-entailment examples, both having complete lexical overlap; this tests if high word overlap alone is sufficient to produce an entailment prediction. However, this process inevitably introduces a new variable. Consider the causal graph in
Data balancing leads to invariance to low PN spurious features but not high PN ones.

Experiments. We create balanced datasets for two spurious features in MNLI: (a) punctuation, where examples are grouped by whether they end with '!!' as described in Section 4; and (b) lexical overlap, where examples are grouped by lexical overlap ('high overlap' if more than 0.8 fraction of the words in the hypothesis are also in the premise, and 'low overlap' if less than 0.2). For both groups, we subsample the training set such that the label distribution is uniform in each group. To test models' invariance to groups, we train RoBERTa-large on one group and test on the other, e.g. training only on high-overlap examples and evaluating on low-overlap examples; a model that is invariant to the spurious feature should generalize equally well to both groups.
Table 3: Extractability of the spurious feature for various robust training methods. Blue denotes an increase whereas red denotes a decrease in extractability from the baseline. For high PN spurious features (lexical overlap), the feature is as easy if not easier to extract after debiasing, as compared to the baseline, in contrast to the low PN feature (punctuation).
Method | Negation-bias (C / Acc.) | Word-overlap bias (C / Acc.) | Synthetic-NLI (C / Acc.)
Baseline | 2.6 / 86.7 | 3.5 / 90.5 | 47.6 / 100
Subsampling (Sagawa et al., 2020) | 2.6 / 87.8 | 3.6 / 91.5 | 10.2 / 97.7
POE (Karimi Mahabadi et al., 2020) | 2.8 / 88.9 | 4.2 / 91.3 | 42.9 / 99.9
DFL (Karimi Mahabadi et al., 2020) | 2.9 / 89.2 | 3.9 / 88.3 | 48.5 / 100
Group-DRO (Sagawa* et al., 2020) | 2.8 / 89.8 | 4.7 / 91.5 | 14.7 / 100

Table 4: Extractability of the spurious feature for various robust training methods. In general, the representation is more invariant to the feature if it has low PN (synthetic NLI) than if it has high PN (negation and word-overlap bias).
For illustration purposes, we assume that each feature is a word in the input text. However, the same model and analysis apply to cases where Xi denote a more complex feature (e.g. named entities or text length) extracted from the input.
The two types of dependencies are also discussed in Veitch et al. (2021), where the type 1 dependence is called "purely spurious".
The counterfactual label Y(X i = x i) is also commonly written as Y_{x_i} (Pearl, 2009), but we follow the notation in Wang and Jordan (2021).
4 For notational simplicity, we omit the random variables (denoted by capital letters) when clear from the context.
Consider the premise "The woman was happy" and the hypothesis "The woman angrily remarked 'This will not work!'".
6 For example, in sentiment analysis, consider 'This movie was on a similar level as Titanic'.
Our code can be found at https://github.com/joshinh/spurious-correlations-nlp
Large variance in performance across different subcases of non-entailment examples as reported in McCoy et al. (2019) is another example of the unreliability.
The results for negation bias can be found in Appendix D.
AcknowledgementsWe thank Sameer Singh, Nicholas Lourie, Vishakh Padmakumar, Richard Pang, Chen Zhao, and Saranya Venkatraman for discussion and feedback on the work. We thank Yixin Wang for pointing out an error in our initial causal model. NJ is supported by an NSF Graduate Research Fellowship under grant number 1839302. This work is partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: From Pattern Recognition to AI) and a gift from AWS AI.
An introduction to propensity score methods for reducing the effects of confounding in observational studies. C Peter, Austin, Multivariate Behavioral Research. 46Peter C Austin. 2011. An introduction to propensity score methods for reducing the effects of confound- ing in observational studies. Multivariate Behavioral Research, 46:399-424.
What's in a name? are BERT named entity representations just as good for any other name?. Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, Sunita Sarawagi, 10.18653/v1/2020.repl4nlp-1.24Proceedings of the 5th Workshop on Representation Learning for NLP. the 5th Workshop on Representation Learning for NLPOnline. Association for Computational LinguisticsSriram Balasubramanian, Naman Jain, Gaurav Jin- dal, Abhijeet Awasthi, and Sunita Sarawagi. 2020. What's in a name? are BERT named entity represen- tations just as good for any other name? In Proceed- ings of the 5th Workshop on Representation Learning for NLP, pages 205-214, Online. Association for Computational Linguistics.
Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. Christopher Clark, Mark Yatskar, Luke Zettlemoyer, 10.18653/v1/D19-1418Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsChristopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4069-4082, Hong Kong, China. Association for Computational Linguistics.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Irm-when it works and when it doesn't: A test case of natural language inference. Yana Dranker, He He, Yonatan Belinkov, Advances in Neural Information Processing Systems. Curran Associates, Inc34Yana Dranker, He He, and Yonatan Belinkov. 2021. Irm-when it works and when it doesn't: A test case of natural language inference. In Advances in Neural Information Processing Systems, volume 34, pages 18212-18224. Curran Associates, Inc.
Uninformative input features and counterfactual invariance: Two perspectives on spurious correlations in natural language. Jacob Eisenstein, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJacob Eisenstein. 2022. Uninformative input features and counterfactual invariance: Two perspectives on spurious correlations in natural language. In Proceed- ings of the 2022 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies.
Explaining black-box algorithms using probabilistic contrastive counterfactuals. Sainyam Galhotra, Romila Pradhan, Babak Salimi, Proceedings of the 2021 International Conference on Management of Data. the 2021 International Conference on Management of DataSainyam Galhotra, Romila Pradhan, and Babak Salimi. 2021. Explaining black-box algorithms using prob- abilistic contrastive counterfactuals. Proceedings of the 2021 International Conference on Management of Data.
Competency problems: On finding and removing artifacts in language data. Matt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, Noah A Smith, 10.18653/v1/2021.emnlp-main.135Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational LinguisticsMatt Gardner, William Merrill, Jesse Dodge, Matthew Peters, Alexis Ross, Sameer Singh, and Noah A. Smith. 2021. Competency problems: On finding and removing artifacts in language data. In Proceedings of the 2021 Conference on Empirical Methods in Nat- ural Language Processing, pages 1801-1813, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Annotation artifacts in natural language inference data. Swabha Suchin Gururangan, Omer Swayamdipta, Roy Levy, Samuel Schwartz, Noah A Bowman, Smith, 10.18653/v1/N18-2017Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana2Short Papers. Association for Computational LinguisticsSuchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language infer- ence data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computa- tional Linguistics.
Unlearn dataset bias in natural language inference by fitting the residual. He He, Sheng Zha, Haohan Wang, 10.18653/v1/D19-6115Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP. the 2nd Workshop on Deep Learning Approaches for Low-Resource NLPHong Kong, ChinaAssociation for Computational LinguisticsHe He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 132-142, Hong Kong, China. Association for Computational Linguistics.
The class imbalance problem: Significance and strategies. Nathalie Japkowicz, Nathalie Japkowicz. 2000. The class imbalance prob- lem: Significance and strategies.
TaxiNLI: Taking a ride up the NLU hill. Pratik Joshi, Somak Aditya, Aalok Sathe, Monojit Choudhury, 10.18653/v1/2020.conll-1.4Proceedings of the 24th Conference on Computational Natural Language Learning. the 24th Conference on Computational Natural Language LearningPratik Joshi, Somak Aditya, Aalok Sathe, and Monojit Choudhury. 2020. TaxiNLI: Taking a ride up the NLU hill. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 41-55, Online. Association for Computational Lin- guistics.
End-to-end bias mitigation by modelling biases in corpora. Yonatan Rabeeh Karimi Mahabadi, James Belinkov, Henderson, 10.18653/v1/2020.acl-main.769Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsRabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitigation by modelling biases in corpora. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 8706-8716, Online. Asso- ciation for Computational Linguistics.
Learning the difference that makes a difference with counterfactually augmented data. Divyansh Kaushik, Eduard Hovy, Zachary C Lipton, International Conference on Learning Representations. ICLRDivyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2020. Learning the difference that makes a difference with counterfactually augmented data. International Conference on Learning Representations (ICLR).
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, abs/1412.6980CoRRDiederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Repair: Removing representation bias by dataset resampling. Yi Li, Nuno Vasconcelos, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. the IEEE/CVF conference on computer vision and pattern recognitionYi Li and Nuno Vasconcelos. 2019. Repair: Removing representation bias by dataset resampling. In Pro- ceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9572-9581.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, abs/1907.11692Roberta: A robustly optimized bert pretraining approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. ArXiv, abs/1907.11692.
Predicting inductive biases of pretrained models. Charles Lovering, Rohan Jha, Tal Linzen, Ellie Pavlick, International Conference on Learning Representations. Charles Lovering, Rohan Jha, Tal Linzen, and Ellie Pavlick. 2020. Predicting inductive biases of pre- trained models. In International Conference on Learning Representations.
Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. Tom Mccoy, Ellie Pavlick, Tal Linzen, 10.18653/v1/P19-1334Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsTom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuris- tics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.
Debiasing methods in natural language understanding make bias more accessible. Michael Mendelson, Yonatan Belinkov, 10.18653/v1/2021.emnlp-main.116Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational LinguisticsMichael Mendelson and Yonatan Belinkov. 2021. De- biasing methods in natural language understanding make bias more accessible. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1545-1557, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Towards unifying feature attribution and counterfactual explanations: Different means to the same end. Divyat Ramaravind Kommiya Mothilal, Chenhao Mahajan, Amit Tan, Sharma, Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. the 2021 AAAI/ACM Conference on AI, Ethics, and SocietyRamaravind Kommiya Mothilal, Divyat Mahajan, Chen- hao Tan, and Amit Sharma. 2021. Towards unifying feature attribution and counterfactual explanations: Different means to the same end. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
Probabilities of causation: Three counterfactual interpretations and their identification. Judea Pearl, 10.1023/A:1005233831499Synthese. 1211-2Judea Pearl. 1999. Probabilities of causation: Three counterfactual interpretations and their identification. Synthese, 121(1-2):93-149.
Causality: Models, Reasoning and Inference. Judea Pearl, Cambridge University PressUSA2nd editionJudea Pearl. 2009. Causality: Models, Reasoning and Inference, 2nd edition. Cambridge University Press, USA.
Hypothesis only baselines in natural language inference. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme, 10.18653/v1/S18-2023Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. the Seventh Joint Conference on Lexical and Computational SemanticsNew Orleans, LouisianaAssociation for Computational LinguisticsAdam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- ence. In Proceedings of the Seventh Joint Confer- ence on Lexical and Computational Semantics, pages 180-191, New Orleans, Louisiana. Association for Computational Linguistics.
Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. Aahlad Manas Puli, H Lily, Eric Karl Zhang, Rajesh Oermann, Ranganath, International Conference on Learning Representations. Aahlad Manas Puli, Lily H Zhang, Eric Karl Oermann, and Rajesh Ranganath. 2022. Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. In International Conference on Learning Representations.
Language models are unsupervised multitask learners. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
SQuAD: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, 10.18653/v1/D16-1264Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Null it out: Guarding protected attributes by iterative nullspace projection. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg, 10.18653/v1/2020.acl-main.647Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsShauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guard- ing protected attributes by iterative nullspace projec- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, Online. Association for Computational Linguistics.
Distributionally robust neural networks. Shiori Sagawa, * , Pang Wei Koh, * , Tatsunori B Hashimoto, Percy Liang, International Conference on Learning Representations. Shiori Sagawa*, Pang Wei Koh*, Tatsunori B. Hashimoto, and Percy Liang. 2020. Distributionally robust neural networks. In International Conference on Learning Representations.
An investigation of why overparameterization exacerbates spurious correlations. Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang, PMLRProceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning119Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. 2020. An investigation of why over- parameterization exacerbates spurious correlations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 8346-8356. PMLR.
On the limitations of dataset balancing: The lost battle against spurious correlations. Roy Schwartz, Gabriel Stanovsky, Findings of NAACL. Roy Schwartz and Gabriel Stanovsky. 2022. On the lim- itations of dataset balancing: The lost battle against spurious correlations. In Findings of NAACL.
Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Lifu Tu, Garima Lalwani, 10.1162/tacl_a_00335Transactions of the Association for Computational Linguistics. 8Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spuri- ous correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633.
Counterfactual invariance to spurious correlations in text classification. Victor Veitch, Steve Alexander D'amour, Jacob Yadlowsky, Eisenstein, Advances in Neural Information Processing Systems. Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. 2021. Counterfactual invari- ance to spurious correlations in text classification. In Advances in Neural Information Processing Systems.
Information-theoretic probing with minimum description length. Elena Voita, Ivan Titov, 10.18653/v1/2020.emnlp-main.14Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsElena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Pro- ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computa- tional Linguistics.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman, 10.18653/v1/W18-5446Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLPBrussels, BelgiumAssociation for Computational LinguisticsAlex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for nat- ural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Com- putational Linguistics.
Desiderata for representation learning: A causal perspective. Yixin Wang, Michael I Jordan, 10.48550/ARXIV.2109.03795Neural Information Processing Systems (NeurIPS) Workshop on Causal Inference & Machine Learning: Why now? arXiv. Yixin Wang and Michael I. Jordan. 2021. Desiderata for representation learning: A causal perspective. In Neural Information Processing Systems (NeurIPS) Workshop on Causal Inference & Machine Learning: Why now? arXiv.
Identifying spurious correlations for robust text classification. Zhao Wang, Aron Culotta, 10.18653/v1/2020.findings-emnlp.308Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsZhao Wang and Aron Culotta. 2020. Identifying spu- rious correlations for robust text classification. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 3431-3440, Online. Association for Computational Linguistics.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel Bowman, 10.18653/v1/N18-1101Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaLong Papers1Association for Computational LinguisticsAdina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.
Huggingface's transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Jamie Brew, abs/1910.03771ArXiv. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Mitigating unwanted biases with adversarial learning. Brian Hu Zhang, Blake Lemoine, Margaret Mitchell, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. the 2018 AAAI/ACM Conference on AI, Ethics, and SocietyBrian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335- 340.
| [] |
[
"HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization",
"HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization"
] | [
"Ye Liu \nUniversity of Illinois at Chicago\nChicagoILUSA\n",
"Jian-Guo Zhang \nUniversity of Illinois at Chicago\nChicagoILUSA\n",
"Yao Wan wanyao@hust.edu.cn \nHuazhong University of Science and Technology\nHuhanChina\n",
"Congying Xia \nUniversity of Illinois at Chicago\nChicagoILUSA\n",
"Lifang He \nLehigh University\nBethlehemPAUSA\n",
"Philip S Yu psyu@@uic.edu \nUniversity of Illinois at Chicago\nChicagoILUSA\n"
] | [
"University of Illinois at Chicago\nChicagoILUSA",
"University of Illinois at Chicago\nChicagoILUSA",
"Huazhong University of Science and Technology\nHuhanChina",
"University of Illinois at Chicago\nChicagoILUSA",
"Lehigh University\nBethlehemPAUSA",
"University of Illinois at Chicago\nChicagoILUSA"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | To capture the semantic graph structure from raw text, most existing summarization approaches are built on GNNs with a pre-trained model. However, these methods suffer from cumbersome procedures and inefficient computations for long-text documents. To mitigate these issues, this paper proposes HET-FORMER, a Transformer-based pre-trained model with multi-granularity sparse attentions for long-text extractive summarization. Specifically, we model different types of semantic nodes in raw text as a potential heterogeneous graph and directly learn heterogeneous relationships (edges) among nodes by Transformer. Extensive experiments on both single-and multi-document summarization tasks show that HETFORMER achieves stateof-the-art performance in Rouge F1 while using less memory and fewer parameters. | 10.18653/v1/2021.emnlp-main.13 | [
"https://www.aclanthology.org/2021.emnlp-main.13.pdf"
] | 238,744,469 | 2110.06388 | 01eb050eaefd75e8d40443344ce9bd791abc898a |
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization
Ye Liu
University of Illinois at Chicago
ChicagoILUSA
Jian-Guo Zhang
University of Illinois at Chicago
ChicagoILUSA
Yao Wan wanyao@hust.edu.cn
Huazhong University of Science and Technology
HuhanChina
Congying Xia
University of Illinois at Chicago
ChicagoILUSA
Lifang He
Lehigh University
BethlehemPAUSA
Philip S Yu psyu@uic.edu
University of Illinois at Chicago
ChicagoILUSA
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
To capture the semantic graph structure from raw text, most existing summarization approaches are built on GNNs with a pre-trained model. However, these methods suffer from cumbersome procedures and inefficient computations for long-text documents. To mitigate these issues, this paper proposes HETFORMER, a Transformer-based pre-trained model with multi-granularity sparse attentions for long-text extractive summarization. Specifically, we model different types of semantic nodes in raw text as a potential heterogeneous graph and directly learn heterogeneous relationships (edges) among nodes by Transformer. Extensive experiments on both single- and multi-document summarization tasks show that HETFORMER achieves state-of-the-art performance in Rouge F1 while using less memory and fewer parameters.
Introduction
Recent years have seen resounding success in the use of graph neural networks (GNNs) on document summarization tasks (Hanqi Jin, 2020), due to their ability to capture inter-sentence relationships in complex documents. Since GNNs require node features and graph structure as input, various methods, including extraction and abstraction (Li et al., 2020; Huang et al., 2020; Jia et al., 2020), have been proposed for learning desirable node representations from raw text. In particular, they have shown that Transformer-based pre-trained models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) offer an effective way to initialize and fine-tune the node representations as the input of the GNN.
Despite great success in combining Transformerbased pre-trained models with GNNs, all existing approaches have their limitations. The first limitation lies in the adaptation capability to long-text input. Most pre-trained methods truncate longer documents into a small fixed-length sequence (e.g., n = 512 tokens), as its attention mechanism requires a quadratic cost w.r.t. sequence length. This would lead to serious information loss (Li et al., 2020;Huang et al., 2020). The second limitation is that they use pre-trained models as a multilayer feature extractor to learn better node features and build multi-layer GNNs on top of extracted features, which have cumbersome networks and tremendous parameters (Jia et al., 2020).
Recently there have been several works focusing on reducing the computational overhead of fullyconnected attention in Transformers. Especially, ETC (Ravula et al., 2020) and Longformer (Beltagy et al., 2020) proposed to use local-global sparse attention in pre-trained models to limit each token to attend to a subset of the other tokens (Child et al., 2019), which achieves a linear computational cost of the sequence length. Although these methods have considered using local and global attentions to preserve hierarchical structure information contained in raw text data, their abilities are still not enough to capture multi-level granularities of semantics in complex text summarization scenarios.
In this work, we propose HETFORMER, a HETerogeneous transFORMER-based pre-trained model for long-text extractive summarization using multi-granularity sparse attentions. Specifically, we treat tokens, entities, sentences as different types of nodes and the multiple sparse masks as different types of edges to represent the relations (e.g., token-to-token, token-to-sentence), which can preserve the graph structure of the document even with the raw textual input. Moreover, our approach will eschew GNN and instead rely entirely on a sparse attention mechanism to draw heterogeneous graph structural dependencies between input tokens.
The main contributions of the paper are summarized as follows: 1) we propose a new structured pre-trained method to capture the heterogeneous structure of documents using sparse attention; 2) we extend the pre-trained method to longer text extractive summarization instead of truncating the document to small inputs; 3) we empirically demonstrate that our approach achieves state-of-the-art performance on both single-and multi-document extractive summarization tasks.
HETFORMER on Summarization
HETFORMER aims to learn a heterogeneous Transformer within a pre-trained model for text summarization.
To be specific, we model different types of semantic nodes in raw text as a potential heterogeneous graph, and explore multi-granularity sparse attention patterns in Transformer to directly capture heterogeneous relationships among nodes. The node representations will be interactively updated in a fine-tuned manner, and finally, the sentence node representations are used to predict the labels for extractive text summarization.
Node Construction
In order to accommodate multiple granularities of semantics, we consider three types of nodes: token, sentence and entity.
The token node represents the original textual item and is used to store token-level information. Different from HSG, which aggregates identical tokens into one node, we keep each token occurrence as a separate node to avoid ambiguity and confusion across different contexts. Each sentence node corresponds to one sentence and represents the global information of that sentence. Specifically, we insert an external [CLS] token at the start of each sentence and use it to encode features of the tokens in the sentence. We also use interval segment embeddings to distinguish multiple sentences within a document, and position embeddings to indicate the monotonically increasing token positions within the same sentence. The entity node represents a named entity associated with the topic. The same entity may appear in multiple spans in the document. We utilize NeuralCoref 1 to obtain the coreference resolution of each entity, which can be used to determine whether two expressions (or "mentions") refer to the same entity.
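As a rough illustration of this preprocessing, the sketch below inserts a sentence node in front of every sentence and collects entity mention spans with NeuralCoref; the spaCy model name and the helper function are our assumptions, and the real pipeline would additionally apply the subword tokenizer plus the segment and position embeddings described above.

```python
import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)        # exposes doc._.coref_clusters

def build_nodes(document, cls_token="[CLS]"):
    """Return (tokens with a [CLS] sentence node before each sentence,
    character spans of coreferent entity mentions grouped by cluster)."""
    doc = nlp(document)
    tokens = []
    for sent in doc.sents:
        tokens.append(cls_token)                      # sentence node
        tokens.extend(tok.text for tok in sent)
    clusters = [
        [(m.start_char, m.end_char) for m in cluster.mentions]
        for cluster in (doc._.coref_clusters or [])
    ]
    return tokens, clusters
```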
Sparse Attention Patterns
Our goal is to model different types of relationships (edges) among nodes, so as to achieve a sparse graph-like structure directly. To this end, we leverage multi-granularity sparse attention mechanisms in Transformer, by considering five attention patterns, as shown in Fig. 1: token-to-token (t2t), token-to-sentence (t2s), sentence-to-token (s2t), sentence-to-sentence (s2s) and entity-to-entity (e2e).
Specifically, we use a fixed-size window attention surrounding each token ( Fig. 1(a)) to capture the short-term t2t dependence of the context. Even if each window captures the short-term dependence, by using multiple stacked layers of such windowed attention, it could result in a large receptive field (Beltagy et al., 2020). Because the top layers have access to all input locations and have the capacity to build representations that incorporate information across the entire input.
The t2s represents the attention of all tokens connecting to the sentence nodes, and conversely, s2t is the attention of sentence nodes connecting to all tokens across the sentence (the dark blue lines in Fig. 1(b)). The s2s is the attention between multiple sentence nodes (the light blue squares in Fig. 1(b)). To compensate for the limitation of t2t caused by using fixed-size window, we allow the sentence nodes to have unrestricted attentions for all these three types. Thus tokens that are arbitrarily far apart in the long-text input can transfer information to each other through the sentence nodes.
Complex topics related to the same entity may span multiple sentences, making it challenging for existing sequential models to fully capture the semantics among entities. To solve this problem, we introduce the e2e attention pattern (Fig. 1(c)). The intuition is that if there are several mentions of a particular entity, all the pairs of the same mentions are connected. In this way, we can facilitate the connections of relevant entities and preserve global context, e.g., entity interactions and topic flows.
Linear Projections for Sparse Attention. In order to ensure the sparsity of attention, we create three binary masks for each attention patterns M t2t , M ts and M e2e , where 0 means disconnection and 1 means connection between pairs of nodes. In particular, M ts is used jointly for s2s, t2s and s2t. We use different projection parameters for each attention pattern in order to model the heterogeneity of relationships across nodes. To do so, we first calculate each attention with its respective mask and then sum up these three attentions together as the final integrated attention ( Fig. 1(d)).
Each sparse attention is calculated as:
A_m = softmax(Q_m K_m^T / √d_k) V_m,   m ∈ {t2t, ts, e2e}.
The query Q_m is calculated as (M_m ⊙ X) W_Q^m, where X is the input text embedding, ⊙ represents the element-wise product, and W_Q^m is the projection parameter. The key K_m and the value V_m are calculated in a similar way as Q_m, but with different projection parameters, which helps learn better representations for heterogeneous semantics. The expensive operation of fully-connected attention is QK^T, as its computational cost is quadratic in the sequence length (Kitaev et al., 2020). In HETFORMER, we follow the implementation of Longformer and only calculate and store attention at the positions where the mask value is 1, which results in a linear increase in memory use compared to the quadratic increase for fully-connected attention.
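The snippet below is a simplified, dense-mask emulation of this multi-pattern attention (per-pattern projections, masking, softmax, and summation); it ignores multi-head splitting and the memory-efficient sparse kernels, so it should be read as a sketch of the computation rather than the actual HETFORMER implementation.

```python
import torch
import torch.nn as nn

class MultiPatternSparseAttention(nn.Module):
    """Sum of masked attentions, one per pattern, each with its own projections."""

    def __init__(self, d_model, d_k, patterns=("t2t", "ts", "e2e")):
        super().__init__()
        self.d_k = d_k
        self.proj = nn.ModuleDict({
            m: nn.ModuleDict({
                "q": nn.Linear(d_model, d_k),
                "k": nn.Linear(d_model, d_k),
                "v": nn.Linear(d_model, d_k),
            }) for m in patterns
        })

    def forward(self, x, masks):
        # x: (batch, seq, d_model); masks[m]: (seq, seq) binary, 1 = connected.
        out = 0
        for m, p in self.proj.items():
            q, k, v = p["q"](x), p["k"](x), p["v"](x)
            scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5
            scores = scores.masked_fill(masks[m] == 0, float("-inf"))
            attn = torch.nan_to_num(torch.softmax(scores, dim=-1))  # rows with no allowed key -> 0
            out = out + attn @ v
        return out
```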
Sentence Extraction
As extractive summarization is more general and widely used, we build a classifier on each sentence node representation o s to select sentences from the last layer of HETFORMER. The classifier uses a linear projection layer with the activation function to get the prediction score for each sentence:
ỹ s = σ (o s W o + b o ),
where σ is the sigmoid function, W o and b o are parameters of projection layer.
In the training stage, these prediction scores are learned with a binary cross-entropy loss against the gold labels y. In the inference stage, the scores are used to sort the sentences and select the top-k as the extracted summary.
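A minimal sketch of this scorer and of the top-k selection at inference time is given below; the class and function names are ours, and the sentence-node representations are assumed to come from the last layer of the encoder as described above.

```python
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """Linear layer + sigmoid over sentence-node representations."""

    def __init__(self, d_model):
        super().__init__()
        self.w_o = nn.Linear(d_model, 1)

    def forward(self, sent_reprs):                      # (num_sents, d_model)
        return torch.sigmoid(self.w_o(sent_reprs)).squeeze(-1)

def bce_loss(scores, gold_labels):
    """Binary cross-entropy against the gold extractive labels."""
    return nn.functional.binary_cross_entropy(scores, gold_labels.float())

def extract_summary(model, sent_reprs, sentences, k=3):
    """Inference: rank sentences by predicted score and return the top-k."""
    with torch.no_grad():
        scores = model(sent_reprs)
    top = torch.topk(scores, k=min(k, len(sentences))).indices.tolist()
    return [sentences[i] for i in sorted(top)]          # keep original order
```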
Extension to Multi-Document
Our framework can establish document-level relationships in the same way as sentence-level ones, by simply adding document nodes for multiple documents (i.e., adding a [CLS] token in front of each document) and calculating the document↔sentence (d2s, s2d), document↔token (d2t, t2d) and document-to-document (d2d) attention patterns. Therefore, it can be easily adapted from single-document to multi-document summarization.
Discussions
The most relevant approaches to this work are Longformer (Beltagy et al., 2020) and ETC (Ravula et al., 2020), which use a hierarchical attention pattern to scale Transformers to long documents. Compared to these two methods, we formulate the Transformer as multi-granularity graph attention patterns, which can better encode heterogeneous node types and different edge connections. More specifically, Longformer treats the input sequence as one sentence with single tokens marked as global. In contrast, we consider the input sequence as multi-sentence units by using sentence-to-sentence attention, which is able to capture the inter-sentence relationships in a complex document. Additionally, we introduce the entity-to-entity attention pattern to facilitate the connection of relevant subjects and preserve global context, which is ignored in both Longformer and ETC. Moreover, our model can be extended to the multi-document setting more flexibly.
Experiments
Datasets
CNN/DailyMail is the most widely used benchmark dataset for single-document summarization (Zhang et al., 2019; Jia et al., 2020). The standard dataset split contains 287,227/13,368/11,490 samples for train/validation/test. To be comparable with other baselines, we follow the data processing in (Liu and Lapata, 2019b; See et al., 2017).
Multi-News is a large-scale dataset for multi-document summarization introduced in (Fabbri et al., 2019), where each sample is composed of 2-10 documents and a corresponding human-written summary. Following Fabbri et al. (2019), we split the dataset into 44,972/5,622/5,622 for train/validation/test. The average lengths of the source documents and output summaries are 2,103.5 tokens and 263.7 tokens, respectively. Given the N input documents, we take the first L/N tokens from each source document and then concatenate the truncated source documents into one sequence in the original order, as sketched below. Due to memory limitations, we truncate the input length L to 1,024 tokens; if the memory capacity allows, our model can process a maximum input length of 4,096.
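A simple sketch of this truncation and concatenation step; the whitespace tokenizer is a stand-in for the real subword tokenizer:

    def truncate_and_concat(documents, max_len=1024):
        # take the first L/N tokens from each of the N source documents,
        # then concatenate the truncated documents in their original order
        per_doc = max_len // max(len(documents), 1)
        tokens = []
        for doc in documents:
            tokens.extend(doc.split()[:per_doc])
        return tokens[:max_len]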
While the dataset contains abstractive gold summaries, it is not readily suited to training extractive models, so we follow the work of Zhou et al. (2018) on extractive summary labeling, constructing gold-label sequences by greedily optimizing R-2 F1 against the gold-standard summary.
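The greedy labeling procedure can be sketched as follows. The rouge_2_f1 scoring function is assumed to be supplied externally (any standard ROUGE implementation will do); the loop mirrors the usual oracle-construction recipe of adding the sentence that most improves ROUGE-2 F1 against the gold abstract, stopping when no sentence helps.

    def greedy_oracle(sentences, gold_summary, rouge_2_f1, max_sents=None):
        selected, best = [], 0.0
        while max_sents is None or len(selected) < max_sents:
            gains = []
            for i, sent in enumerate(sentences):
                if i in selected:
                    continue
                cand = " ".join(sentences[j] for j in selected + [i])
                gains.append((rouge_2_f1(cand, gold_summary), i))
            if not gains:
                break
            score, idx = max(gains)
            if score <= best:
                break                     # no sentence improves ROUGE-2 F1 any further
            best, selected = score, selected + [idx]
        # binary gold labels for extractive training
        return [1 if i in selected else 0 for i in range(len(sentences))]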
Baselines and Metrics
We compare our proposed model with pre-trained language models (Devlin et al., 2018; Liu et al., 2019), state-of-the-art GNN-based pre-trained language models (Jia et al., 2020; Hanqi Jin, 2020), and pre-trained language models with sparse attention (Narayan et al., 2020; Beltagy et al., 2020). Please see Appendix B for details.
We use the unigram, bigram, and longest common subsequence variants of Rouge F1 (denoted as R-1, R-2 and R-L) (Lin and Och, 2004) 2 to evaluate summarization quality. Note that the experimental results of the baselines are taken from the original papers.
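For example, with the pypi rouge package referenced in the footnotes, the three metrics can be computed roughly as follows; the exact API may differ across package versions, so treat this as an assumption.

    from rouge import Rouge

    def rouge_scores(hypothesis, reference):
        # get_scores returns a list with one dict of rouge-1/rouge-2/rouge-l results
        scores = Rouge().get_scores(hypothesis, reference)[0]
        return {"R-1": scores["rouge-1"]["f"],
                "R-2": scores["rouge-2"]["f"],
                "R-L": scores["rouge-l"]["f"]}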
Implementation Details
Our model HETFORMER 3 is initialized using the Longformer pretrained checkpoint longformer-base-4096 4 , which was further pretrained with the standard masked language modeling task on top of the RoBERTa checkpoint roberta-base 5 using documents of maximum length 4,096. We apply dropout with probability 0.1 before all linear layers in our models. The proposed model follows the Longformer-base architecture, where the number of hidden units d_model is set to 768, the hidden size d_h is 64, the number of layers is 12 and the number of heads is 12. We train our model for 500K steps on a Titan RTX GPU with 24GB memory, with gradient accumulation every two steps and the Adam optimizer. The learning rate schedule follows the warm-up strategy over the first 10,000 steps (Vaswani et al., 2017). We select the top-3 checkpoints according to the evaluation loss on the validation set and report the averaged results on the test set. At test time, we select the top-3 sentences for CNN/DailyMail and the top-9 for Multi-News, according to the average length of their human-written summaries. Trigram blocking is used to reduce repetition.
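Trigram blocking, mentioned above, is typically implemented by skipping any candidate sentence that shares a trigram with the summary selected so far; a sketch (function names are ours):

    def _trigrams(tokens):
        return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

    def select_with_trigram_blocking(ranked_sentences, k):
        # ranked_sentences: sentences sorted by predicted score (highest first)
        selected, seen = [], set()
        for sent in ranked_sentences:
            tri = _trigrams(sent.split())
            if tri & seen:
                continue          # block sentences that repeat an existing trigram
            selected.append(sent)
            seen |= tri
            if len(selected) == k:
                break
        return selected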
Summarization Results
As shown in Table 1, our approach outperforms or is on par with current state-of-the-art baselines. Longformer and ETC outperform HiBERT, a hierarchical model based on fully-connected attention, which shows the superiority of sparse attention through capturing more relations (e.g., token-to-sentence and sentence-to-token). Compared to the pre-trained models using sparse attention, HETFORMER, which considers the heterogeneous graph structure of the text input, outperforms Longformer and ETC. Moreover, HETFORMER achieves competitive performance compared with GNN-based models such as HSG and HAHsum. Our model performs slightly below HAHsum_large, but HAHsum_large uses the large architecture (24 layers with about 400M parameters), while our model builds on the base model (12 layers with about 170M parameters). Table 2 shows the results of multi-document summarization. Our model outperforms all the extractive and abstractive baselines. These results reveal the importance of modeling the longer document to avoid serious information loss.
Memory Cost
Compared with the self-attention component that requires quadratic memory complexity in the original Transformer, the proposed model only calculates attention at positions where the attention-pattern mask equals 1, which can significantly reduce the memory cost. To verify this, we show the memory costs of the BERT, RoBERTa, Longformer and HETFORMER base-version models on the CNN/DailyMail dataset with the same configuration (input length = 512, batch size = 1). From the results in Table 3, we can see that HETFORMER takes only 55.9% of the memory cost of the RoBERTa model and does not take much more memory than Longformer.
Ablation Study
To show the importance of the design choices behind our attention patterns, we try different variants and report their controlled experimental results. To make the ablation study more manageable, we train each configuration for 500K steps on the single-document CNN/DailyMail dataset and then report the Rouge scores on the test set.
The top of Table 4 demonstrates the impact of different ways of configuring the window sizes per layer. We observe that increasing the window size from the bottom to the top layer (from 32 to 512) leads to the best performance, while the reverse configuration (from 512 down to 32) leads to worse performance. Using a fixed window size (the average of the window sizes of the other configurations) leads to performance in between.
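For instance, a window schedule that grows from 32 at the bottom layer to 512 at the top layer (12 layers, as in the base architecture) could be generated as follows. The exact per-layer values are not specified in the text beyond the endpoints, so the geometric spacing here is an assumption made for illustration.

    import numpy as np

    def window_schedule(num_layers=12, low=32, high=512):
        # geometrically spaced window sizes from the bottom to the top layer
        sizes = np.geomspace(low, high, num=num_layers)
        return [int(round(s)) for s in sizes]

    print(window_schedule())          # increasing schedule (best in the ablation)
    print(window_schedule()[::-1])    # reversed schedule (worse-performing variant)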
The middle of Table 4 presents the impact of incorporating the sentence nodes in the attention pattern. In the implementation, removing the sentence nodes means deleting the [CLS] tokens from the document input and using the average of the token representations within each sentence as the sentence representation. We observe that performance decreases when the sentence nodes are not used to fully connect with the other tokens.
The bottom of Table 4 shows the influence of using the entity nodes. We can see that without the entity nodes, performance decreases, demonstrating that facilitating connections between relevant subjects preserves global context, which benefits the summarization task.
Conclusion
For the task of long-text extractive summarization, this paper has proposed HETFORMER, which uses multi-granularity sparse attention to represent the heterogeneous graph among texts. Experiments show that the proposed model achieves comparable performance on a single-document summarization task, as well as state-of-the-art performance on the multi-document summarization task with longer input documents. In future work, we plan to expand the edges from a binary type (connect or disconnect) to richer semantic types, e.g., is-a, part-of, and others (Zhang et al., 2020).
A Background
A.1 Graph-enhanced Summarization
In recent state-of-the-art summarization models, there is a trend to extract structure from the text and formulate the document as a hierarchical structure or a heterogeneous graph. HiBERT (Zhang et al., 2019), GraphSum (Li et al., 2020) and HT (Liu and Lapata, 2019a) consider the word-level, sentence-level and document-level of the input text to formulate a hierarchical structure. MGSum (Hanqi Jin, 2020), ASGARD (Huang et al., 2020), HSG and HAHsum (Jia et al., 2020) construct the source article as a heterogeneous graph in which words, sentences, and entities are used as semantic nodes, and they iteratively update the sentence node representations, which are then used for sentence extraction. The limitation of these models is that they use pre-trained methods as feature-based models to learn node features and then build GNN layers on top of the nodes, which introduces more training parameters than using pre-trained methods alone. Compared with these models, our work achieves the same goal with a lighter framework. Moreover, these models typically limit inputs to n = 512 tokens because of the O(n^2) cost of attention. Due to the long source articles, when applying BERT or RoBERTa to the summarization task, they need to truncate source documents into one or several smaller input blocks (Li et al., 2020; Jia et al., 2020; Huang et al., 2020).
A.2 Structure Transformer
Huang et al. (2021) proposed an efficient encoder-decoder attention with head-wise positional strides, which is ten times faster than existing full-attention models and can be scaled to long documents. Liu et al. (2021) leveraged the syntactic and semantic structures of text to improve the Transformer and achieved a nine times speedup. Our model focuses on a different direction, using graph-structured sparse attention to capture long-term dependencies in long text inputs. The most related approaches to the work presented in this paper are Longformer (Beltagy et al., 2020) and ETC (Ravula et al., 2020), which feature a very similar global-local attention mechanism and take advantage of the pre-trained model RoBERTa. The difference is that Longformer has a single input sequence with some tokens marked as global (the only ones that use full attention), while the global tokens in ETC are pre-trained with a CPC loss. Compared with these two works, we formulate a heterogeneous attention mechanism which considers word-to-word, word-to-sentence, sentence-to-word and entity-to-entity attention.
A.3 Graph Transformer
Given the great similarity between the attention mechanisms used in the Transformer (Vaswani et al., 2017) and the Graph Attention Network (Veličković et al., 2017), several Graph Transformer models have been proposed recently. GTN (Yun et al., 2019), HGT (Hu et al., 2020), (Fan et al., 2021) and HetGT (Yao et al., 2020) formulate different types of attention mechanisms to capture node relationships in a graph.
The major difference between our work and Graph Transformers is that the input of a Graph Transformer is structured, such as a graph or a dependency tree, whereas the input of our HETFORMER is unstructured text. Our work converts the Transformer into a structure-aware model so that it can capture latent relations in unstructured text, such as word-to-word, word-to-sentence, sentence-to-word, sentence-to-sentence and entity-to-entity relations.
B Baseline Details
Extractive Models: BERT (or RoBERTa) (Devlin et al., 2018; Liu et al., 2019) is a Transformer-based model for text understanding trained through masked language modeling. HIBERT (Zhang et al., 2019) is a hierarchical Transformer model that first encodes each sentence using a sentence-level Transformer encoder, and then encodes the whole document using a document-level Transformer encoder. HSG and HDSG formulate the input text as a heterogeneous graph containing semantic nodes of different granularities (word, sentence, and document nodes) and connect the nodes using TF-IDF. HSG uses a CNN and a BiLSTM to initialize the node representations and updates them by iteratively passing messages with a Graph Attention Network (GAT); the final sentence node representations are used to select the summary sentences. HAHsum (Jia et al., 2020) constructs the input text as a heterogeneous graph containing word, named entity, and sentence nodes; it uses a pre-trained ALBERT to learn the initial node representations and then adapts GAT to iteratively learn hidden node representations. MGSum (Hanqi Jin, 2020) treats documents, sentences, and words as semantic units of different granularities, connects these units within a multi-granularity hierarchical graph, and proposes a GAT-based model to update the node representations. ETC (Narayan et al., 2020) and Longformer (Beltagy et al., 2020) are two pre-trained models that capture hierarchical structures among input documents through a sparse attention mechanism.
Abstractive Models: Hi-MAP (Fabbri et al., 2019) expands the pointer-generator network into a hierarchical network and integrates an MMR module to calculate sentence-level scores. GraphSum (Li et al., 2020) leverages graph representations of documents by processing the input documents as a hierarchical structure with a pre-trained language model to generate abstractive summaries.
Figure 1: An illustration of sparse attention patterns ((a), (b), (c)) and their combination (d) in HETFORMER.
Table 2: Rouge F1 scores on the test set of Multi-News. '-' means that the original paper did not report the result.
Table 3: Memory cost of different pre-trained models.
Table 4: Top: changing window size across layers. Middle: entity-to-entity attention pattern influence. Bottom: sentence-to-sentence attention pattern influence.
1 https://github.com/huggingface/neuralcoref
2 https://pypi.org/project/rouge/
3 https://github.com/yeliu918/HETFORMER
4 https://github.com/allenai/longformer
5 https://github.com/huggingface/transformers
Acknowledgements
We would like to thank all the reviewers for their helpful comments. This work is supported by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.
Hanqi Jin, Tianming Wang, and Xiaojun Wan. 2020. Multi-granularity interaction network for extractive and abstractive multi-document summarization. In Proceedings of the Conference of the Association for Computational Linguistics, pages 6244-6254.
Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. 2020. Heterogeneous graph transformer. In Proceedings of the Web Conference, pages 2704-2710.
Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Luyang Huang, Lingfei Wu, and Lu Wang. 2020. Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward. In Proceedings of the Conference of the Association for Computational Linguistics, pages 5094-5107.
Ruipeng Jia, Yanan Cao, Hengzhu Tang, Fang Fang, Cong Cao, and Shi Wang. 2020. Neural extractive summarization with hierarchical attentive heterogeneous graph network. In Proceedings of the Conference of Neural Information Processing Systems, pages 3622-3631.
Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In Proceedings of the International Conference on Learning Representations.
Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, and Junping Du. 2020. Leveraging graph to improve abstractive multi-document summarization. In Proceedings of the Conference of the Association for Computational Linguistics, pages 6232-6243.
Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the Conference of the Association for Computational Linguistics, pages 605-612.
Yang Liu and Mirella Lapata. 2019a. Hierarchical transformers for multi-document summarization. In Proceedings of the Conference of the Association for Computational Linguistics, pages 5070-5081.
Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In Proceedings of the Conference of Neural Information Processing Systems, pages 3730-3740.
Ye Liu, Yao Wan, Lifang He, Hao Peng, and Philip S. Yu. 2020. KG-BART: Knowledge graph-augmented BART for generative commonsense reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence.
Ye Liu, Yao Wan, Jian-Guo Zhang, Wenting Zhao, and Philip S. Yu. 2021. Enriching non-autoregressive transformer with syntactic and semantic structures for neural machine translation. In Proceedings of the European Chapter of the Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Shashi Narayan, Joshua Maynez, Jakub Adamek, Daniele Pighin, Blaž Bratanič, and Ryan McDonald. 2020. Stepwise extractive summarization and planning with structured transformers. In Proceedings of the Conference of Neural Information Processing Systems, pages 4143-4159.
Anirudh Ravula, Chris Alberti, Joshua Ainslie, Li Yang, Philip Minh Pham, Qifan Wang, Santiago Ontanon, Sumit Kumar Sanghai, Vaclav Cvicek, and Zach Fisher. 2020. ETC: Encoding long and structured inputs in transformers. In Proceedings of the Conference of Neural Information Processing Systems, pages 268-284.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the Conference of the Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Conference of Neural Information Processing Systems, pages 5998-6008.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. In Proceedings of the International Conference on Learning Representations.
Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous
| [
"https://github.com/huggingface/",
"https://github.com/yeliu918/HETFORMER",
"https://github.com/allenai/longformer",
"https://github.com/huggingface/"
] |
[
"Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking",
"Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking"
] | [
"Mostafa Dehghani dehghani@uva.nl \nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n\n",
"Hosein Azarbonyad h.azarbonyad@uva.nl \nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n\n",
"Jaap Kamps kamps@uva.nl \nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n\n",
"Maarten De Rijke derijke@uva.nl \nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n\n"
] | [
"University of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n",
"University of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n",
"University of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n",
"University of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\nUniversity of Amsterdam\n"
] | [] | Deep neural networks have become a primary tool for solving problems in many elds. ey are also used for addressing information retrieval problems and show strong performance in several tasks. Training these models requires large, representative datasets and for most IR tasks, such data contains sensitive information from users. Privacy and con dentiality concerns prevent many data owners from sharing the data, thus today the research community can only bene t from research on large-scale datasets in a limited manner.In this paper, we discuss privacy preserving mimic learning, i.e., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal. We present the results of preliminary experiments in which we apply the idea of mimic learning and privacy preserving mimic learning for the task of document re-ranking as one of the core IR tasks. is research is a step toward laying the ground for enabling researchers from data-rich environments to share knowledge learned from actual users' data, which should facilitate research collaborations. | null | [
"https://arxiv.org/pdf/1707.07605v1.pdf"
] | 3,704,070 | 1707.07605 | 98472179ed79d8d67362e61b4582d686755c9d3b |
Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking
Mostafa Dehghani dehghani@uva.nl
University of Amsterdam
University of Amsterdam
University of Amsterdam
University of Amsterdam
Hosein Azarbonyad h.azarbonyad@uva.nl
University of Amsterdam
University of Amsterdam
University of Amsterdam
University of Amsterdam
Jaap Kamps kamps@uva.nl
University of Amsterdam
University of Amsterdam
University of Amsterdam
University of Amsterdam
Maarten De Rijke derijke@uva.nl
University of Amsterdam
University of Amsterdam
University of Amsterdam
University of Amsterdam
Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking
10.1145/nnnnnnn.nnnnnnn
Deep learning; Mimic learning; Responsible information retrieval; Privacy; Model sharing; Data sharing
Deep neural networks have become a primary tool for solving problems in many fields. They are also used for addressing information retrieval problems and show strong performance in several tasks. Training these models requires large, representative datasets and for most IR tasks, such data contains sensitive information from users. Privacy and confidentiality concerns prevent many data owners from sharing the data, thus today the research community can only benefit from research on large-scale datasets in a limited manner. In this paper, we discuss privacy preserving mimic learning, i.e., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal. We present the results of preliminary experiments in which we apply the idea of mimic learning and privacy preserving mimic learning for the task of document re-ranking as one of the core IR tasks. This research is a step toward laying the ground for enabling researchers from data-rich environments to share knowledge learned from actual users' data, which should facilitate research collaborations.
INTRODUCTION
Deep neural networks demonstrate undeniable success in several fields and employing them is taking off for information retrieval problems [10, 11]. It has been shown that supervised neural network models perform better as the training dataset grows bigger and becomes more diverse [17]. Information retrieval is an experimental and empirical discipline; thus, having access to large-scale real datasets is essential for designing effective IR systems. However, in many information retrieval tasks, due to the sensitivity of the data from users and privacy issues, not all researchers have access to large-scale datasets for training their models. Much research has been done on the general problem of preserving the privacy of sensitive data in IR applications, where the question is: how should we design effective IR systems without damaging users' privacy? One of the solutions so far is to anonymize the data and try to hide the identity of users [4, 20]. As an example, Zhang et al. [20] use a differential privacy approach for query log anonymization. However, there is no guarantee that the anonymized data will be as effective as the original data.
Using machine learning-based approaches, sharing the trained model instead of the original data has turned out to be an option for transferring knowledge [1, 12, 15]. The idea of mimic learning is to use a model that is trained based on the signals from the original training data to annotate a large set of unlabeled data and use these labels as training signals for training a new model. It has been shown, for many tasks in computer vision and natural language processing, that we can transfer knowledge this way and the newly trained models perform as well as the model trained on the original training data [2, 3, 8, 14].
However, trained models can expose the private information from the dataset they have been trained on [15]. Hence, the problem of preserving the privacy of the data is changed into the problem of preserving the privacy of the model. Modeling privacy in machine learning is a challenging problem and there has been much research in this area. Preserving the privacy of deep learning models is even more challenging, as there are more parameters to be safeguarded [13]. Some work has studied the vulnerability of deep neural networks as a service, where the interaction with the model is only via an input-output black box [7, 16, 19]. Others have proposed approaches to protect privacy against an adversary with full knowledge of the training mechanism and access to the model's parameters. For instance, Abadi et al. [1] propose a privacy preserving stochastic gradient descent algorithm offering a trade-off between utility and privacy. More recently, Papernot et al. [12] propose a semi-supervised method for transferring the knowledge for deep learning from private training data. They propose a setup for learning privacy-preserving student models by transferring knowledge from an ensemble of teachers trained on disjoint subsets of the data, for which privacy guarantees are provided.
We investigate the possibility of mimic learning for document ranking and study techniques aimed at preserving privacy in mimic learning for this task. Generally, we address two research questions: RQ1 Can we use mimic learning to train a neural ranker? RQ2 Are privacy preserving mimic learning methods effective for training a neural ranker?
Below, we first assess the general possibility of exploiting mimic learning for the document ranking task regardless of the privacy concerns.
Then we examine the model by Papernot et al. [12] as a privacy preserving technique for mimic learning.
TRAINING A NEURAL RANKER WITH MIMIC LEARNING
In this section, we address our first research question: "Can we use mimic learning to train a neural ranker?" The motivation for mimic learning comes from a well-known property of neural networks, namely that they are universal approximators, i.e., given enough training data, and a deep enough neural net with large enough hidden layers, they can approximate any function to an arbitrary precision [3]. The general idea is to train a very deep and wide network on the original training data which leads to a big model that is able to express the structure from the data very well; such a model is called a teacher model. Then the teacher model is used to annotate a large unlabeled dataset.
This annotated set is then used to train a neural network which is called a student network. For many applications, it has been shown that the student model makes predictions similar to the teacher model with nearly the same or even better performance [8, 14].
This idea is mostly employed for compressing complex neural models or ensembles of neural models to a small deployable neural model [2, 3].
We have performed a set of preliminary experiments to examine the idea of mimic learning for the task of document ranking. The question is: can we use a neural ranker trained on a set of training data to annotate unlabeled data and train a new model (another ranker) on the newly generated training data that works nearly as well as the original model?
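The overall recipe can be summarized in a few lines of pseudocode-like Python. Here teacher and student stand for any trainable ranker exposing fit/predict-style methods; these interfaces are assumptions made purely for illustration.

    def mimic_learning(teacher, student, labeled_data, unlabeled_pairs):
        # 1. train the teacher on the original (possibly sensitive) labeled data
        teacher.fit(labeled_data)
        # 2. use the teacher to annotate a large set of unlabeled query-document pairs
        soft_labels = [teacher.predict(q, d) for (q, d) in unlabeled_pairs]
        # 3. train the student only on the teacher-annotated data
        student.fit(list(zip(unlabeled_pairs, soft_labels)))
        return student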
In our experiments, as the neural ranker, we have employed the Rank model proposed by Dehghani et al. [5]. The general scheme of this model is illustrated in Figure 1. In this model, the goal is to learn a scoring function S(q, d; θ) for a given pair of query q and document d with the set of model parameters θ. This model uses a pair-wise ranking scenario during training in which there are two point-wise networks that share parameters and their parameters get updated to minimize a pair-wise loss. Each training instance has five elements τ = (q, d_1, d_2, s_{q,d_1}, s_{q,d_2}), where s_{q,d_i} indicates the relevance score of d_i with respect to q from the ground truth. During inference, the trained model is treated as a point-wise scoring function to score query-document pairs.
In this model, the input query and documents are passed through a representation learning layer, which is a function i that learns the representation of the input data instances, i.e., (q, d+, d−), and consists of three components: (1) an embedding function ε : V → R^m (where V denotes the vocabulary and m is the number of embedding dimensions), (2) a weighting function ω : V → R, and (3) a compositionality function ⊙ : (R^m, R)^n → R^m. More formally, the function i is defined as:
$i(q, d^{+}, d^{-}) = \left[\; \bigodot_{i=1}^{|q|}\big(\varepsilon(t_i^{q}), \omega(t_i^{q})\big) \;\big\Vert\; \bigodot_{i=1}^{|d^{+}|}\big(\varepsilon(t_i^{d^{+}}), \omega(t_i^{d^{+}})\big) \;\big\Vert\; \bigodot_{i=1}^{|d^{-}|}\big(\varepsilon(t_i^{d^{-}}), \omega(t_i^{d^{-}})\big) \;\right], \quad (1)$
where t_i^q and t_i^d denote the i-th term in query q and document d, respectively. The weighting function ω assigns a weight to each term in the vocabulary. It has been shown that ω simulates the effect of inverse document frequency (IDF), which is an important feature in information retrieval [5]. The compositionality function ⊙ projects a set of n embedding-weighting pairs to an m-dimensional representation, independent of the value of n, by taking the element-wise weighted sum over the terms' embedding vectors. We initialize the embedding function ε with word2vec embeddings [9] pre-trained on Google News, and the weighting function ω with IDF.
The representation learning layer is followed by a simple feed-forward neural network that is composed of l−1 hidden layers with ReLU as the activation function, and an output layer z_l. The output layer z_l is a fully-connected layer with a single continuous output and tanh as the activation function. The model is optimized using the hinge loss (max-margin loss function) on batches of training instances, defined as follows:
$\mathcal{L}(b;\theta) = \frac{1}{|b|}\sum_{i=1}^{|b|}\max\Big(0,\; 1 - \mathrm{sign}\big(s_{\{q,d_1\}_i} - s_{\{q,d_2\}_i}\big)\,\big(S(\{q,d_1\}_i;\theta) - S(\{q,d_2\}_i;\theta)\big)\Big), \quad (2)$
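A compact PyTorch version of this pairwise hinge loss might look like the sketch below (the original implementation is in TensorFlow, and the variable names here are ours):

    import torch

    def pairwise_hinge_loss(score_d1, score_d2, s_d1, s_d2):
        # score_d1 / score_d2: model scores S({q,d1};theta), S({q,d2};theta) for a batch
        # s_d1 / s_d2: ground-truth (or weak) relevance scores, used only via their sign
        sign = torch.sign(s_d1 - s_d2)
        return torch.clamp(1.0 - sign * (score_d1 - score_d2), min=0.0).mean()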
This model is implemented using TensorFlow [6, 18]. The configuration of the teacher and student networks is presented in Table 1. As our test collection, we use Robust04 with a set of 250 queries (TREC topics 301-450 and 601-700) with judgments, which has been used in the TREC Robust Track 2004. We follow the knowledge distillation approach [8] for training the student network. We have two sets of experiments: in the first one, we train the teacher model with full supervision, i.e., on the set of queries with judgments, using 5-fold cross validation. In the second set of experiments, the set of queries with judgments is only used for evaluation and we train the teacher model using the weak supervision setup proposed in [5]. We use 3 million queries from the AOL query log as the unlabeled training query set for the teacher model. In all experiments, we use a separate set of 3 million queries from the AOL query log as unlabeled data that is annotated by the trained teacher model (either using full or weak supervision) for training the student model. Results obtained from these experiments are summarized in Table 2. The results generally suggest that we can train a neural ranker using mimic learning. Using weak supervision to train the teacher model, the student model performs as well as the teacher model. In the case of training the teacher with full supervision, as the original training data is small, the performance of the teacher model is rather low, which is mostly due to the fact that the big teacher model overfits on the training data and is not able to generalize well. However, due to the regularization effect of mimic learning, the student model, which is trained on the predictions of the teacher model, significantly outperforms the teacher model [8, 14].
TRAINING A NEURAL RANKER WITH PRIVACY PRESERVING MIMIC LEARNING
In the previous section, we examined using the idea of mimic learning to train a neural ranker regardless of the privacy risks. In this section, we address our second research question: "Are privacy preserving mimic learning methods effective for training a neural ranker?" It has been shown that there is a risk of privacy problems, both where the adversary is just able to query the model, and where the model parameters are exposed to the adversary's inspection. For instance, Fredrikson et al. [7] show that only by observing the predictions of a machine learning model they can approximately reconstruct part of the training data (model-inversion attack). Shokri et al. [16] also demonstrate that it is possible to infer whether a specific training point is included in the model's training data by observing only the predictions of the model (membership inference attack).
We apply the idea of knowledge transfer for deep neural networks from private training data, proposed by Papernot et al. [12]. The authors propose a private aggregation of teacher ensembles based on the teacher-student paradigm to preserve the privacy of training data. First, the sensitive training data is divided into n partitions. Then, on each partition, an independent neural network model is trained as a teacher. Once the teachers are trained, an aggregation step is done using majority voting to generate a single global prediction. Laplacian noise is injected into the output of the prediction of each teacher before aggregation. The introduction of this noise is what protects privacy, because it obfuscates the vulnerable cases where teachers disagree. The aggregated teacher can be considered a differentially private API to which we can submit the input and which then returns the privacy preserving label. There are some circumstances where, due to efficiency reasons, the model needs to be deployed on the user's device [1]. To be able to generate a shareable model where the privacy of the training data is preserved, Papernot et al. [12] train an additional model called the student model. The student model has access to unlabeled public data during training. The unlabeled public data is annotated using the aggregated teacher to transfer knowledge from the teachers to the student model in a privacy preserving fashion. This way, if the adversary tries to recover the training data by inspecting the parameters of the student model, in the worst case, the public training instances with privacy preserving labels from the aggregated teacher are going to be revealed. The privacy guarantee of this approach is formally proved using the differential privacy framework.
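The noisy aggregation step at the heart of this approach can be sketched as follows for binary relevance votes, with Laplace noise added to the vote counts. The parameter gamma stands for the privacy/noise parameter (larger gamma means less noise), and all names are illustrative rather than taken from the released implementation.

    import numpy as np

    def noisy_aggregate(teacher_votes, gamma=0.05, rng=None):
        # teacher_votes: iterable of 0/1 relevance predictions, one per teacher
        rng = rng or np.random.default_rng()
        votes = np.asarray(teacher_votes)
        counts = np.array([np.sum(votes == 0), np.sum(votes == 1)], dtype=float)
        if gamma > 0:
            counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=2)  # Laplacian noise
        return int(np.argmax(counts))  # privacy-preserving label handed to the student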
We apply the same idea to our task. We use a weak supervision setup, as partitioning the fully supervised training data in our problem leads to very small training sets which are not big enough for training good teachers. In our experiments, we split the training data into three partitions, each containing one million queries annotated by the BM25 method. We train three identical teacher models. Then, we use the aggregated noisy predictions from these teachers to train the student network using the knowledge distillation approach. Configurations of the teacher and student networks are similar to the previous experiments, as presented in Table 1.
We evaluate the performance in two situations: in the first one, the privacy parameter, which determines the amount of noise, is set to zero, and in the second one, the noise parameter is set to 0.05, which guarantees a low privacy risk [12]. We report the average performance of the teachers before noise, the performance of the noisy and non-noisy aggregated teachers, and the performance of the student networks in the two situations. The results of these experiments are reported in Table 3. Results in the table suggest that, using the noisy aggregation of multiple teachers as the supervision signal, we can train a neural ranker with acceptable performance. Compared to the single teacher setup in the previous section, the performance of the student network is not as good as the average performance of the teachers, although the student network performs better than the teacher in the noisy aggregation setup; this is more or less the case for the student with the non-noisy aggregated teacher as well. We believe the drops in the performance of the student networks compared to the results in the previous section are not just due to partitioning, noise, and aggregation. They are also the effect of the change in the amount of training data for the teachers in our experiments. So, in the case of having enough training data in each partition for each teacher, their predictions will be more certain and we will have less disagreement in the aggregation phase; consequently, we will get better signals for training the student model.
CONCLUSION
With the recent success of deep learning in many elds, IR is also moving from traditional statistical approaches to neural network based approaches. Supervised neural networks are data hungry and training an e ective model requires a huge amount of labeled samples. However, for many IR tasks, there are not big enough datasets. For many tasks such as the ad-hoc retrieval task, companies and commercial search engines have access to large amounts of data. However, sharing these datasets with the research community raises concerns such as violating the privacy of users. In this paper, we acknowledge this problem and propose an approach to overcome it. Our suggestion is based on the recent success on mimic learning in computer vision and NLP tasks. Our rst research question was: Can we use mimic learning to train a neural ranker?
To answer this question, we used the idea of mimic learning. Instead of sharing the original training data, we propose to train a model on the data and share the model. e trained model can then be used in a knowledge transfer fashion to label a huge amount of unlabeled data and create big datasets. We showed that a student ranker model trained on a dataset labeled based on predictions of a teacher model, can perform almost as well as the teacher model.
is shows the potential of mimic learning for the ranking task which can overcome the problem of lack of large datasets for ad-hoc IR task and open-up the future research in this direction.
As shown in the literature, even sharing the trained model on sensitive training data instead of the original data cannot guarantee the privacy. Our second research question was: Are privacy preserving mimic learning methods e ective for training a neural ranker?
To guarantee the privacy of users, we proposed to use the idea of privacy preserving mimic learning. We showed that using this approach, not only the privacy of users is guaranteed, but also we can achieve an acceptable performance. In this paper, we aim to lay the groundwork for the idea of sharing a privacy preserving model instead of sensitive data in IR applications. is suggests researchers from industry share the knowledge learned from actual users' data with the academic community that leads to a be er collaboration of all researchers in the eld.
As a future direction of this research, we aim to establish formal statements regarding the level of privacy that this would entail using privacy preserving mimic learning and strengthen this angel in the experimental evaluation. Besides, we can investigate that which kind of neural network structure is more suitable for mimic learning for the ranking task.
Figure 1: Rank Model: the neural ranking model proposed by Dehghani et al. [5].
Figure 2: Privacy preserving annotator/model sharing, proposed by Papernot et al. [12].
Table 1: Teacher and student neural network configurations.
Parameter | Teacher | Student
Number of hidden layers | 3 | 3
Size of hidden layers | 512 | 128
Initial learning rate | 1E-3 | 1E-3
Dropout | 0.2 | 0.1
Embedding size | 500 | 300
Batch size | 512 | 512
Table 2: Performance of teacher and student models with different training strategies.
Training strategy | Model | MAP | P@20 | nDCG@20
Full supervision | Teacher | 0.1814 | 0.2888 | 0.3419
Full supervision | Student | 0.2256 | 0.3111 | 0.3891
Weak supervision | Teacher | 0.2716 | 0.3664 | 0.4109
Weak supervision | Student | 0.2701 | 0.3562 | 0.4145
Table 3: Performance of teachers (average) and student models with noisy and non-noisy aggregation.
Model | MAP | P@20 | nDCG@20
Teachers (avg) | 0.2566 | 0.3300 | 0.3836
Non-noisy aggregated teacher | 0.2380 | 0.3055 | 0.3702
Student (non-noisy aggregation) | 0.2337 | 0.3192 | 0.3717
Noisy aggregated teacher | 0.2110 | 0.2868 | 0.3407
Student (noisy aggregation) | 0.2255 | 0.2984 | 0.3559
[1] Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 308-318.
[2] Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems. 2654-2662.
[3] Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 535-541.
[4] Claudio Carpineto and Giovanni Romano. 2013. Semantic Search Log K-anonymization with Generalized K-cores of Query Concept Graph. In ECIR'13. 110-121.
[5] Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017. Neural Ranking Models with Weak Supervision. In The 40th International ACM SIGIR Conference on Research and Development in Information Retrieval.
[6] Martín Abadi et al. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. http://tensorflow.org/ Software available from tensorflow.org.
[7] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 1322-1333.
[8] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
[9] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In NIPS '13. 3111-3119.
[10] Bhaskar Mitra and Nick Craswell. 2017. Neural Text Embeddings for Information Retrieval. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. ACM, 813-814.
[11] Kezban Dilek Onal, Ismail Sengor Altingovde, Pinar Karagoz, and Maarten de Rijke. 2016. Getting Started with Neural Models for Semantic Matching in Web Search. arXiv preprint arXiv:1611.03305.
[12] Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. 2017. Semi-supervised knowledge transfer for deep learning from private training data. In International Conference on Learning Representations (ICLR'17).
[13] NhatHai Phan, Yue Wang, Xintao Wu, and Dejing Dou. 2016. Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction. In AAAI. 1309-1316.
[14] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550.
[15] Reza Shokri and Vitaly Shmatikov. 2015. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 1310-1321.
[16] Reza Shokri, Marco Stronati, and Vitaly Shmatikov. 2016. Membership inference attacks against machine learning models. arXiv preprint arXiv:1610.05820.
[17] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2017. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. arXiv preprint arXiv:1707.02968.
[18] Yuan Tang. 2016. TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning. arXiv preprint arXiv:1612.04251.
[19] Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing machine learning models via prediction APIs. In USENIX Security.
[20] Sicong Zhang, Hui Yang, and Lisa Singh. 2016. Anonymizing Query Logs by Differential Privacy. In SIGIR '16. 753-756.
| [] |
[
"CASCADED FAST AND SLOW MODELS FOR EFFICIENT SEMANTIC CODE SEARCH",
"CASCADED FAST AND SLOW MODELS FOR EFFICIENT SEMANTIC CODE SEARCH"
] | [
"Akhilesh Deepak Gotmare akhilesh.gotmare@salesforce.com \nSalesforce Research Asia\n\n",
"Junnan Li junnan.li@salesforce.com \nSalesforce Research Asia\n\n",
"Shafiq Joty sjoty@salesforce.com \nSalesforce Research Asia\n\n",
"Steven C H Hoi shoi@salesforce.com \nSalesforce Research Asia\n\n"
] | [
"Salesforce Research Asia\n",
"Salesforce Research Asia\n",
"Salesforce Research Asia\n",
"Salesforce Research Asia\n"
] | [] | The goal of natural language semantic code search is to retrieve a semantically relevant code snippet from a fixed set of candidates using a natural language query. Existing approaches are neither effective nor efficient enough towards a practical semantic code search system. In this paper, we propose an efficient and accurate semantic code search framework with cascaded fast and slow models, in which a fast transformer encoder model is learned to optimize a scalable index for fast retrieval followed by learning a slow classification-based re-ranking model to improve the performance of the top K results from the fast retrieval. To further reduce the high memory cost of deploying two separate models in practice, we propose to jointly train the fast and slow model based on a single transformer encoder with shared parameters. The proposed cascaded approach is not only efficient and scalable, but also achieves state-of-the-art results with an average mean reciprocal ranking (MRR) score of 0.7795 (across 6 programming languages) as opposed to the previous state-of-the-art result of 0.713 MRR on the CodeSearchNet benchmark. Recent work on code generation like Chen et al. (2021)'s 12B parameter CodeX and Austin et al. (2021)'s 137B parameter LM use large scale autoregressive language models to demonstrate impressive capabilities of generating multiple lines of code from natural language descriptions, well beyond what previous generation models like GPT-C (Svyatkovskiy et al., 2020) could accomplish.However, this impressive performance is often predicated on being able to draw many samples from the model and machine-check them for correctness. This setup will often not be the case in practice. Code generation models also entail security implications (possibility of producing vulnerable or misaligned code) making their adoption tricky.Given this current landscape, code retrieval systems can serve as attractive alternatives when building tools to assist developers. With efficient implementations, code search for a single query can typically be much faster for most practical index sizes than generating code with large scale LMs. As opposed to code generation, code retrieval offers the possibility of a much greater control over the quality of the result -the index entries can be verified beforehand. Leveraging additional data post training is easier when working with code search systems as this would simply require extending the index by encoding the new instances. Code search systems can be particularly of value for organizations with internal proprietary code. Indexing source code data internally for search can prevent redundancy and boost programmer productivity. A recent study by surveys developers to understand the effectiveness of code generation and code retrieval systems. Their results indicate that the two systems serve complementary roles and developers prefer retrieval modules over generation when working with complex functionalities, thus advocating the need for better code search systems.1 | null | [
"https://arxiv.org/pdf/2110.07811v1.pdf"
] | 239,009,824 | 2110.07811 | 21e8e76386aaaa00e0971af70ce84a8a544e1aa1 |
CASCADED FAST AND SLOW MODELS FOR EFFICIENT SEMANTIC CODE SEARCH
Akhilesh Deepak Gotmare akhilesh.gotmare@salesforce.com
Salesforce Research Asia
Junnan Li junnan.li@salesforce.com
Salesforce Research Asia
Shafiq Joty sjoty@salesforce.com
Salesforce Research Asia
Steven C H Hoi shoi@salesforce.com
Salesforce Research Asia
CASCADED FAST AND SLOW MODELS FOR EFFICIENT SEMANTIC CODE SEARCH
The goal of natural language semantic code search is to retrieve a semantically relevant code snippet from a fixed set of candidates using a natural language query. Existing approaches are neither effective nor efficient enough for a practical semantic code search system. In this paper, we propose an efficient and accurate semantic code search framework with cascaded fast and slow models, in which a fast transformer encoder model is learned to optimize a scalable index for fast retrieval, followed by learning a slow classification-based re-ranking model to improve the performance of the top K results from the fast retrieval. To further reduce the high memory cost of deploying two separate models in practice, we propose to jointly train the fast and slow model based on a single transformer encoder with shared parameters. The proposed cascaded approach is not only efficient and scalable, but also achieves state-of-the-art results with an average mean reciprocal ranking (MRR) score of 0.7795 (across 6 programming languages) as opposed to the previous state-of-the-art result of 0.713 MRR on the CodeSearchNet benchmark. Recent work on code generation, like Chen et al. (2021)'s 12B parameter CodeX and Austin et al. (2021)'s 137B parameter LM, uses large scale autoregressive language models to demonstrate impressive capabilities of generating multiple lines of code from natural language descriptions, well beyond what previous generation models like GPT-C (Svyatkovskiy et al., 2020) could accomplish. However, this impressive performance is often predicated on being able to draw many samples from the model and machine-check them for correctness. This setup will often not be the case in practice. Code generation models also entail security implications (the possibility of producing vulnerable or misaligned code), making their adoption tricky. Given this current landscape, code retrieval systems can serve as attractive alternatives when building tools to assist developers. With efficient implementations, code search for a single query can typically be much faster for most practical index sizes than generating code with large scale LMs. As opposed to code generation, code retrieval offers the possibility of much greater control over the quality of the result: the index entries can be verified beforehand. Leveraging additional data post training is easier when working with code search systems, as this would simply require extending the index by encoding the new instances. Code search systems can be of particular value for organizations with internal proprietary code. Indexing source code data internally for search can prevent redundancy and boost programmer productivity. A recent study surveys developers to understand the effectiveness of code generation and code retrieval systems. Their results indicate that the two systems serve complementary roles and developers prefer retrieval modules over generation when working with complex functionalities, thus advocating the need for better code search systems.
INTRODUCTION
Building tools that enhance software developer productivity has recently garnered a lot of attention in the deep learning research community. Parallel to the progress in natural language processing, pre-trained language models (LM) like CodeBERT (Feng et al., 2020b), CodeGPT (Lu et al., 2021), CodeX (Chen et al., 2021), PLBART (Ahmad et al., 2021) and CodeT5 (Wang et al., 2021b) have now been proposed for understanding and generation tasks involving programming languages.

Figure 1: Illustration of the fast encoder (left) and slow classifier (right) based semantic code search approaches (at inference stage). With the encoder based approach, we independently compute representations of the NL query and candidate code sequences. The code snippet with representation nearest to the query vector is then returned as the search result. With the classifier based approach, we jointly process the query with each code sequence to predict the probability of the code matching the query description. The code sequence corresponding to the highest classifier confidence score is then returned as the search result. With CasCode, we are able to achieve performance comparable to the optimal classifier based approach (top right), while requiring substantially less inference time.
Neural approaches to code search (Sachdev et al., 2018; Guo et al., 2021; Ye et al., 2016; Gu et al., 2018) involve encoding query and code independently into dense representations in the same semantic space. Retrieval is then performed using representational similarity (based on cosine or euclidean distances) of these dense vectors. An orthogonal approach involves encoding the query and the code jointly and training semantic code search systems as binary classifiers that predict whether a code answers a given query (Lu et al., 2021). With this approach, the model processes the query paired with each candidate code sequence. Intuitively, this approach helps in sharpening the cross information between query and code and is a better alternative for capturing matching relationships between the two modalities (natural language and programming language) than a simple similarity metric between the encoder-based sequence representations.
While this latter approach can be promising for code retrieval, previous methods have mostly leveraged it for binary classification tasks involving NL-PL sequence pairs. Directly adapting this approach to code search tasks would be impractical due to the large number of candidates to be considered for each query. We depict the complementary nature of these approaches in Figure 1 when using a transformer (Vaswani et al., 2017) encoder based model for retrieval and classification.
In order to leverage the potential of such nuanced classifier models for the task of retrieval, we propose a cascaded scheme (CasCode) where we process a limited number of candidates with the classifier model. This limiting is performed by employing the encoder-based approach and picking its top few candidate choices from the retrieval set for processing by the second classifier stage. Our cascaded approach leads to state-of-the-art performance on the CodeSearchNet benchmark with an overall mean reciprocal ranking (MRR) score of 0.7795, substantially surpassing previous results. We propose a variant of the cascaded scheme with shared parameters, where a single transformer model can serve in both modes: encoding and classification. This shared variant substantially reduces the memory requirements, while offering comparable retrieval performance with an MRR score of 0.7700. Figure 2 illustrates the trade-off involved between inference speed and MRR for different algorithmic choices, where we have the (fast) encoder model on one extreme, and the (slow) classifier model on the other. With CasCode, we offer performance comparable to the optimal scores attained by the classifier model, while requiring substantially less inference time, thus making it computationally feasible. Our codebase will be made publicly available for research purposes.
BACKGROUND
Early work on neural approaches to code search includes Sachdev et al. (2018), who used unsupervised word embeddings to construct representations for documents (code snippets), followed by Cambronero et al. (2019)'s supervised approach leveraging the pairing of code and queries. Feng et al. (2020b) proposed pre-training BERT-style (Devlin et al., 2019) masked language models with unlabeled (and unpaired) source code and docstrings, and fine-tuning them for the text-to-code retrieval task. With this approach, the query representation can be compared during inference against a preconstructed index of code representations and the nearest instance is returned as the search result. Miech et al. (2021) and Li et al. (2021) have previously proposed similar approaches for text-to-visual retrieval. Guo et al. (2021) leverage pairs of natural language and source code sequences to train text-to-code retrieval models. They adopt the contrastive learning framework (Chen et al., 2020) to train the retrieval model, where representations of natural language (NL) and programming language (PL) sequences that match in semantics (a positive pair from the bimodal dataset) are pulled together, while representations of negative pairs (randomly paired NL and PL sequences) are pushed apart. The infoNCE loss (a form of contrastive loss function (Gutmann & Hyvärinen, 2010)) used for this approach can be defined as follows:
$$\mathcal{L}_{\mathrm{infoNCE}} = \frac{1}{N} \sum_{i=1}^{N} -\log \frac{\exp\left(f_\theta(x_i)^{T} f_\theta(y_i)/\sigma\right)}{\sum_{j \in B} \exp\left(f_\theta(x_i)^{T} f_\theta(y_j)/\sigma\right)} \qquad (1)$$
where f θ (x i ) is the dense representation for the NL input x i , and y i is the corresponding semantically equivalent PL sequence. N is the number of training examples in the bimodal dataset, σ is a temperature hyper-parameter, and B denotes the current training minibatch.
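To make Equation 1 concrete, the following is a minimal PyTorch sketch of the in-batch infoNCE loss; the random embeddings stand in for transformer encoder outputs, and the temperature value is an illustrative assumption rather than the authors' setting:

import torch
import torch.nn.functional as F

def info_nce_loss(nl_emb, pl_emb, temperature=0.07):
    # nl_emb, pl_emb: (B, d) embeddings of paired NL queries and code snippets.
    # Row i of each tensor forms a positive pair; all other rows act as in-batch negatives.
    logits = nl_emb @ pl_emb.t() / temperature            # entry (i, j) = f(x_i)^T f(y_j) / sigma
    targets = torch.arange(nl_emb.size(0), device=nl_emb.device)
    # Cross-entropy over each row is exactly -log softmax of the positive pair in Equation 1.
    return F.cross_entropy(logits, targets)

# Toy usage with random vectors standing in for encoder outputs
nl = F.normalize(torch.randn(8, 768), dim=-1)
pl = F.normalize(torch.randn(8, 768), dim=-1)
print(info_nce_loss(nl, pl).item())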
While the above approach applies for any model architecture, Guo et al. (2021) employ GraphCodeBERT (a structure-aware transformer encoder pre-trained on code) and CodeBERT for f θ in their experiments. We refer to this approach as the one using fast encoders for retrieval. During inference, we are given a set of candidate code snippets C = {y 1 , y 2 , . . . y |C| }, which are encoded offline into an index {f θ (y j ) ∀j ∈ C}. For a test NL query x i , we then compute f θ (x i ) and return the code snippet from C corresponding to the nearest neighbor (as per some distance metric, e.g. cosine similarity) in the index. The rank r i assigned to the correct code snippet (for the query x i ) from C is then used to compute the mean reciprocal ranking (MRR) metric $\frac{1}{N_{test}} \sum_{i=1}^{N_{test}} \frac{1}{r_i}$. During inference, we are only required to perform the forward pass associated with f θ (x i ) and the nearest neighbor lookup in the PL index, as the PL index itself can be constructed offline. This makes the approach very suitable for practical scenarios where the number of candidate code snippets |C| could be very large.
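As an illustration of this inference procedure, the sketch below performs a cosine-similarity lookup against a precomputed index and computes MRR with NumPy; the random vectors are merely stand-ins for encoder outputs:

import numpy as np

def mrr_from_index(query_vecs, code_index, gold_ids):
    # Rank every candidate in the precomputed code index for each query (cosine similarity)
    # and return the mean reciprocal rank of the gold snippet.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    c = code_index / np.linalg.norm(code_index, axis=1, keepdims=True)
    sims = q @ c.T                                        # (num_queries, |C|)
    reciprocal_ranks = []
    for i, gold in enumerate(gold_ids):
        # rank = 1 + number of candidates scored strictly higher than the gold one
        rank = 1 + np.sum(sims[i] > sims[i, gold])
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# Toy usage: 5 queries against an index of 100 candidate snippets
queries = np.random.randn(5, 768)
index = np.random.randn(100, 768)
print(mrr_from_index(queries, index, gold_ids=np.arange(5)))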
In a related line of work, Lu et al. (2021) propose a benchmark (NL-code-search-WebQuery) where natural language code search is framed as the problem of analysing a query-code pair to predict whether the code answers the query or not. Another recent effort releases a new dataset with manually written queries (as opposed to docstrings extracted automatically), and proposes a similar benchmark based on binary classification of query-code pairs.
CASCODE
Although the approach proposed by Guo et al. (2021) is efficient for practical scenarios, the independent encodings of the query and the code make it less effective. We could instead encode the query and the code candidate jointly within a single transformer encoder and perform binary classification. In particular, the model could take as input the concatenation of NL and PL sequences [x i ; y j ] and predict whether the two match in semantics.
Figure 3: CasCode: Our proposed cascaded scheme for semantic code search. At the top, the transformer encoder independently processes the query x i and the code snippets in the fast retrieval stage. The top K candidates (based on the nearest neighbor lookup) from this stage are passed on to the second stage, where a transformer classifier jointly processes the query sequence with each of the filtered candidates to predict the probability of their semantics matching. The second stage classifiers are thus accelerated for the code retrieval task by the first stage of encoders.
The training batches for this binary classification setup can again be constructed using the bimodal dataset (positive pairs denoting semantic matches), and the negative pairs (mismatch) can be constructed artificially.
Given a set of paired NL-PL semantically equivalent sequences $\{x_i, y_i\}_{i=1}^{N}$, the cross-entropy objective function for this training scheme would be:

$$\mathcal{L}_{CE} = -\frac{1}{N} \sum_{i=1,\, j \neq i}^{N} \left[ \log p_\theta(x_i, y_i) + \log\left(1 - p_\theta(x_i, y_j)\right) \right] \qquad (2)$$
where p θ (x i , y j ) represents the probability that the NL sequence x i semantically matches the PL sequence y j , as predicted by the classifier. With a minibatch of positive pairs {x i , y i } ∀i ∈ B, we can randomly pick y j (j ∈ B; j ≠ i) from the PL sequences in the minibatch and pair it with x i to serve as a negative pair. When using a transformer encoder based classifier, the interactions between the NL and PL tokens in the self-attention layers can help in improving the precision of this approach over the previous (independent encoding) one.
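A small sketch of this training step is shown below; score_pair is an assumed stand-in for the joint NL-PL classifier (not the authors' interface), and a cyclic shift is one simple way of drawing the in-batch negatives of Equation 2:

import torch
import torch.nn.functional as F

def binary_matching_loss(score_pair, nl_batch, pl_batch):
    # score_pair(nl, pl) returns a matching logit for a single NL-PL pair.
    b = len(nl_batch)
    pos = torch.stack([score_pair(nl_batch[i], pl_batch[i]) for i in range(b)])
    # Cyclic shift gives one mismatched (negative) code snippet per query.
    neg = torch.stack([score_pair(nl_batch[i], pl_batch[(i + 1) % b]) for i in range(b)])
    return (F.binary_cross_entropy_with_logits(pos, torch.ones(b)) +
            F.binary_cross_entropy_with_logits(neg, torch.zeros(b)))

# Toy usage: a dummy scorer based on character overlap, only to exercise the function
dummy = lambda nl, pl: torch.tensor(float(len(set(nl) & set(pl))))
print(binary_matching_loss(dummy, ["sort a list", "open a file"], ["sorted(xs)", "open(path)"]))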
During inference, we can pair the NL sequence x i with each of the y j from C and rank the candidates as per the classifier's confidence scores of the pair being a match. This involves |C| forward passes (each on a joint NL-PL sequence, thus longer inputs than the previous approach), making this approach infeasible when dealing with large retrieval sets. We refer to this approach as the one using slow classifiers for retrieval. Figure 1 provides an illustration of these two different approaches.
We propose unifying the strengths of the two approaches -the speed of the fast encoders with the precision of the slow classifiers, with a cascaded scheme, called CasCode. Figure 3 shows the overall framework of our approach. Our hybrid strategy combines the strengths of the two approaches in the following manner -the first stage of fast encoders provides top-K candidates from the set C of candidate code snippets. In practice, the size of the retrieval set (|C|) can often be very large, and varies from 4360 to 52660 for the CodeSearchNet datasets we study in our experiments.
(Table 1 examples: the docstring "Prompt the user to continue or not" paired with its Python continue_prompt(message="") implementation, and the docstring "Sends a message to the framework scheduler.")

The top K candidates are then passed to the second stage of slow classifiers, where each of them is paired with the NL input (query) x i and fed to the model. For a given pair, this second stage classifier will return the probability of the NL and PL components of the input matching in semantics. Using these as confidence scores, the rankings of the K candidates are refined.
The resulting scheme is preferable for K << |C|, as this would add a minor computational overhead on top of what is required by the fast encoder based retrieval. The second stage of refinement can then improve retrieval performance provided that the value of K is set such that the recall of the fast encoder is reasonably high. K would be a critical hyper-parameter in this scheme, as setting a very low K would lead to high likelihood of missing the correct snippet in the set of inputs passed to the second stage slow classifier, while a very high K would make the scheme infeasible for retrieval. As we discuss ahead in Section 4, CasCode with a K as small as 10 already offers significant gains in retrieval performance over the baselines, with marginal gains as we increment K to 100 and beyond.
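The two-stage inference can be sketched as follows; the slow_score callable is an assumed placeholder for the joint classifier, and the toy index is random data rather than real code embeddings:

import numpy as np

def cascode_retrieve(query_vec, code_index, slow_score, k=10):
    # Stage 1: rank all index entries by cosine similarity to the query and keep the top-k.
    q = query_vec / np.linalg.norm(query_vec)
    c = code_index / np.linalg.norm(code_index, axis=1, keepdims=True)
    fast_scores = c @ q                                   # (|C|,)
    top_k = np.argsort(-fast_scores)[:k]                  # candidate ids from the fast stage
    # Stage 2: re-rank only those k candidates with the (slower) pairwise scorer.
    reranked = sorted(top_k, key=lambda i: -slow_score(i))
    return reranked[0], reranked

# Toy usage: the "slow" scorer here is just noisy similarity, purely for illustration
index = np.random.randn(1000, 256)
query = index[42] + 0.1 * np.random.randn(256)
noisy = lambda i: float(index[i] @ query) + 0.01 * np.random.randn()
best, candidates = cascode_retrieve(query, index, noisy, k=10)
print(best, candidates[:3])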
In order to minimize the memory overhead incurred by the two stage model, we propose to share the weights of the transformer layers of the fast encoders and the slow classifiers. This can be achieved by training a model with the joint objective of infoNCE (L infoNCE ) and binary cross-entropy (L CE ). While the number of parameters in this shared variant would be nearly half of the separate (non-shared) case, the computational cost at inference would be the same. Note that we would need some exclusive parameters for the classifier model, specifically the classification head (MLP) on top of the encoder. Thus, in this shared parameter variant of CasCode, the transformer model consuming the three kinds of inputs - NL only and PL only (for the fast encoder stage) and NL-PL (for the slow classifier stage) - is identical except for the MLP layers in the second stage.
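A minimal sketch of one shared-parameter training step follows; the linear layers stand in for the shared transformer and the slow-stage classification head, and the equal weighting of the two losses mirrors the averaged objective used for the shared variant (shapes and inputs are toy assumptions):

import torch
import torch.nn.functional as F

# Stand-ins: a single shared "encoder" serves both stages; only the small
# classification head is exclusive to the slow stage.
encoder = torch.nn.Linear(32, 16)
clf_head = torch.nn.Linear(16, 1)

nl, pl = torch.randn(8, 32), torch.randn(8, 32)     # NL-only and PL-only views (fast stage)
pair = torch.randn(8, 32)                           # concatenated NL-PL view (slow stage)
pair_labels = torch.randint(0, 2, (8,)).float()     # match / mismatch labels

nl_emb, pl_emb = encoder(nl), encoder(pl)
loss_fast = F.cross_entropy(nl_emb @ pl_emb.t() / 0.07, torch.arange(8))   # infoNCE term
loss_slow = F.binary_cross_entropy_with_logits(
    clf_head(encoder(pair)).squeeze(-1), pair_labels)                      # cross-entropy term
loss = 0.5 * (loss_fast + loss_slow)   # average of the two objectives
loss.backward()                        # gradients reach the shared encoder from both stages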
EXPERIMENTS
DATASET, BASELINES & METRICS
We use the CodeSearchNet code corpus from Husain et al. (2019) that includes six programming languages: Ruby, Javascript, Go, Python, Java and Php. Our pre-processing and train-val-test splits are identical to the setting from Guo et al. (2021) 1 , who filter low-quality queries and expand the retrieval set to make the code search task more challenging and realistic. Table 1 shows two examples of bimodal pairs from the resulting dataset, and the statistics of the dataset after pre-processing are provided in Table 2.
Our fast encoder baseline is based on the CodeBERT model from Feng et al. (2020b) that is pretrained on programming languages. In order to have a strong baseline, we use a newer CodeBERT checkpoint that is pre-trained (using masked language modeling and replaced token detection tasks) for longer, after we found that the CodeBERT checkpoint from Feng et al. (2020b) was not trained till convergence. When starting from our new checkpoint, we find that the CodeBERT baseline, if fine-tuned with a larger batch-size (largest possible that we can fit on 8 A100 GPUs) and for a larger number of epochs, is able to perform substantially better than the results reported before. We report the baselines from Guo et al. (2021) in Table 3 along with the results for our replication of two of these baselines. Previous studies have emphasized this effect -larger batch sizes are known to typically work well when training with the infoNCE loss in a contrastive learning framework, due to more negative samples from the batch (Chen et al., 2020).
We also train GraphCodeBERT, which is proposed by Guo et al. (2021) as a structure-aware model pre-trained on programming languages. GraphCodeBERT leverages data flow graphs during pretraining to incorporate structural information into its representations. However, for the code search task, we report (Table 3) that GraphCodeBERT does not offer any significant improvements in performance over CodeBERT when both variants are trained with a large batch size. For simplicity, we finetune the CodeBERT pre-trained model (architecturally equivalent to the RoBERTa-base model of Liu et al. (2019): 12 layers, 768-dimensional hidden states and 12 attention heads) and refer to this as the fast encoder baseline for the remainder of our experiments. The MRR score for this CodeBERT baseline is shown in Table 3. For this baseline and the variants we propose, along with MRR, we also report Recall@K for K = {1, 2, 5, 8, 10}, which indicates the hit rate (ratio of instances where we find the correct output in the top K results). We encourage future work on code search to report these additional metrics, as they are important in evaluating the utility of a retrieval system and are commonly reported in similar work on text-based image or video retrieval (Miech et al., 2021; Bain et al., 2021). Figure 5 shows the Recall@K (K varied over the horizontal axis) for the 6 different programming languages, with the fast encoder models, over the validation set. As alluded to in Section 3, for designing the cascaded scheme, we need to pick a K that is large enough to provide reasonably high recall, and small enough for the second stage to be reasonably fast. We pick K = 10 and 100, where the recall for all 6 datasets is over 85% and 90% respectively.
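For completeness, Recall@K can be computed from a query-by-candidate similarity matrix as in the short helper below (an illustrative utility, not the authors' evaluation code):

import numpy as np

def recall_at_k(sims, gold_ids, ks=(1, 2, 5, 8, 10)):
    # sims: (num_queries, |C|) similarity scores; gold_ids: index of the correct snippet per query.
    order = np.argsort(-sims, axis=1)                     # candidate ids, best first
    hits = {k: 0 for k in ks}
    for i, gold in enumerate(gold_ids):
        for k in ks:
            if gold in order[i, :k]:
                hits[k] += 1
    return {k: hits[k] / len(gold_ids) for k in ks}

# Toy usage: 4 queries, 20 candidates, gold candidate ids 0..3
sims = np.random.randn(4, 20)
print(recall_at_k(sims, np.arange(4)))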
RESULTS WITH CASCODE
We first show results for the slow classifiers, trained using the CodeSearchNet datasets that we mention above. We finetune the CodeBERT pre-trained checkpoint (mentioned above) with a classification head (fully connected layers) for this task. On the validation set, we study the performance of this finetuned classifier for retrieval and report the MRR scores in Figure 4 for different values of K, where K is the number of top candidates passed from the first (fast encoder) stage to the second. Interestingly, the retrieval performance of this joint classifier fails to improve beyond certain values of K. For example, increasing K from 10 to 100 only marginally improves the MRR for Ruby, Javascript and Java, while for other languages there is no significant improvement beyond K = 10. Further training details for CasCode variants and the fast encoder baselines are provided in Appendix A.
Next, we train fast and slow models with shared parameters, denoted by CasCode (shared). The training objective for this model is the average of the binary cross-entropy loss L CE and the infoNCE loss L infoNCE as described in Section 3. The MRR scores for the baselines and our separate and shared variants are listed in Table 3. With our cascaded approach, we observe significant improvements over the fast encoder baselines: the overall MRR (reported on a 0-1 scale; some works, e.g. Wang et al. (2021a), use a 0-100 scale) averaged over the six programming languages for CasCode (separate) is 0.7795, whereas the fast encoder baseline (CodeBERT) reaches 0.7422. The improvements with CasCode are noticeably greater over the baseline for Ruby, Javascript, Python and Java. We report modest improvements on the Go dataset, where the fast encoder baseline is already quite strong (0.9145 MRR).
The shared variant of CasCode attains an overall MRR score of 0.77, which is comparable to the separate variant performance. This slight difference can be attributed to the limited model capacity in the shared case, as the same set of transformer layers serves in the encoder and classifier models. We also evaluate the MRR scores for the CasCode (shared) model in the fast encoder stage, where the test set MRR scores were 0.7308, 0.6634, 0.9048, 0.7193, 0.7244, 0.6803 for Ruby, Javascript, Go, Python, Java and PHP respectively, with the overall MRR being 0.7372. We note in passing that the cascaded model, trained in a multi-task manner, gives competitive retrieval performance even when used only in its first (encoder-only) stage.
We also report the Recall@K metric for CasCode separate and CasCode shared variants in Figure 6. For all six programming languages, we observe improvements over the fast encoder baseline with our cascaded scheme. Similar to our observation from Table 3, the shared variant of CasCode is slightly worse than the separate one.
Retrieval speed comparison: Having established the improvements in retrieval performance with CasCode, we proceed to analyze the trade-off between inference speed and performance for the different methods discussed. For each variant, we record the time duration (averaged over 100 instances) required to process (obtain a relevant code snippet from the retrieval set) a natural language query from the held-out set. We use the Ruby dataset of CodeSearchNet for this analysis, which contains 4360 candidate code snippets for each NL query. We conduct this study on a single Nvidia A100 GPU. Our results are shown in Table 4. For the fast encoder approach (using infoNCE-finetuned CodeBERT), we first incur some computational cost to encode all the candidate code snippets and construct the PL index (6.76 seconds for Ruby's retrieval set). This computation is common to all approaches, except the slow (binary, joint) classifier one. Since this computation can be performed offline before the model is deployed to serve user queries, we do not include this cost in our results in Table 4. With the PL index constructed, we report the time for the query encoding together with the nearest neighbor lookup on the PL index in the first row of Table 4. This computation is again performed by all the CasCode variants, and thus acts as the lower bound on time taken by CasCode for retrieval. For the analysis to be as close to real-world scenarios as possible, we do not batch the queries and encode them one by one. Batching them would require assuming that we have the NL queries beforehand, while we would be receiving them on the fly from users when deployed.
With the slow classifier approach, we would pair a given query with each of the 4360 candidates, and thus this would lead to the slowest inference of all the variants. For all variants of CasCode, the inference duration listed in Table 4 includes the time taken by the fast encoder based retrieval (first stage). For CasCode's second stage, we can pass the K combinations (query concatenated with each of the top-K candidate from the fast stage) in a batched manner. The shared variant, while requiring half the parameters, incurs the same computational cost when used in the cascaded fashion. We note from Table 4 that at a minor drop in the MRR score, lowering CasCode's K from 100 can lead to almost 3x faster inference for the shared case.
CONCLUSION & FUTURE WORK
We propose CasCode, which is a cascaded scheme consisting of transformer encoder and joint binary classifier stages for the task of semantic code search and achieve state of the art performance on the CodeSearchNet benchmark, with significant improvements over previous results. We also propose a shared parameter variant of CasCode, where a single transformer encoder can operate in the two different stages when trained in a multi-task fashion. At almost half the number of parameters, CasCode's shared variant offers comparable performance to the non-shared (separate) variant.
A limitation of our current cascaded scheme is that the computation spent in generating representations in the first stage of fast encoders is not leveraged in the second stage. We process raw token level inputs in the second stage. Ideally the representations designed in the first stage should be useful for the classification stage too (Li et al., 2021). Our initial attempts along this direction did not prove fruitful, and future work could address this aspect. Another limitation warranting further investigation is associated with the training of the shared variant of CasCode. Here, training with the multitask learning framework (joint objective of infoNCE and binary cross entropy) leads to a model that performs slightly worse than the separate variant (individually finetuned models). We tried augmenting the capabilities of this model with solutions like using independent CLS tokens for the three modes the model has to operate in (NL only, PL only, NL-PL concatenation), and adjusting the relative weight of the two losses involved, but could not achieve any improvement over the separate variant.
A APPENDIX
Training details: We begin with the baseline implementation of GraphCodeBERT (publicly available) and adapt their codebase to also implement the CodeBERT model. For the cascaded schemes, many of our training design decisions are therefore the same as GraphCodeBERT.
We use 8 A100 GPUs (each with 40 GB RAM) to train our baselines and CasCode variants. During training, we set the batch-size to a value that occupies as much available GPU RAM as possible. This happens to be 576 for the CodeBERT and GraphCodeBERT baseline finetuning with the infoNCE loss (fast encoders). For training the joint NL-PL classifier of CasCode (separate), we use a batch size of 216. For CasCode (shared), we need to further reduce the batch size to 160. All models are trained for 100 epochs.
For all our experiments we use a learning rate of 2e-5. We use the Adam optimizer to update model parameters and perform early stopping on the development set. For the CasCode variants, when performing evaluation on the development set, we use K = 100 candidates from the fast encoder stage. Using 8 A100 GPUs, rough training durations for CasCode on the Ruby, Javascript, Go, Python, Java and PHP datasets are 6.5, 8, 33, 41, 15 and 21 hours respectively with the separate variant and 13, 17, 38, 42, 55.5, 56 hours with the shared variant. We typically perform evaluation on the validation set once every epoch, but make it infrequent in some cases to speed up training for larger datasets like Python and PHP. Given the significant amount of computation invested in training these retrieval models, we plan to release these checkpoints to avoid wasteful redundant training and encourage future work on semantic code search.
Figure 2: Overview of the speed versus performance trade-off of current code search approaches. Areas of the circles here are proportional to model sizes.
Figure 4: Mean reciprocal ranking (MRR) at different values of K over the validation set of CodeSearchNet (Husain et al., 2019) when using a finetuned CodeBERT (slow) binary classifier (match or not) for text-code retrieval.
Figure 5: Recall at different values of K over the validation set of CodeSearchNet (Husain et al., 2019) when using a finetuned CodeBERT encoder (fast) for text-code retrieval.
Figure 6: Recall @ K = {1, 2, 5, 8, 10} with the fast encoder and CasCode (shared and separate) methods on the test set queries of the CodeSearchNet dataset.
Table 1: Examples of bimodal pairs (natural language/docstring with corresponding code sequence) from CodeSearchNet.
Table 2: Data statistics of the filtered CodeSearchNet corpus for Go, Java, Javascript, PHP, Python and Ruby programming languages. For each query in the dev and test sets, the answer is retrieved from the set of candidate codes (last row).
Table 3: Mean Reciprocal Ranking (MRR) values of different methods on the code search task on 6 programming languages from the CodeSearchNet corpus (test set). The first set consists of four finetuning-based baseline methods (NBow: bag of words, CNN: convolutional neural network, BiRNN: bidirectional recurrent neural network, and multi-head attention), followed by the second set of models that are pre-trained then finetuned for code search (RoBERTa: pre-trained on text by Liu et al. (2019), RoBERTa (code): RoBERTa pre-trained only on code, CodeBERT: pre-trained on code-text pairs by Feng et al. (2020a), GraphCodeBERT: pre-trained using structure-aware tasks by Guo et al. (2021), SYNCOBERT: pre-trained using syntax-aware tasks by Wang et al. (2021a)). In the last four rows, we report the results with the shared and separate variants of our CasCode scheme using the fine-tuned CodeBERT models for K of 10 and 100.
Table 4: Inference speed comparison for the different variants studied. The number of parameters corresponding to the classifier head are separated with a '+' sign in the second column. Inference duration is averaged for 100 queries from the Ruby subset of CodeSearchNet, using a single A100 GPU. Constructing the PL index offline requires 6.76 seconds for the Ruby dataset and is not included in the durations listed here. MRR scores are reported on the entire test set. Throughput of the retrieval model (measured in # queries processed per second) is listed in the last column.
https://github.com/microsoft/CodeBERT/tree/master/GraphCodeBERT
Unified pre-training for program understanding and generation. Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang, 10.18653/v1/2021.naacl-main.211Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsWasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Unified pre-training for program understanding and generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2655-2668, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.211. URL https://aclanthology.org/2021
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, arXiv:2108.07732Program synthesis with large language models. arXiv preprintJacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Frozen in time: A joint video and image encoder for end-to-end retrieval. Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman, IEEE International Conference on Computer Vision. Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In IEEE International Conference on Computer Vision, 2021.
When deep learning met code search. Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, Satish Chandra, Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software EngineeringJose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra. When deep learning met code search. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 964- 974, 2019.
. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harri Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, arXiv:2107.03374arXiv preprintet al. Evaluating large language models trained on codeMark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harri Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
A simple framework for contrastive learning of visual representations. Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton, International conference on machine learning. PMLRTing Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597-1607. PMLR, 2020.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https: //aclanthology.org/N19-1423.
CodeBERT: A pre-trained model for programming and natural languages. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou, 10.18653/v1/2020.findings-emnlp.139Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational LinguisticsZhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1536-1547, Online, November 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.139. URL https://www.aclweb.org/anthology/ 2020.findings-emnlp.139.
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, arXiv:2002.08155A pre-trained model for programming and natural languages. arXiv preprintZhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. Codebert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155, 2020b.
Deep code search. Xiaodong Gu, Hongyu Zhang, Sunghun Kim, 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE). IEEEXiaodong Gu, Hongyu Zhang, and Sunghun Kim. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pp. 933-944. IEEE, 2018.
Daya Guo, Shuai Shuo Ren, Zhangyin Lu, Duyu Feng, Shujie Tang, Long Liu, Nan Zhou, Alexey Duan, Shengyu Svyatkovskiy, Fu, Pre-training code representations with data flow. ICLR 2021. 2021Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. Graphcodebert: Pre-training code representations with data flow. ICLR 2021, 2021.
Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. Michael Gutmann, Aapo Hyvärinen, Proceedings of the thirteenth international conference on artificial intelligence and statistics. the thirteenth international conference on artificial intelligence and statisticsJMLR Workshop and Conference ProceedingsMichael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 297-304. JMLR Workshop and Conference Proceedings, 2010.
Junjie Huang, Duyu Tang, Linjun Shou, Ming Gong, Ke Xu, Daxin Jiang, Ming Zhou, Nan Duan, arXiv:2105.13239Cosqa: 20,000+ web queries for code search and question answering. arXiv preprintJunjie Huang, Duyu Tang, Linjun Shou, Ming Gong, Ke Xu, Daxin Jiang, Ming Zhou, and Nan Duan. Cosqa: 20,000+ web queries for code search and question answering. arXiv preprint arXiv:2105.13239, 2021.
Codesearchnet challenge: Evaluating the state of semantic code search. Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, Marc Brockschmidt, arXiv:1909.09436arXiv preprintHamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. Codesearchnet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436, 2019.
Align before fuse: Vision and language representation learning with momentum distillation. Junnan Li, Ramprasaath R Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi, NeurIPS. 2021Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In NeurIPS, 2021.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Roberta, arXiv:1907.11692A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, arXiv:2102.04664A machine learning benchmark dataset for code understanding and generation. arXiv preprintShuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. Codexglue: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664, 2021.
Thinking fast and slow: Efficient text-to-visual retrieval with transformers. Antoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, Andrew Zisserman, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionAntoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, and Andrew Zisserman. Thinking fast and slow: Efficient text-to-visual retrieval with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9826-9836, 2021.
Retrieval on source code: a neural code search. Saksham Sachdev, Hongyu Li, Sifei Luan, Seohyun Kim, Koushik Sen, Satish Chandra, Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages. the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming LanguagesSaksham Sachdev, Hongyu Li, Sifei Luan, Seohyun Kim, Koushik Sen, and Satish Chandra. Retrieval on source code: a neural code search. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, pp. 31-41, 2018.
Intellicode compose: Code generation using transformer. Alexey Svyatkovskiy, Shengyu Shao Kun Deng, Neel Fu, Sundaresan, Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software EngineeringAlexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. Intellicode compose: Code generation using transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 1433-1443, 2020.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems. Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman GarnettLong Beach, CA, USAAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998-6008, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/ 3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.
Syncobert: Syntax-guided multi-modal contrastive pre-training for code representation. Xin Wang, Fei Mi Yasheng Wang, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang, arXiv:2108.04556arXiv preprintXin Wang, Fei Mi Yasheng Wang, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, and Xin Jiang. Syncobert: Syntax-guided multi-modal contrastive pre-training for code representation. arXiv preprint arXiv:2108.04556, 2021a.
Codet5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. Yue Wang, Weishi Wang, Shafiq Joty, C H Steven, Hoi, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language Processing2021Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. Codet5: Identifier-aware unified pre- trained encoder-decoder models for code understanding and generation. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, 2021b.
F Frank, Bogdan Xu, Graham Vasilescu, Neubig, arXiv:2101.11149-ide code generation from natural language: Promise and challenges. arXiv preprintFrank F Xu, Bogdan Vasilescu, and Graham Neubig. In-ide code generation from natural language: Promise and challenges. arXiv preprint arXiv:2101.11149, 2021.
From word embeddings to document similarities for improved information retrieval in software engineering. Xin Ye, Hui Shen, Xiao Ma, Razvan Bunescu, Chang Liu, Proceedings of the 38th international conference on software engineering. the 38th international conference on software engineeringXin Ye, Hui Shen, Xiao Ma, Razvan Bunescu, and Chang Liu. From word embeddings to document similarities for improved information retrieval in software engineering. In Proceedings of the 38th international conference on software engineering, pp. 404-415, 2016.
| [
"https://github.com/microsoft/CodeBERT/tree/master/GraphCodeBERT"
] |
[
"LTP: A New Active Learning Strategy for Bert-CRF Based Named Entity Recognition",
"LTP: A New Active Learning Strategy for Bert-CRF Based Named Entity Recognition"
] | [
"Mingyi Liu \nHarbin Institute of Technology Harbin\nChina\n",
"Zhiying Tu \nHarbin Institute of Technology Harbin\nChina\n",
"Zhongjie Wang \nHarbin Institute of Technology Harbin\nChina\n",
"Xiaofei Xu xiaofei@hit.edu.cn \nHarbin Institute of Technology Harbin\nChina\n"
] | [
"Harbin Institute of Technology Harbin\nChina",
"Harbin Institute of Technology Harbin\nChina",
"Harbin Institute of Technology Harbin\nChina",
"Harbin Institute of Technology Harbin\nChina"
] | [] | In recent years, deep learning has achieved great success in many natural language processing tasks including named entity recognition. The shortcoming is that a large amount of manually-annotated data is usually required. Previous studies have demonstrated that both transfer learning and active learning could elaborately reduce the cost of data annotation in terms of their corresponding advantages, but there is still plenty of room for improvement. We assume that the convergence of the two methods can complement with each other, so that the model could be trained more accurately with less labelled data, and active learning method could enhance transfer learning method to accurately select the minimum data samples for iterative learning. However, in real applications we found this approach is challenging because the sample selection of traditional active learning strategy merely depends on the final probability value of its model output, and this makes it quite difficult to evaluate the quality of the selected data samples. In this paper, we first examine traditional active learning strategies in a specific case of BERT-CRF that has been widely used in named entity recognition. Then we propose an uncertainty-based active learning strategy called Lowest Token Probability (LTP) which considers not only the final output but also the intermediate results. We test LTP on multiple datasets, and the experiments show that LTP performs better than traditional strategies (incluing LC and NLC) on both token-level F 1 and sentence-level accuracy, especially in complex imbalanced datasets. | null | [
"https://arxiv.org/pdf/2001.02524v1.pdf"
] | 210,064,296 | 2001.02524 | 4364ed5520e032edd4eb2b371e41bc4d83af1b0b |
LTP: A New Active Learning Strategy for Bert-CRF Based Named Entity Recognition
Mingyi Liu
Harbin Institute of Technology Harbin
China
Zhiying Tu
Harbin Institute of Technology Harbin
China
Zhongjie Wang
Harbin Institute of Technology Harbin
China
Xiaofei Xu xiaofei@hit.edu.cn
Harbin Institute of Technology Harbin
China
LTP: A New Active Learning Strategy for Bert-CRF Based Named Entity Recognition
active learning, named entity recognition, transfer learning, CRF
In recent years, deep learning has achieved great success in many natural language processing tasks, including named entity recognition. The shortcoming is that a large amount of manually-annotated data is usually required. Previous studies have demonstrated that both transfer learning and active learning could elaborately reduce the cost of data annotation in terms of their corresponding advantages, but there is still plenty of room for improvement. We assume that the convergence of the two methods can complement each other, so that the model could be trained more accurately with less labelled data, and the active learning method could enhance the transfer learning method to accurately select the minimum data samples for iterative learning. However, in real applications we found this approach is challenging because the sample selection of the traditional active learning strategy merely depends on the final probability value of its model output, and this makes it quite difficult to evaluate the quality of the selected data samples. In this paper, we first examine traditional active learning strategies in a specific case of BERT-CRF that has been widely used in named entity recognition. Then we propose an uncertainty-based active learning strategy called Lowest Token Probability (LTP) which considers not only the final output but also the intermediate results. We test LTP on multiple datasets, and the experiments show that LTP performs better than traditional strategies (including LC and NLC) on both token-level F1 and sentence-level accuracy, especially in complex imbalanced datasets.
INTRODUCTION
Over the past few years, papers applying deep neural networks (DNNs) to the task of named entity recognition (NER) have achieved noteworthy success [3], [11], [13]. However, under typical training procedures, the advantages of deep learning are established mostly by relying on a huge amount of labeled data. When applying these methods to domain-related tasks, their main problem lies in their need for a considerable human-annotated training corpus, which requires tedious and expensive work from domain experts. Thus, to make these methods more widely applicable and easier to adapt to various domains, the key is how to reduce the number of manually annotated training samples.
Both transfer learning and active learning are designed to reduce the amount of data annotation. However, the two methods work differently.
Transfer learning is the migration of trained model parameters to new models to facilitate the training of the new model. We can share the learned model parameters with the new model to accelerate and improve its learning, instead of learning from zero, so transfer learning could help to achieve better results on a small dataset. However, it should be noted that transfer learning works well only when the sample distributions of the source and target domain are similar, while significant distribution divergence might cause negative transfer.
Unlike the supervised learning setting, in which samples are selected and annotated at random, the process of active learning employs one or more human annotators by asking them to label new samples that are supposed to be the most informative in the creation of a new classifier. The greatest challenge in active learning is to determine which sample is most informative. The most common approach is uncertainty sampling, in which the model preferentially selects samples whose current prediction is least confident.
Quite a lot of work has been done to reduce the amount of data annotation for NER tasks through either transfer learning or active learning, but few studies have combined these two techniques to reduce labeling cost and avoid negative transfer. In this work, we try to integrate a widely used transfer learning based NER model, called Bert-CRF, with active learning.
When evaluating the effect of NER, most of the works only use the value of the token-level F1 score or entity-level F1 score. However, in some cases, this could be misleading, especially for languages that do not have a natural separator, such as Chinese. Moreover, the NER task is often used to support downstream tasks, which prefer that all entities in the sentence are correctly identified. Figure 1 shows an example where only one token gets the wrong label (the corresponding token-level and entity-level F1 values are 0.947 and 0.857). When this wrong result is used in user intention understanding, the phone will be considered as the user demand rather than the phone cases. So in this work, we evaluate not only the token-level F1 score but also the sentence-level accuracy. We first experiment with the traditional uncertainty-based active learning algorithms, and then we propose our own active learning strategy based on the lowest token probability within the best labeling sequence. Experiments show that our selection strategy is superior to traditional uncertainty-based selection strategies on multiple Chinese datasets, both in token-level F1 score and overall sentence-level accuracy, especially in the case of a large number of entity types.
Finally, we conduct an empirical analysis of the different active selection strategies and give some suggestions for using them.
The remainder of this paper is organized as follows. In Section 2 we summarize the related works in transfer learning and active learning. In section 3 we introduce an NER model called Bert-CRF, and the active learning framework. Section 4 describes in details the active learning strategies we propose. Section 5 describes the experimental setting, the datasets, and discusses the empirical results.
RELATED WORK
Named entity recognition
The framework of NER using a deep neural network can be regarded as a composition of encoder and decoder. For encoders, there are many options. Collobert et al. [5] first used a convolutional neural network (CNN) as the encoder. Traditional CNNs cannot solve the problem of long-distance dependency. In order to solve this problem, RNN [17], BiLSTM [9], Dilated CNN [24] and bidirectional Transformers [8] were proposed to replace CNN as the encoder. For decoders, some works used RNN for decoding tags [15], [17]. However, most competitive approaches relied on CRF as the decoder [11], [27].
Transfer learning
Transfer learning could help achieve satisfactory results on small datasets. There are two methods to apply a pre-trained language model to downstream tasks. The feature-based approach (e.g. Word2Vec [16], ELMO [18]) includes pre-trained representations as additional features in the embedding. The fine-tuning approach (e.g. GPT [19], BERT [8]) fine-tunes the pre-trained parameters on the specific downstream task. In this work, we use BERT (as encoder) for upstream pre-training and CRF (as decoder) for downstream fine-tuning.
Active learning
Active learning strategies have been well studied [7], [1], [23]. These strategies can be grouped into the following categories: uncertainty sampling [12] [6] [20] [10], query-by-committee [22] [25], information density [26], and Fisher information [21]. Several works have compared the performance of different types of selection strategies in NER/sequence labeling tasks with CRF models [2] [14] [21] [4]. These results show that, in most cases, uncertainty-based methods perform better and cost less time. However, we found that these traditional uncertainty-based strategies did not perform well with transfer learning. So, we propose our own uncertainty-based strategy in this work. The entire learning process is shown in Figure 2. There are two main stages in the process: model training (discussed in detail in this section) and sample selection (discussed in detail in Section 4). Through multiple iterations of the two stages, we can get the ideal results at quite low annotation cost.
The architecture of the NER network comprises an input layer, a pre-trained language model, a fully connected layer and finally a CRF (conditional random field) layer, which simulates label dependencies in the output. It must be noted that some work has been done with Bert-BiLSTM-CRF, which replaces the fully connected layer in Bert-CRF with a BiLSTM layer. However, we found in our experiments that there was no significant difference between the performance of Bert-BiLSTM-CRF and Bert-CRF, and the network structure of Bert-BiLSTM-CRF is more complex than Bert-CRF, with slower training speed. So Bert-CRF was selected in this paper.
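A minimal sketch of such a Bert-CRF model is shown below, assuming the Hugging Face transformers library and the pytorch-crf package; the checkpoint name and tag count are placeholders, not the authors' exact configuration:

import torch
from transformers import BertModel
from torchcrf import CRF  # pytorch-crf package

class BertCRF(torch.nn.Module):
    # BERT encoder -> fully connected emission layer -> linear-chain CRF decoder
    def __init__(self, num_tags, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.emission = torch.nn.Linear(self.bert.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.emission(hidden)            # per-token tag scores h_k(y_k; x)
        mask = attention_mask.bool()
        if tags is not None:
            # negative log-likelihood of the gold tag sequence (training)
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # most likely tag sequence y* via Viterbi decoding (inference)
        return self.crf.decode(emissions, mask=mask)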
Data Representation
We represent each input sentence following the Bert format; each token in the sentence is marked with BIO scheme tags. Special [CLS] and [SEP] tokens are added at the beginning and the end of the tag sequence, respectively. [PAD] tokens are added at the end of sequences to make their lengths uniform. The formatted sentence of length N is denoted as x = <x_1, x_2, ..., x_N>, and the corresponding tag sequence is denoted as y = <y_1, y_2, ..., y_N>.
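As an illustration, one example can be packed into this representation as follows; the tag set, the tokenizer checkpoint, the toy length limit, and the choice of giving special and padding tokens the O tag are all assumptions for the sketch:

from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")

def encode_example(chars, bio_tags, tag2id, max_len=16):
    tokens = ["[CLS]"] + chars + ["[SEP]"]
    tags = ["O"] + bio_tags + ["O"]                  # special tokens get the outside tag here
    tokens += ["[PAD]"] * (max_len - len(tokens))    # pad to a uniform length
    tags += ["O"] * (max_len - len(tags))
    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    attention_mask = [1 if t != "[PAD]" else 0 for t in tokens]
    label_ids = [tag2id[t] for t in tags]
    return input_ids, attention_mask, label_ids

tag2id = {"O": 0, "B-PER": 1, "I-PER": 2}
print(encode_example(list("张三来了"), ["B-PER", "I-PER", "O", "O"], tag2id))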
Bert & CRF Layer
Bert is one of the most successful pre-trained language models; here we use it as the character-level encoder. For each character x_i in the input sequence x, Bert converts it into a fixed-length vector w. CRFs are statistical graphical models which have demonstrated state-of-the-art accuracy on virtually all sequence labeling tasks, including NER. In particular, we use a linear-chain CRF, which is a popular choice of tag decoder and is adopted by most DNNs for NER. A linear-chain CRF model defines the posterior probability of y given x to be:
P(y \mid x; A) = \frac{1}{Z(x)} \exp\Big( h_1(y_1; x) + \sum_{k=1}^{n-1} \big[ h_{k+1}(y_{k+1}; x) + A_{y_k, y_{k+1}} \big] \Big)    (1)

where Z(x) is a normalization factor over all possible tag sequences of x, and h_k(y_k; x) indicates the probability of taking tag y_k at position k, which is the output of the preceding softmax layer. A is a transition matrix, which can be set manually or learned by the model; in our experiments, we let the model learn this parameter by itself. A_{y_k, y_{k+1}} is the score of a transition from tag y_k to tag y_{k+1}. We use y^* to represent the most likely tag sequence of x:
y^* = \arg\max_{y} P(y \mid x)    (2)
The parameters A are learned through maximum likelihood estimation, that is, by maximizing the log-likelihood function \ell over the sequences in the labeled data set \mathcal{L}:

\ell(\mathcal{L}; A) = \sum_{l=1}^{|\mathcal{L}|} \log P(y^{(l)} \mid x^{(l)}; A)    (3)

where |\mathcal{L}| is the size of the labeled set \mathcal{L}.
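As a concrete illustration of Equations 1-3, the following is a minimal NumPy sketch of the linear-chain CRF score, partition function and log-posterior; it assumes the per-position tag scores h_k(·; x) are already available (e.g., from the softmax layer on top of Bert), and all variable names are ours.

```python
import numpy as np

def sequence_score(h, A, y):
    """Unnormalized score of tag sequence y (numerator of Eq. 1, in log space):
    h[0, y_0] + sum_k (h[k+1, y_{k+1}] + A[y_k, y_{k+1}])."""
    score = h[0, y[0]]
    for k in range(len(y) - 1):
        score += h[k + 1, y[k + 1]] + A[y[k], y[k + 1]]
    return score

def log_partition(h, A):
    """log Z(x) computed with the forward algorithm in log space."""
    alpha = h[0]  # shape (num_tags,)
    for k in range(1, h.shape[0]):
        # alpha'_j = logsumexp_i(alpha_i + A[i, j]) + h[k, j]
        alpha = np.logaddexp.reduce(alpha[:, None] + A, axis=0) + h[k]
    return np.logaddexp.reduce(alpha)

def log_posterior(h, A, y):
    """log P(y | x; A) as in Equation 1; summing this over the labeled set gives Eq. 3."""
    return sequence_score(h, A, y) - log_partition(h, A)
```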
ACTIVE LEARNING STRATEGIES
The biggest challenge in active learning is how to select the instances that need to be manually labeled. A good selection strategy ϕ(x), which is a function used to evaluate each instance x in the unlabeled pool U, will select the most informative instances. Algorithm 1 illustrates the entire pool-based active learning process. In the remainder of this section, we describe various query strategy formulations of ϕ(·) in detail.
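The loop of Algorithm 1 can be sketched in a few lines of Python; here train, label and phi are placeholders for the model trainer, the human annotator and one of the selection strategies below, and taking the top-B scored instances at once is a batched approximation of the per-instance argmax in Algorithm 1.

```python
def active_learning_loop(labeled, unlabeled, phi, train, label,
                         batch_size=200, max_iters=12):
    """Pool-based active learning: retrain the model, then move the B most
    informative instances from the unlabeled pool into the labeled set."""
    for _ in range(max_iters):
        model = train(labeled)
        # Score every unlabeled instance with the selection strategy phi.
        ranked = sorted(unlabeled, key=lambda x: phi(model, x), reverse=True)
        batch, unlabeled = ranked[:batch_size], ranked[batch_size:]
        labeled = labeled + [(x, label(x)) for x in batch]
    return train(labeled)
```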
Least Confidence (LC)
Culotta and McCallum employ a simple uncertainty-based strategy for sequence models called least confidence (LC), which sorts examples in ascending order of the probability assigned by the model to the most likely sequence of tags:
\phi^{LC}(x) = 1 - P(y^* \mid x; A)    (4)
This confidence can be calculated using the posterior probability given by Equation 1.

Algorithm 1 Pool-based active learning framework
Require: labeled data set L, unlabeled data pool U, selection strategy ϕ(·), query batch size B
while the stop condition is not reached do
    // Train the model using the labeled set L
    train(L)
    for b = 1 to B do
        // Select the most informative instance
        x^* = arg max_{x ∈ U} ϕ(x)
        L = L ∪ {<x^*, label(x^*)>}
        U = U − {x^*}
    end for
end while

Preliminary analysis revealed that the LC strategy prefers to select longer sentences:
P(y^* \mid x; A) \propto \exp\Big( h_1(y^*_1; x) + \sum_{k=1}^{n-1} \big[ h_{k+1}(y^*_{k+1}; x) + A_{y^*_k, y^*_{k+1}} \big] \Big)    (5)
Since Equation 5 contains a summation over tokens, the LC method naturally favors longer sentences. Although the LC method is very simple and has some shortcomings, many works have demonstrated its effectiveness in sequence labeling tasks.
Normalized Least Confidence (NLC)
As mentioned in Section 4.1, LC favors longer sentences, which require more annotation labor. To overcome this drawback, we normalize the confidence as follows:
\phi^{NLC}(x) = 1 - \frac{1}{N} P(y^* \mid x; A)    (6)
where N is the length of x.
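As a small sketch of Equations 4 and 6, assuming the model exposes the log-probability of its Viterbi-best tag sequence (for instance via the CRF quantities sketched in Section 3):

```python
import numpy as np

def phi_lc(best_seq_log_prob):
    """Least Confidence (Eq. 4): 1 - P(y* | x; A)."""
    return 1.0 - np.exp(best_seq_log_prob)

def phi_nlc(best_seq_log_prob, sentence_length):
    """Normalized Least Confidence (Eq. 6): divide the confidence by the
    sentence length N to remove the bias toward long sentences."""
    return 1.0 - np.exp(best_seq_log_prob) / sentence_length
```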
Lowest Token Probability (LTP)
Our strategy is inspired by the Minimum Token Probability (MTP) strategy, which selects the most informative tokens regardless of the assignment performed by the CRF. MTP greedily samples the tokens whose highest probability among the labels is lowest:
\phi^{MTP}(x) = 1 - \min_{i} \max_{j} h_i(y_i = j \mid x; A)    (7)
where h_i(y_i = j | x; A) is the probability that j is the label at position i in the sequence. Unlike MTP, we believe that the sequence selected by the CRF is valuable: we look at the most probable sequence assignment and want each token in that sequence to have a high probability. We therefore propose a selection strategy called Lowest Token Probability (LTP), which selects the tokens whose probability under the most likely tag sequence y^* is lowest.
\phi^{LTP}(x) = 1 - \min_{y^*_i \in y^*} h_i(y^*_i \mid x; A)    (8)
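The difference between MTP (Eq. 7) and the proposed LTP (Eq. 8) can be seen in a short sketch, assuming a matrix h of token-level label probabilities with shape (sequence length, number of tags) and the Viterbi-best tag indices y_star produced by the CRF:

```python
import numpy as np

def phi_mtp(h):
    """Minimum Token Probability (Eq. 7): ignore the decoded sequence and score
    the sentence by its least confident token, 1 - min_i max_j h[i, j]."""
    return 1.0 - h.max(axis=1).min()

def phi_ltp(h, y_star):
    """Lowest Token Probability (Eq. 8): look only at the tags actually chosen in
    the most likely sequence y* and score the sentence by the weakest of them."""
    chosen = h[np.arange(len(y_star)), y_star]
    return 1.0 - chosen.min()
```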
EXPERIMENTS
5.1 Datasets
We experimented with and evaluated the active learning strategies of Section 4 on three Chinese datasets. People's Daily is a collection of newswire articles annotated with three entity types: person, organization and location. Boson-NER 1 is a set of online news annotations published by bosonNLP, which contains 6 entity types, such as person, product and time. The OntoNotes-5.0 Chinese data (bn part) contains 18 entity types. All corpora are formatted in the "BIO" sequence representation. Table 2 shows some statistics of the datasets in terms of size, number of entity types, distribution of the labels, etc.
Experimental Setting
We randomly chose an initial training set L_1 of 99 sentences for the People's Daily dataset, 75 sentences for the Boson-NER dataset, and 76 sentences for the OntoNotes-5.0 dataset. The size of the update batch B represents a trade-off between the ideal case, in which the system is retrained after every single annotation, and a practical setting with larger B that limits the computational cost and improves manual labeling efficiency in the real world. In our experiments, B is set to 200. We fixed the number of active learning iterations at 12, because none of the algorithms improves noticeably after 12 iterations.
In the NER model, we use BERT-Base-Chinese 2, which has 110M parameters. The training batch size is set to 32, and the max_seq_length is set to 80. We set the learning rate to 0.00001. For each iteration, we train 30 epochs to ensure model convergence. Other parameters related to BERT are set to their default values. In the fully connected layer, we set the dropout rate to 0.9 to prevent overfitting. The transition matrix of the CRF is also left for the model to learn by itself.
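For reference, the fine-tuning and active learning settings reported above can be collected in one place; this is only a summary sketch of the stated values, and the field names are ours.

```python
# Hyperparameters reported in this section, gathered into a plain config dict.
BERT_CRF_ACTIVE_LEARNING_CONFIG = {
    "pretrained_model": "BERT-Base-Chinese",   # ~110M parameters
    "train_batch_size": 32,
    "max_seq_length": 80,
    "learning_rate": 1e-5,
    "epochs_per_iteration": 30,
    "fc_dropout_rate": 0.9,
    "crf_transition_matrix": "learned",        # not set manually
    "query_batch_size_B": 200,
    "active_learning_iterations": 12,
}
```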
We empirically compare the selection strategies proposed in Section 4, as well as the uniformly random baseline (RAND). We evaluate each selection strategy by constructing learning curves that plot the overall F1 (for tokens, with the O tag excluded from the metric) and accuracy (for sentences). To reduce the effect of chance, we ran 10 experiments for each selection strategy using different random initial training sets L_1. All results are averaged across these experiments.

1 https://bosonnlp.com/resources/BosonNLP_NER_6C.zip
2 https://github.com/google-research/bert
Results
From the learning curves in Figure 3, it is clear that all active learning algorithms perform better than the random baseline on the People's Daily and Boson-NER datasets. On the OntoNotes-5.0 dataset, LC and NLC are better than RAND in the early iterations. Our approach LTP performs best on all datasets. Figure 3 also shows that, by combining transfer learning with an appropriate active learning strategy, the amount of data that needs to be labeled can be greatly reduced. For example, LTP achieves 99% of the performance of FULL using only 4.3% (5.0%, 5.4%) of the training sentences (tokens, entities) on the People's Daily dataset, 22.8% (31.1%, 40.8%) on the Boson-NER dataset, and 19.32% (26.1%, 30.2%) on the OntoNotes-5.0 dataset. Figure 4 shows the results for sentence-level accuracy on the three datasets. The results exceeded our expectations and were very interesting. Firstly, they confirm that the token-level F1 value can sometimes be misleading, as we mentioned in Section 1. For example, on the Boson-NER dataset, although the token-level F1 scores of LTP and LC are similar in the last few iterations, the sentence-level accuracy differs by 2%. Secondly, LTP is much better than the rest of the methods, and can come close to using all of the training data, especially on complex datasets (e.g., Boson-NER, OntoNotes). Thirdly, LC and NLC perform poorly at sentence-level accuracy.
The two operations that are time-consuming in the actual sequence labeling process are the amount of text that the labeler needs to read and the amount of text that the labeler needs to label. In Figure 5, we use the total number of tokens and the total number of entities to represent these two factors. One can see that, compared with LC, LTP achieves better performance with less labeling effort.
DISCUSSION AND SUGGESTION
In this section, we discuss possible reasons for the gap between the different selection strategies. Then we give some suggestions on how to choose among active learning strategies in practical applications. We first give the detailed distribution of the entities in the different datasets, as shown in Tables 3-5. One can see that the biggest difference between the OntoNotes dataset and the remaining two datasets is that its entity distribution is extremely unbalanced: the number of GPE entities is approximately 162 times the number of LANGUAGE entities in the OntoNotes-5.0 dataset.
Due to page limitations, we are unable to present the analysis results for all datasets here, so we select the first 6 iterations on the OntoNotes dataset for explanation, for two reasons:
(1) On the OntoNotes dataset, the difference in the effect of the different strategies is most obvious. (2) The first 6 iterations show large performance changes. Figure 7 shows the deviation between the sample distribution selected by different selection strategies and the overall sample distribution. We can see two differences between LTP and the other active learning strategies in sampling: (1) for entities with a high overall proportion (such as GPE, PERSON, ORG), LTP consistently samples below the overall proportion; (2) for entities with a low overall proportion (such as LAW, LANGUAGE, PRODUCT), LTP consistently samples above the overall proportion. This sampling strategy is consistent with our intuition, that is, to reduce the sampling frequency of the samples that are already well learned, and to increase the sampling frequency of the samples that are not sufficiently learned. Finally, we explore the stability of sampling with different active selection strategies. We use the offset between two adjacent sampling iterations to indicate stability:
\mathrm{offset}_i = \sum_{e \in E} \lvert \mathrm{pro}_i(e) - \mathrm{pro}_{i-1}(e) \rvert    (9)
where E is the set of entity types and pro_i(e) represents the proportion of entity class e in iteration i. Figure 6 shows this result. We found that LTP obtains a stable sampling distribution faster, which means that LTP is more stable than the other active learning strategies.
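A small sketch of the stability measure in Equation 9, where each iteration's selection is summarized as a mapping from entity type to its proportion (names are ours):

```python
def sampling_offset(prev_proportions, curr_proportions):
    """Offset between two adjacent iterations (Eq. 9): the sum over entity types
    of the absolute change in the selected proportion."""
    entity_types = set(prev_proportions) | set(curr_proportions)
    return sum(abs(curr_proportions.get(e, 0.0) - prev_proportions.get(e, 0.0))
               for e in entity_types)
```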
Based on the discussion above, we give several suggestions. (1) If your dataset is simple, the distribution of the different entities is relatively balanced (e.g., People's Daily), and you do not need to worry about sentence-level accuracy, transfer learning alone is sufficient (adding active learning at this point improves performance only slightly, while incurring the cost of model retraining). (2) If your dataset is complex, many entity classes are involved, and the distribution of the different entities is uneven (e.g., OntoNotes-5.0), then choose LTP. (3) If you need to care about sentence-level accuracy, always use LTP.
CONCLUSION
We proposed a new active learning strategy for Bert-CRF based named entity recognition. The experiments show that, compared with traditional active selection strategies, our strategy performs better, especially on complex datasets. Furthermore, we analyze the different selection strategies and give some suggestions on how to use them.
Figure 1: An example of the F1 score leading to misunderstanding.
Figure 2: The overall framework.
Figure 3: Token-level F1 results on the three datasets.
Figure 4: Sentence-level accuracy on the three corpora.
Figure 5: Total number of selected tokens and entities.
Figure 6: Offset between two adjacent iterations of selection.
Figure 7: Comparison of the selected entity distribution and the overall entity distribution for the first 6 iterations on the OntoNotes dataset.
Table 1: Example of data representation. [PAD] tags are not shown.
Sentence: Trump was born in the United States
Tag: [CLS] B-PER O O O B-LOC I-LOC I-LOC [SEP]
Table 2: Training (testing) data statistics. #S is the number of sentences in the dataset, #T is the number of tokens, #E is the number of entity types, ASL is the average length of a sentence, ASE is the average number of entities in a sentence, AEL is the average length of an entity, %PT is the percentage of tokens with a positive label, %AC is the percentage of sentences with at least one entity, and %DAC is the percentage of sentences with two or more entities.

Corpus | #S | #T | #E | ASL | ASE | AEL | %PT | %AC | %DAC
People's Daily | 38950 (16608) | 1651613 (701518) | 3 (3) | 42.4 (42.2) | 1.45 (1.47) | 3.24 (3.22) | 11.1% (11.2%) | 57.8% (58.3%) | 34.9% (36.0%)
Boson-NER | 7348 (3133) | 378865 (160693) | 6 (6) | 51.5 (51.2) | 2.20 (2.19) | 3.99 (4.00) | 17.1% (17.1%) | 73.2% (72.9%) | 50.3% (50.0%)
OntoNotes-5.0 (bn-zh) | 7637 (917) | 368803 (45601) | 18 (18) | 48.2 (49.7) | 3.45 (3.71) | 3.14 (3.07) | 22.4% (23.0%) | 87.5% (89.6%) | 71.8% (75.02%)
Table 3: Entity distribution in the People's Daily training set.
Person 23.85% | Location 48.88% | Organization 27.27%

Table 4: Entity distribution in the Boson-NER training set.
PER 22.92% | LOC 21.46% | ORG 12.46% | TIM 18.79% | PRODUCT 13.63% | COMPANY 10.75%

Table 5: Entity distribution in the OntoNotes-5.0 training set.
GPE 29.20% | QUANTITY 1.11% | DATE 11.84% | EVENT 1.10% | PERSON 18.43% | CARDINAL 10.21% | NORP 4.07% | ORG 13.80% | FAC 1.86% | ORDINAL 1.36% | TIME 2.83% | MONEY 0.96% | WORK_OF_ART 0.83% | LOC 2.75% | PRODUCT 0.36% | LAW 0.46% | PERCENT 0.62% | LANGUAGE 0.18%
ACKNOWLEDGMENTS
Pranjal Awasthi, Maria Florina Balcan, and Philip M. Long. 2014. The power of localization for efficiently learning linear separators with noise. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing. ACM, 449-458.
Yukun Chen, Thomas A. Lasko, Qiaozhu Mei, Joshua C. Denny, and Hua Xu. 2015. A study of active learning methods for named entity recognition in clinical text. Journal of Biomedical Informatics 58 (2015), 11-18. https://doi.org/10.1016/j.jbi.2015.09.010
Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics 4 (2016), 357-370.
Vincent Claveau and Ewa Kijak. 2018. Strategies to select examples for active learning with conditional random fields. In Computational Linguistics and Intelligent Text Processing, Alexander Gelbukh (Ed.). Springer International Publishing, Cham, 30-43.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12 (2011), 2493-2537.
Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI, Vol. 5. 746-751.
Sanjoy Dasgupta, Adam Tauman Kalai, and Claire Monteleoni. 2005. Analysis of perceptron-based active learning. In International Conference on Computational Learning Theory. Springer, 249-263.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 (2015).
Seokhwan Kim, Yu Song, Kyungduk Kim, Jeong-Won Cha, and Gary Geunbae Lee. 2006. MMR-based active machine learning for bio named entity recognition. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Association for Computational Linguistics, 69-72.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT. 260-270.
David D. Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning Proceedings 1994. Elsevier, 148-156.
Nut Limsopatham and Nigel Henry Collier. 2016. Bidirectional LSTM for named entity recognition in Twitter messages. (2016).
Diego Marcheggiani and Thierry Artières. 2014. An experimental comparison of active learning strategies for partially labeled sequences. In EMNLP.
Grégoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In Interspeech. 3771-3775.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. 3111-3119.
Thien Huu Nguyen, Avirup Sil, Georgiana Dinu, and Radu Florian. 2016. Toward mention detection robustness with recurrent neural networks. arXiv preprint arXiv:1602.07749 (2016).
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018).
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. (2018).
Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active hidden Markov models for information extraction. In International Symposium on Intelligent Data Analysis. Springer, 309-318.
Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 1070-1079.
H. Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 1992. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory. ACM, 287-294.
Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. arXiv preprint arXiv:1707.05928 (2017).
Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. arXiv preprint arXiv:1702.02098 (2017).
Jennifer Vandoni, Emanuel Aldea, and Sylvie Le Hégarat-Mascle. 2019. Evidential query-by-committee active learning for pedestrian detection in high-density crowds. International Journal of Approximate Reasoning 104 (2019), 166-184.
Kai Wei, Rishabh Iyer, and Jeff Bilmes. 2015. Submodularity in data subset selection and active learning. In International Conference on Machine Learning. 1954-1963.
Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270 (2016).
| [
"https://github.com/google-research/bert"
] |
[
"Learning Question-Guided Video Representation for Multi-Turn Video Question Answering",
"Learning Question-Guided Video Representation for Multi-Turn Video Question Answering",
"Learning Question-Guided Video Representation for Multi-Turn Video Question Answering",
"Learning Question-Guided Video Representation for Multi-Turn Video Question Answering"
] | [
"Guan-Lin Chao guanlinchao@cmu.edu ",
"Abhinav Rastogi ",
"Google Ai ",
"Semih Yavuz syavuz@cs.ucsb.edu ",
"Hakkani-Tür Dilek dilek@ieee.org ",
"Alexa Amazon ",
"Ai ",
"Jindong Chen jdchen@google.com ",
"Google Ai ",
"Ian Lane lane@cmu.edu ",
"\nCarnegie Mellon University\nUniversity of California\nSanta Barbara\n",
"\nCarnegie Mellon University\n\n",
"Guan-Lin Chao guanlinchao@cmu.edu ",
"Abhinav Rastogi ",
"Google Ai ",
"Semih Yavuz syavuz@cs.ucsb.edu ",
"Hakkani-Tür Dilek dilek@ieee.org ",
"Alexa Amazon ",
"Ai ",
"Jindong Chen jdchen@google.com ",
"Google Ai ",
"Ian Lane lane@cmu.edu ",
"\nCarnegie Mellon University\nUniversity of California\nSanta Barbara\n",
"\nCarnegie Mellon University\n\n"
] | [
"Carnegie Mellon University\nUniversity of California\nSanta Barbara",
"Carnegie Mellon University\n",
"Carnegie Mellon University\nUniversity of California\nSanta Barbara",
"Carnegie Mellon University\n"
] | [] | Understanding and conversing about dynamic scenes is one of the key capabilities of AI agents that navigate the environment and convey useful information to humans. Video question answering is a specific scenario of such AI-human interaction where an agent generates a natural language response to a question regarding the video of a dynamic scene. Incorporating features from multiple modalities, which often provide supplementary information, is one of the challenging aspects of video question answering. Furthermore, a question often concerns only a small segment of the video, hence encoding the entire video sequence using a recurrent neural network is not computationally efficient. Our proposed question-guided video representation module efficiently generates the token-level video summary guided by each word in the question. The learned representations are then fused with the question to generate the answer. Through empirical evaluation on the Audio Visual Scene-aware Dialog (AVSD) dataset (Alamri et al., 2019a), our proposed models in single-turn and multiturn question answering achieve state-of-theart performance on several automatic natural language generation evaluation metrics. | 10.18653/v1/w19-5926 | [
"https://arxiv.org/pdf/1907.13280v1.pdf"
] | 199,000,972 | 1907.13280 | fb5af7b7d5ebf9e8ff4164233a73b8fe4fe737a3 |
Learning Question-Guided Video Representation for Multi-Turn Video Question Answering
Guan-Lin Chao guanlinchao@cmu.edu
Abhinav Rastogi
Google Ai
Semih Yavuz syavuz@cs.ucsb.edu
Hakkani-Tür Dilek dilek@ieee.org
Alexa Amazon
Ai
Jindong Chen jdchen@google.com
Google Ai
Ian Lane lane@cmu.edu
Carnegie Mellon University
University of California
Santa Barbara
Carnegie Mellon University
Learning Question-Guided Video Representation for Multi-Turn Video Question Answering
Understanding and conversing about dynamic scenes is one of the key capabilities of AI agents that navigate the environment and convey useful information to humans. Video question answering is a specific scenario of such AI-human interaction where an agent generates a natural language response to a question regarding the video of a dynamic scene. Incorporating features from multiple modalities, which often provide supplementary information, is one of the challenging aspects of video question answering. Furthermore, a question often concerns only a small segment of the video, hence encoding the entire video sequence using a recurrent neural network is not computationally efficient. Our proposed question-guided video representation module efficiently generates the token-level video summary guided by each word in the question. The learned representations are then fused with the question to generate the answer. Through empirical evaluation on the Audio Visual Scene-aware Dialog (AVSD) dataset (Alamri et al., 2019a), our proposed models in single-turn and multi-turn question answering achieve state-of-the-art performance on several automatic natural language generation evaluation metrics.
Introduction
Nowadays dialogue systems are becoming more and more ubiquitous in our lives. It is essential for such systems to perceive the environment, gather data and convey useful information to humans in an accessible fashion. Video question answering (VideoQA) systems provide a convenient way for humans to acquire visual information about the environment. If a user wants to obtain information about a dynamic scene, one can simply ask the VideoQA system a question in natural language, and the system generates a natural-language answer. The task of a VideoQA dialogue system in
this paper is described as follows: given a video as grounding evidence, in each dialogue turn the system is presented with a question and is required to generate an answer in natural language. Figure 1 shows an example of multi-turn VideoQA. It is composed of a video clip and a dialogue, where the dialogue contains open-ended question-answer pairs regarding the scene in the video.

Figure 1: An example from the AVSD dataset. Each example contains a video and its associated question answering dialogue regarding the video scene.
User: Can you tell me what is happening in the video?
System: A person is packing a bag and then looking into the mirror.
User: Is the person a woman?
System: No, the person is a young man.
User: What room is this person in?
System: It looks like a bedroom or a dorm room.
User: What color are the walls?
System: The walls look like light purple.

In order to answer the questions correctly, the system needs to be effective at understanding the question, the video and the dialogue context altogether. Recent work on VideoQA has shown promising performance using multi-modal attention fusion for combination of features from different modalities (Xu et al., 2017; Zeng et al., 2017; Zhao et al., 2018; Gao et al., 2018). However, one of the challenges is that the length of the video sequence can be very long and the question may concern only a small segment in the video. Therefore, it may be time inefficient to encode the entire video sequence using a recurrent neural network.
In this work, we present the question-guided video representation module which learns 1) to summarize the video frame features efficiently using an attention mechanism and 2) to perform feature selection through a gating mechanism. The learned question-guided video representation is a compact video summary for each token in the question. The video summary and question information are then fused to create multi-modal representations. The multi-modal representations and the dialogue context are then passed as input to a sequence-to-sequence model with attention to generate the answer (Section 3). We empirically demonstrate the effectiveness of the proposed methods using the AVSD dataset (Alamri et al., 2019a) for evaluation (Section 4). The experiments show that our model for single-turn VideoQA achieves state-of-the-art performance, and our multi-turn VideoQA model shows competitive performance, in comparison with existing approaches (Section 5).
Related Work
In the recent years, research on visual question answering has accelerated following the release of multiple publicly available datasets. These datasets include COCO-QA (Ren et al., 2015a), VQA (Agrawal et al., 2017), and Visual Madlibs (Yu et al., 2015) for image question answering and MovieQA (Tapaswi et al., 2016), TGIF-QA (Jang et al., 2017), and TVQA for video question answering.
Image Question Answering
The goal of image question answering is to infer the correct answer, given a natural language question related to the visual content of an image. It assesses the system's capability of multi-modal understanding and reasoning regarding multiple aspects of humans and objects, such as their appearance, counting, relationships and interactions. State-of-the-art image question answering models make use of spatial attention to obtain a fixed-length question-dependent embedded representation of the image, which is then combined with the question feature to predict the answer (Xu and Saenko, 2016; Kazemi and Elqursh, 2017; Anderson et al., 2018).
Dynamic memory (Kumar et al., 2016;Xiong et al., 2016) and co-attention mechanism (Lu et al., 2016;Ma et al., 2018) are also adopted to model sophisticated cross-modality interactions.
Video Question Answering
VideoQA is a more complex task. As a video is a sequence of images, it contains not only appearance information but also motion and transitions. Therefore, VideoQA requires spatial and temporal aggregation of image features to encode the video into a question-relevant representation. Hence, temporal frame-level attention is utilized to model the temporal dynamics, where frame-level attribute detection and unified video representation are learned jointly (Ye et al., 2017; Xu et al., 2017; Mun et al., 2017). Similarly, some works use Faster R-CNN (Ren et al., 2015b) trained with the Visual Genome (Krishna et al., 2017) dataset to detect object and attribute regions in each frame, which are used as input features to the question answering model. Previous works also adopt various forms of external memory (Sukhbaatar et al., 2015; Kumar et al., 2016; Graves et al., 2016) to store question information, which allows multiple iterations of question-conditioned inference on the video features (Na et al., 2017; Zeng et al., 2017; Gao et al., 2018; Chenyou Fan, 2019).
Video Question Answering Dialogue
Recently in DSTC7, Alamri et al. (2019a) introduce the Audio-Visual Scene-aware Dialog (AVSD) dataset for multi-turn VideoQA. In addition to the challenge of integrating the questions and the dynamic scene information, the dialogue system also needs to effectively incorporate the dialogue context for coreference resolution to fully understand the user's questions across turns. To this end, Alamri et al. (2019b) use the two-stream inflated 3D ConvNet (I3D) model (Carreira and Zisserman, 2017) to extract spatiotemporal visual frame features (I3D-RGB features for RGB input and I3D-flow features for optical flow input), and propose the Naïve Fusion method to combine multi-modal inputs based on the hierarchical recurrent encoder (HRE) architecture (Das et al., 2017). Hori et al. (2018) extend the Naïve Fusion approach and propose the Attentional Fusion method, which learns multi-modal attention weights to fuse features from different modalities. Zhuang et al. (2019) modify the Attentional Fusion method and propose to use Maximum Mutual Information (MMI) (Bahl et al., 1986) as the training objective. Besides the HRE architecture, the multi-source sequence-to-sequence (Multi-Source Seq2Seq) architecture with attention (Zoph and Knight, 2016; Firat et al., 2016) is also commonly applied (Pasunuru and Bansal, 2019; Kumar et al., 2019; Yeh et al., 2019). Previous works (Sanabria et al., 2019; Le et al., 2019; Pasunuru and Bansal, 2019) also explore various attention mechanisms to incorporate the different modal inputs, such as hierarchical attention (Libovickỳ and Helcl, 2017) and cross attention. For modeling visual features, some works propose to use Dynamic memory networks (Kumar et al., 2016), and Nguyen et al. (2019) propose to use feature-wise linear modulation layers (Perez et al., 2018).
Approach
We formulate the multi-turn VideoQA task as follows. Given a sequence of raw video frames f , the embedded question sentence x = {x 1 , . . . , x K } and the single concatenated embedded sentence of the dialogue context d = {d 1 , . . . , d M }, the output is an answer sentence y = {y 1 , . . . , y N }. The architecture of our proposed approach is illustrated in Figure 2. First the Video Frame Feature Extraction Module extracts the I3D-RGB frame features from the video frames (Section 3.1). The Question-Guided Video Representation Module takes as input the embedded question sentence and the I3D-RGB features, and generates a compact video representation for each token in the question sentence (Section 3.2). In the Video-Augmented Question Encoder, the question tokens are first augmented by their corresponding per-token video representations and then encoded by a bidirectional LSTM (Section 3.3). Similarly, in the Dialogue Context Encoder, the dialogue context is encoded by a bidirectional LSTM (Section 3.4). Finally, in the Answer Decoder, the outputs from the Video-Augmented Question Encoder and the Dialogue Context Encoder are used as attention memory for the LSTM decoder to predict the answer sentence (Section 3.5). Our encoders and decoder work in the same way as the multi-source sequence-to-sequence models with attention (Zoph and Knight, 2016;Firat et al., 2016).
Video Frame Feature Extraction Module
In this work, we make use of the I3D-RGB frame features as the visual modality input, which are pre-extracted and provided in the AVSD dataset (Alamri et al., 2019a). Here we briefly describe the I3D-RGB feature extraction process, and we refer the readers to (Carreira and Zisserman, 2017) for more details of the I3D model. The two-stream Inflated 3D ConvNet (I3D) is a state-of-the-art action recognition model which operates on video inputs. The I3D model takes as input two streams of video frames: RGB frames and optical flow frames. The two streams are separately passed to a respective 3D ConvNet, which is inflated from 2D ConvNets to incorporate the temporal dimension. Two sequences of spatiotemporal features are produced by the respective 3D ConvNets, which are jointly used to predict the action class. The I3D-RGB features provided in the AVSD dataset are intermediate spatiotemporal representations from the "Mixed 5c" layer of the RGB stream's 3D ConvNet. The AVSD dataset uses the I3D model parameters pre-trained on the Kinetics dataset (Kay et al., 2017). To reduce the number of parameters in our model, we use a trainable linear projection layer to reduce the dimensionality of the I3D-RGB features from 2048 to 256. Extracted from the video frames f and projected to a lower dimension, the sequence of dimension-reduced I3D-RGB frame features is denoted by r = {r_1, . . . , r_L}, where r_i ∈ R^256 for all i.
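A minimal PyTorch sketch of this dimensionality reduction; the layer and variable names, as well as the example frame count, are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Trainable linear projection of pre-extracted I3D-RGB features from 2048 to 256
# dimensions, applied independently to each frame.
project = nn.Linear(2048, 256)

i3d_rgb = torch.randn(1, 120, 2048)   # (batch, number of frames L, feature dim)
r = project(i3d_rgb)                  # (batch, L, 256): the sequence r_1, ..., r_L
```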
Question-Guided Video Representation Module
We use a bidirectional LSTM network to encode the sequence of question token embedding x = {x 1 , . . . , x K }. The token-level intermediate representations are denoted by x tok = {x tok 1 , . . . , x tok K }, and the embedded representation of the entire question is denoted by x sen . These outputs will be used to guide the video representation.
\overrightarrow{h}_0 = \overleftarrow{h}_{K+1} = 0    (1)
\overrightarrow{h}_k = \mathrm{LSTM}^{forw}_{guide}(x_k, \overrightarrow{h}_{k-1})    (2)
\overleftarrow{h}_k = \mathrm{LSTM}^{back}_{guide}(x_k, \overleftarrow{h}_{k+1})    (3)
x^{tok}_k = \overrightarrow{h}_k \oplus \overleftarrow{h}_k, \quad \forall k \in \{1, \ldots, K\}    (4)
x^{sen} = \overrightarrow{h}_K \oplus \overleftarrow{h}_1    (5)

where ⊕ denotes vector concatenation; \overrightarrow{h} and \overleftarrow{h} represent the local forward and backward LSTM hidden states.
Per-Token Visual Feature Summarization
Generally the sequence length of the video frame features is quite large, as shown in Table 1. Therefore it is not computationally efficient to encode the video features using a recurrent neural network. We propose to use an attention mechanism to generate a context vector that efficiently summarizes the I3D-RGB features. We use the trilinear function as a similarity measure to identify the frames most similar to the question tokens. For each question token x_k, we compute the similarity scores of its encoded representation x^{tok}_k with each of the I3D-RGB features r. The similarity scores s_k are converted to an attention distribution w^{att}_k over the I3D-RGB features by the softmax function, and the video summary v_k corresponding to the question token x_k is defined as the attention-weighted linear combination of the I3D-RGB features. We also explored using the dot product for computing similarity and empirically found that it yields suboptimal results.
s_{k,l} = \mathrm{trilinear}(x^{tok}_k, r_l)    (6)
       = W_{sim} \big[ x^{tok}_k \oplus r_l \oplus (x^{tok}_k \odot r_l) \big], \quad \forall l \in \{1, \ldots, L\}    (7)
w^{att}_k = \mathrm{softmax}(s_k)    (8)
v_k = \sum_{l=1}^{L} w^{att}_{k,l} \, r_l, \quad \forall k \in \{1, \ldots, K\}    (9)

where \odot denotes element-wise multiplication, and W_{sim} is a trainable variable.
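To illustrate Equations 6-9, the following is a minimal PyTorch sketch of the per-token summarization, assuming the encoded question tokens and projected frame features share the same dimensionality d (as the element-wise product in Equation 7 requires); module and variable names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerTokenVideoSummary(nn.Module):
    """Question-guided summarization (Eqs. 6-9): trilinear similarity between every
    question token and every frame, softmax over frames, weighted sum of frames."""
    def __init__(self, dim):
        super().__init__()
        # W_sim maps the concatenation [x ; r ; x * r] to a scalar similarity.
        self.w_sim = nn.Linear(3 * dim, 1, bias=False)

    def forward(self, x_tok, r):
        # x_tok: (K, d) encoded question tokens, r: (L, d) frame features.
        K, L = x_tok.size(0), r.size(0)
        x_exp = x_tok.unsqueeze(1).expand(K, L, -1)          # (K, L, d)
        r_exp = r.unsqueeze(0).expand(K, L, -1)              # (K, L, d)
        feats = torch.cat([x_exp, r_exp, x_exp * r_exp], dim=-1)
        s = self.w_sim(feats).squeeze(-1)                    # (K, L) similarity scores
        w_att = F.softmax(s, dim=-1)                         # attention over frames
        return w_att @ r                                     # (K, d) per-token summaries v_k
```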
Visual Feature Gating
Not all details in the video are important for answering a question. Attention helps in discarding the unimportant frames in the time dimension. We propose a gating mechanism which enables us to perform feature selection within each frame. We project the sentence-level question representation x^{sen} through fully-connected layers with ReLU nonlinearity to generate a gate vector g. For each question token x_k, its corresponding video summary v_k is then multiplied element-wise with the gate vector g to generate a gated visual summary v^g_k. We also experimented with applying gating on the dimension-reduced I3D-RGB features r, prior to the per-token visual feature summarization step, but it resulted in inferior performance.

g = \mathrm{sigmoid}\big( W_{g,1} \, \mathrm{ReLU}(W_{g,2} x^{sen} + b_{g,2}) + b_{g,1} \big)    (10)
v^{g}_k = v_k \odot g, \quad \forall k \in \{1, \ldots, K\}    (11)

where W_{g,1}, b_{g,1}, W_{g,2}, b_{g,2} are trainable variables.
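A PyTorch sketch of the gate in Equations 10-11; the hidden size and module names are assumptions of this illustration.

```python
import torch
import torch.nn as nn

class QuestionGuidedGate(nn.Module):
    """Question-guided feature gating (Eqs. 10-11): a sigmoid gate computed from the
    sentence-level question vector, applied element-wise to each per-token summary."""
    def __init__(self, question_dim, video_dim, hidden_dim=256):
        super().__init__()
        self.fc2 = nn.Linear(question_dim, hidden_dim)   # W_{g,2}, b_{g,2}
        self.fc1 = nn.Linear(hidden_dim, video_dim)      # W_{g,1}, b_{g,1}

    def forward(self, x_sen, v):
        # x_sen: (question_dim,) sentence-level question vector, v: (K, video_dim).
        g = torch.sigmoid(self.fc1(torch.relu(self.fc2(x_sen))))  # gate vector g
        return v * g                                               # broadcast over the K tokens
```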
Video-Augmented Question Encoder
Given the sequence of per-token gated visual summary v g = {v g 1 , . . . , v g K }, we augment the question features by concatenating the embedded question tokens x = {x 1 , . . . , x K } with their associated per-token video summary. The augmented question features are then encoded using a bidirectional LSTM. The token-level video-augmented question features are denoted by q tok = {q tok 1 , . . . , q tok K }, and the sentence-level feature is denoted by q sen .
\overrightarrow{h}_0 = \overleftarrow{h}_{K+1} = 0    (12)
\overrightarrow{h}_k = \mathrm{LSTM}^{forw}_{ques}(x_k \oplus v^{g}_k, \overrightarrow{h}_{k-1})    (13)
\overleftarrow{h}_k = \mathrm{LSTM}^{back}_{ques}(x_k \oplus v^{g}_k, \overleftarrow{h}_{k+1})    (14)
q^{tok}_k = \overrightarrow{h}_k \oplus \overleftarrow{h}_k, \quad \forall k \in \{1, \ldots, K\}    (15)
q^{sen} = \overrightarrow{h}_K \oplus \overleftarrow{h}_1    (16)

where \overrightarrow{h} and \overleftarrow{h} represent the local forward and backward LSTM hidden states.
Dialogue Context Encoder
Similar to the video-augmented question encoder, we encode the embedded dialogue context tokens d = {d 1 , . . . , d M } using a bidirectional LSTM. The embedded token-level representations are denoted by d tok = {d tok 1 , . . . , d tok M }.
\overrightarrow{h}_0 = \overleftarrow{h}_{M+1} = 0    (17)
\overrightarrow{h}_m = \mathrm{LSTM}^{forw}_{dial}(d_m, \overrightarrow{h}_{m-1})    (18)
\overleftarrow{h}_m = \mathrm{LSTM}^{back}_{dial}(d_m, \overleftarrow{h}_{m+1})    (19)
d^{tok}_m = \overrightarrow{h}_m \oplus \overleftarrow{h}_m, \quad \forall m \in \{1, \ldots, M\}    (20)

where \overrightarrow{h} and \overleftarrow{h} represent the local forward and backward LSTM hidden states.
Answer Decoder
The final states of the forward and backward LSTM units of the question encoder are used to initialize the state of the answer decoder. Let y_n be the output of the decoder at step n, where 1 ≤ n ≤ N, y_0 be the special start-of-sentence token, and y^{emb}_n be the embedded representation of y_n. At decoder step n, the previous decoder hidden state h_{n-1} is used to attend over q^{tok} and d^{tok} to get the attention vectors h^{att,q}_n and h^{att,d}_n respectively. These two vectors retrieve the relevant features from the intermediate representations of the video-augmented question encoder and the dialogue context encoder, both of which are useful for generating the next token of the answer. At each decoder step, the decoder hidden state h_n is used to generate a distribution over the vocabulary, and the decoder output y^*_n is defined to be argmax_{y_n} p(y_n | y_{≤n-1}).

h_0 = q^{sen}    (21)
s^{q}_{n,k} = v_{ans,q}^{\top} \tanh\big( W_{ans,q} [h_{n-1} \oplus q^{tok}_k] \big), \quad \forall k \in \{1, \ldots, K\}    (22)
w^{q}_n = \mathrm{softmax}(s^{q}_n)    (23)
h^{att,q}_n = \sum_{k=1}^{K} w^{q}_{n,k} \, q^{tok}_k    (24)
s^{d}_{n,m} = v_{ans,d}^{\top} \tanh\big( W_{ans,d} [h_{n-1} \oplus d^{tok}_m] \big), \quad \forall m \in \{1, \ldots, M\}    (25)
w^{d}_n = \mathrm{softmax}(s^{d}_n)    (26)
h^{att,d}_n = \sum_{m=1}^{M} w^{d}_{n,m} \, d^{tok}_m    (27)
p(y_n \mid y_{\leq n-1}) = \mathrm{softmax}(W_{ans} h_n + b_{ans}), \quad \forall n \in \{1, \ldots, N\}    (29)

where h represents the local LSTM hidden states, and W_{ans,q}, W_{ans,d}, W_{ans}, b_{ans} are trainable variables.
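The following PyTorch sketch illustrates one decoding step with the two additive attentions of Equations 22-27 and the vocabulary distribution of Equation 29; how the attention vectors and the previous output embedding are fed into the decoder LSTM cell is an assumption of this sketch, and all module names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionDecoderStep(nn.Module):
    """One decoder step: additive attention over the question memory q_tok and the
    dialogue memory d_tok (Eqs. 22-27), then a vocabulary distribution (Eq. 29)."""
    def __init__(self, dim, vocab_size):
        super().__init__()
        self.att_q = nn.Linear(2 * dim, dim)      # W_{ans,q}
        self.v_q = nn.Linear(dim, 1, bias=False)  # v_{ans,q}
        self.att_d = nn.Linear(2 * dim, dim)      # W_{ans,d}
        self.v_d = nn.Linear(dim, 1, bias=False)  # v_{ans,d}
        # Feeding [y_emb ; h_att_q ; h_att_d] into the cell is an assumption.
        self.cell = nn.LSTMCell(3 * dim, dim)
        self.out = nn.Linear(dim, vocab_size)     # W_ans, b_ans

    def attend(self, h_prev, memory, att, v):
        # s_m = v^T tanh(W [h_prev ; memory_m]) for every memory position m.
        h_rep = h_prev.expand(memory.size(0), -1)
        scores = v(torch.tanh(att(torch.cat([h_rep, memory], dim=-1)))).squeeze(-1)
        weights = F.softmax(scores, dim=-1)
        return weights @ memory

    def forward(self, y_emb, state, q_tok, d_tok):
        h_prev, c_prev = state
        h_att_q = self.attend(h_prev, q_tok, self.att_q, self.v_q)
        h_att_d = self.attend(h_prev, d_tok, self.att_d, self.v_d)
        h, c = self.cell(torch.cat([y_emb, h_att_q, h_att_d], dim=-1).unsqueeze(0),
                         (h_prev.unsqueeze(0), c_prev.unsqueeze(0)))
        logits = self.out(h.squeeze(0))           # Eq. 29: softmax(W_ans h_n + b_ans)
        return F.log_softmax(logits, dim=-1), (h.squeeze(0), c.squeeze(0))
```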
Experiments
Dataset
We consider the Audio-Visual Scene-aware Dialog (AVSD) dataset (Alamri et al., 2019a) for evaluating our proposed model in single-turn and multi-turn VideoQA. We use the official release of the train set for training, and the public (i.e., prototype) validation and test sets for inference. The AVSD dataset is a collection of text-based human-human question answering dialogues based on the video clips from the CHARADES dataset (Sigurdsson et al., 2016). The CHARADES dataset contains video clips of daily indoor human activities, originally purposed for research in video activity classification and localization. Along with the video clips and associated question answering dialogues, the AVSD dataset also provides the pre-extracted I3D-RGB visual frame features using a pre-trained two-stream inflated 3D ConvNet (I3D) model (Carreira and Zisserman, 2017). The pre-trained I3D model was trained on the Kinetics dataset (Kay et al., 2017) for human action recognition.

Table 1: Data statistics of the AVSD dataset. We use the official training set, and the public (i.e., prototype) validation and test sets. We also present the average length of the question token sequences and the I3D-RGB frame feature sequences to highlight the importance of time-efficient video encoding without using a recurrent neural network. The sequence lengths of the questions and I3D-RGB frame features are denoted by K and L respectively in the model description (Section 3).
Experimental Setup
We implement our models using the Tensor2Tensor framework (Vaswani et al., 2018). The question and dialogue context tokens are both embedded with the same randomly-initialized word embedding matrix, which is also shared with the answer decoder's output embedding. The dimension of the word embedding is 256, the same dimension to which the I3D-RGB features are projected. All of our LSTM encoders and the decoder have 1 hidden layer. The Bahdanau attention mechanism (Bahdanau et al., 2015) is used in the answer decoder. During training, we apply a dropout rate of 0.2 in the encoder and decoder cells. We use the ADAM optimizer (Kingma and Ba, 2015) with α = 2 × 10^{-4}, β_1 = 0.85, β_2 = 0.997, ε = 10^{-6}, and clip the gradient with an L2 norm threshold of 2.0 (Pascanu et al., 2013). The models are trained for up to 100K steps with early stopping on the validation BLEU-4 score, using batch size 1024 on a single GPU. During inference, we use beam search decoding with beam width 3. We experimented with word embedding dimensions {256, 512}, dropout rates {0, 0.2}, Luong and Bahdanau attention mechanisms, and {1, 2} hidden layer(s) for both the encoders and the decoder. We found that the aforementioned setting worked best for most models.
Results
Comparison with Existing Methods
We evaluate our proposed approach using the same natural language generation evaluation toolkit, NLGEval (Sharma et al., 2017), as the previous approaches. The corpus-wide scores of the following unsupervised automated metrics are reported: BLEU-1 through BLEU-4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin and Och, 2004) and CIDEr (Vedantam et al., 2015). The results of our models in comparison with the previous approaches are shown in Table 2. We report the mean and standard deviation scores of 5 runs using random initialization and early stopping on the public (prototype) validation set. We apply our model in two scenarios: single-turn and multi-turn VideoQA. The only difference is that in single-turn VideoQA, the dialogue context encoder is excluded from the model.
First, we observe that our proposed multi-turn VideoQA model significantly outperforms the single-turn VideoQA model. This suggests that the additional dialogue context input can provide supplementary information beyond the question and visual features, and thus is helpful for generating the correct answer. Secondly, comparing the single-turn VideoQA models, our approach outperforms the existing approaches across all automatic evaluation metrics. This suggests the effectiveness of our proposed question-guided video representations for VideoQA. When comparing with previous multi-turn VideoQA models, our approach that uses the dialogue context (questions and answers in previous turns) yields state-of-the-art performance on the BLEU-3, BLEU-4, ROUGE-L and CIDEr metrics and competitive results on BLEU-1, BLEU-2 and METEOR. It is worth mentioning that our model does not use pre-trained word embeddings or audio features as in the previous hierarchical attention approach (Le et al., 2019).
Ablation Study and Weights Visualization
We perform ablation experiments on the validation set in the multi-turn VideoQA scenario to analyze the effectiveness of the two techniques in the question-guided video representation module. The results are shown in Table 3.
Question-Guided Per-Token Visual Feature Summarization (TokSumm)
Instead of using the token-level question representations x^{tok} = {x^{tok}_1, . . . , x^{tok}_K} to generate the per-token video summaries v = {v_1, . . . , v_K}, we experiment with using the sentence-level representation of the question x^{sen} as the query vector to attend over the I3D-RGB visual features to create a single visual summary v, and use v to augment each of the question tokens in the video-augmented question encoder.

s_l = \mathrm{trilinear}(x^{sen}, r_l), \quad \forall l \in \{1, \ldots, L\}    (30)
w^{att} = \mathrm{softmax}(s)    (31)
v = \sum_{l=1}^{L} w^{att}_l \, r_l    (32)
We observe that the performance degrades when the sentence-level video summary is used instead of the token-level video summary. Figure 3 shows an example of the attention weights in the question-guided per-token visual feature summarization. We can see that for different question tokens, the attention weights shift to focus on different segments in the sequence of video frame features.

Question-Guided Visual Feature Gating (Gating)
We also experiment with using the non-gated token-level video summary v = {v_1, . . . , v_K} to augment the question information in the video-augmented question encoder. We observe that the model's performance declines when the question-guided gating is not applied to the video summary features. Removing both the per-token visual feature summarization and the gating mechanism results in further degradation of model performance. Figure 4 illustrates the question-guided gate weights g for several example questions. We observe that the gate vectors corresponding to questions about similar subjects assign weights to similar dimensions of the visual feature. Although many of the visual feature dimensions have low weights across different questions, the feature dimensions with higher gate weights still exhibit certain topic-specific patterns.
Conclusion and Future Work
In this paper, we present an end-to-end trainable model for single-turn and multi-turn VideoQA.
Our proposed framework takes the question, I3D-RGB video frame features and dialogue context as input. Using the question information as guidance, the video features are summarized as compact representations to augment the question information, which are jointly used with the dialogue context to generate a natural language answer to the question. Specifically, our proposed question-guided video representation module is able to summarize the video features efficiently for each question token using an attention mechanism and perform feature selection through a gating mechanism. In empirical evaluation, our proposed models for single-turn and multi-turn VideoQA outperform existing approaches on several automatic natural language generation evaluation metrics. Detailed analyses are performed, and it is shown that our model effectively attends to relevant frames in the video feature sequence for summarization, and the gating mechanism shows topic-specific patterns in the feature dimension selection within a frame. In future work, we plan to extend the models to incorporate audio features and experiment with more advanced techniques to incorporate the dialogue context with the question and video information, such as hierarchical attention and co-attention mechanisms. We also plan to employ our model on TVQA, a larger scale VideoQA dataset.
Figure 2: Overview of the proposed approach. First the I3D-RGB frame features are extracted. The question-guided video representation module takes as input the question sentence and the I3D-RGB features, generates a video representation for each token and applies gating using the question as guidance. Then the question tokens are augmented by the per-token video representations and encoded by a bidirectional LSTM encoder. Similarly, the dialogue context is encoded by a bidirectional LSTM encoder. Finally, the LSTM answer decoder predicts the answer sequence.

Figure 3: Question-guided per-token visual feature summary weights on a question. Each row represents the attention weights w^{att}_k of the corresponding encoded question token x^{tok}_k over the I3D-RGB visual features. We can observe that the attention weights shift to focus on the relevant segment of the visual frame features for the question tokens "after the younger man leaves <eos>?"

Figure 4: Question-guided gate weights g for some example questions. Across questions about similar subjects, we observe a similar trend of weight distribution over visual feature dimensions. Conversely, questions about different topics show different gate weight patterns.

Table 2: Comparison with existing approaches: Naïve Fusion (Alamri et al., 2019b; Zhuang et al., 2019), Attentional Fusion (Hori et al., 2018; Zhuang et al., 2019), Multi-Source Sequence-to-Sequence model (Pasunuru and Bansal, 2019), Modified Attentional Fusion with Maximum Mutual Information objective (Zhuang et al., 2019) and Hierarchical Attention with pre-trained embedding (Le et al., 2019), on the AVSD public test set. For each approach, we report its corpus-wide scores on BLEU-1 through BLEU-4, METEOR, ROUGE-L and CIDEr. We report the mean and standard deviation scores of 5 runs using random initialization and early stopping on the public (prototype) validation set.

Table 3: Ablation study on the AVSD validation set. We observe that the performance degrades when either or both of the question-guided per-token visual feature summarization (TokSumm) and feature gating (Gating) techniques are removed.
| [] |
[
"L-VECTOR: NEURAL LABEL EMBEDDING FOR DOMAIN ADAPTATION",
"L-VECTOR: NEURAL LABEL EMBEDDING FOR DOMAIN ADAPTATION"
] | [
"Zhong Meng \nMicrosoft Corporation\nRedmondWAUSA\n",
"Hu Hu \nMicrosoft Corporation\nRedmondWAUSA\n\nGeorgia Institute of Technology\nAtlantaGAUSA\n",
"Jinyu Li \nMicrosoft Corporation\nRedmondWAUSA\n",
"Changliang Liu \nMicrosoft Corporation\nRedmondWAUSA\n",
"Yan Huang \nMicrosoft Corporation\nRedmondWAUSA\n",
"Yifan Gong \nMicrosoft Corporation\nRedmondWAUSA\n",
"Chin-Hui Lee \nGeorgia Institute of Technology\nAtlantaGAUSA\n"
] | [
"Microsoft Corporation\nRedmondWAUSA",
"Microsoft Corporation\nRedmondWAUSA",
"Georgia Institute of Technology\nAtlantaGAUSA",
"Microsoft Corporation\nRedmondWAUSA",
"Microsoft Corporation\nRedmondWAUSA",
"Microsoft Corporation\nRedmondWAUSA",
"Microsoft Corporation\nRedmondWAUSA",
"Georgia Institute of Technology\nAtlantaGAUSA"
] | [] | We propose a novel neural label embedding (NLE) scheme for the domain adaptation of a deep neural network (DNN) acoustic model with unpaired data samples from source and target domains. With NLE method, we distill the knowledge from a powerful sourcedomain DNN into a dictionary of label embeddings, or l-vectors, one for each senone class. Each l-vector is a representation of the senone-specific output distributions of the source-domain DNN and is learned to minimize the average L2, Kullback-Leibler (KL) or symmetric KL distance to the output vectors with the same label through simple averaging or standard back-propagation. During adaptation, the l-vectors serve as the soft targets to train the target-domain model with cross-entropy loss. Without parallel data constraint as in the teacher-student learning, NLE is specially suited for the situation where the paired target-domain data cannot be simulated from the source-domain data. We adapt a 6400 hours multi-conditional US English acoustic model to each of the 9 accented English (80 to 830 hours) and kids' speech (80 hours). NLE achieves up to 14.1% relative word error rate reduction over direct re-training with one-hot labels. | 10.1109/icassp40776.2020.9053300 | [
"https://arxiv.org/pdf/2004.13480v1.pdf"
] | 216,491,353 | 2004.13480 | fdfe0cfb98112d40c91e30e2d6b9be6565180860 |
L-VECTOR: NEURAL LABEL EMBEDDING FOR DOMAIN ADAPTATION
Zhong Meng
Microsoft Corporation, Redmond, WA, USA
Hu Hu
Microsoft Corporation, Redmond, WA, USA
Georgia Institute of Technology, Atlanta, GA, USA
Jinyu Li
Microsoft Corporation, Redmond, WA, USA
Changliang Liu
Microsoft Corporation, Redmond, WA, USA
Yan Huang
Microsoft Corporation, Redmond, WA, USA
Yifan Gong
Microsoft Corporation, Redmond, WA, USA
Chin-Hui Lee
Georgia Institute of Technology, Atlanta, GA, USA
Index Terms-deep neural network, label embedding, domain adaptation, teacher-student learning, speech recognition
We propose a novel neural label embedding (NLE) scheme for the domain adaptation of a deep neural network (DNN) acoustic model with unpaired data samples from source and target domains. With the NLE method, we distill the knowledge from a powerful source-domain DNN into a dictionary of label embeddings, or l-vectors, one for each senone class. Each l-vector is a representation of the senone-specific output distributions of the source-domain DNN and is learned to minimize the average L2, Kullback-Leibler (KL) or symmetric KL distance to the output vectors with the same label through simple averaging or standard back-propagation. During adaptation, the l-vectors serve as the soft targets to train the target-domain model with cross-entropy loss. Without the parallel data constraint of teacher-student learning, NLE is especially suited for the situation where paired target-domain data cannot be simulated from the source-domain data. We adapt a 6400-hour multi-conditional US English acoustic model to each of 9 accented English varieties (80 to 830 hours) and to kids' speech (80 hours). NLE achieves up to 14.1% relative word error rate reduction over direct re-training with one-hot labels.
INTRODUCTION
Deep neural networks (DNNs) [1,2,3] have greatly advanced the performance of automatic speech recognition (ASR) with a large amount of training data. However, the performance degrades when test data is from a new domain. Many DNN adaptation approaches were proposed to compensate for the acoustic mismatch between training and testing. In [4,5,6], regularization-based approaches restrict the neuron output distributions or the model parameters to stay not too far away from the source-domain model. In [7,8], transformation-based approaches reduce the number of learnable parameters by updating only the transform-related parameters. In [9,10], the trainable parameters are further reduced by singular value decomposition of weight matrices of a neural network. In addition, i-vector [11] and speaker-code [12,13] are used as auxiliary features to a neural network for model adaptation. In [14,15], these adaptation methods were further investigated in end-to-end ASR [16,17]. However, all these methods focus on addressing the overfitting issue given very limited adaptation data in the target-domain.
Teacher-student (T/S) learning [18,19,20,21] has been shown to be effective for large-scale unsupervised domain adaptation by minimizing the Kullback-Leibler (KL) divergence between the output distributions of the teacher and student models. The input to the teacher and student models needs to be parallel source- and target-domain adaptation data, respectively, since the output vectors of a teacher network need to be frame-by-frame aligned with those of the student network to construct the KL divergence between the two distributions. Compared to one-hot labels, the use of frame-level senone (tri-phone state) posteriors from the teacher as the soft targets to train the student model well preserves the relationships among different senones at the output of the teacher network. However, the parallel data constraint of T/S learning restricts its application to scenarios where the paired target-domain data can be easily simulated from the source-domain data (e.g., from clean to noisy speech). In many scenarios, the generation of parallel data in a new domain is almost impossible, e.g., simulating paired accented or kids' speech from standard adults' speech.

*Work performed during an internship at Microsoft.
Recently, adversarial learning [22,23] was proposed for domain-invariant training [24,25,26], speaker adaptation [27], speech enhancement [28,29,30] and speaker verification [31,32]. It was also shown to be effective for unsupervised domain adaptation without using parallel data [33,34]. In adversarial learning, an auxiliary domain classifier is jointly optimized with the source model to mini-maximize an adversarial loss. A deep representation is learned to be invariant to domain shifts and discriminative for senone classification. However, adversarial learning does not make use of the target-domain labels, which carry important class identity information, and is only suitable for the situation where neither parallel data nor target-domain labels are available.
How can we perform effective domain adaptation using unpaired source- and target-domain data with labels? We propose a neural label embedding (NLE) method: instead of the frame-by-frame knowledge transfer in T/S learning, we distill the knowledge of a source-domain model into a fixed set of label embeddings, or l-vectors, one for each senone class, and then transfer the knowledge to the target-domain model via these senone-specific l-vectors. Each l-vector is a condensed representation of the DNN output distributions given all the features aligned with the same senone at the input. A simple DNN-based method is proposed to learn the l-vectors by minimizing the average L2, Kullback-Leibler (KL) and symmetric KL distance to the output vectors with the same senone label. During adaptation, the l-vectors are used in lieu of their corresponding one-hot labels to train the target-domain model with cross-entropy loss.
NLE can be viewed as a knowledge quantization [35] in the form of output-distribution vectors, where each l-vector is a code-vector (centroid) corresponding to a senone codeword. With the NLE method, knowledge is transferred from the source-domain model to the target domain through a fixed codebook of senone-specific l-vectors instead of the variable-length frame-specific output-distribution vectors in T/S learning. These distilled l-vectors decouple the target-domain model's output distributions from those of the source-domain model and thus enable a more flexible and efficient senone-level knowledge transfer using unpaired data. When parallel data is available, compared to T/S learning, NLE significantly reduces the computational cost during adaptation by replacing the forward-propagation of each source-domain frame through the source-domain model with a fast look-up in the l-vector codebook. In the experiments, we adapt a multi-conditional acoustic model trained with 6400 hours of US English to each of 9 different English accents (120 hours to 830 hours) and to kids' speech (80 hours). The proposed NLE method achieves 5.4% to 14.1% and 6.0% relative word error rate (WER) reductions over the one-hot label baseline on the 9 accented English test sets and kids' speech, respectively.
NEURAL LABEL EMBEDDING (NLE) FOR DOMAIN ADAPTATION
In this section, we present the NLE method for domain adaptation without using parallel data. Initially, we have a well-trained source-domain network $M^S$ with parameters $\theta^S$ predicting a set of senones $C$, and source-domain speech frames $X^S = \{x^S_1, \ldots, x^S_{N_S}\}$ with senone labels $Y^S = \{y^S_1, \ldots, y^S_{N_S}\}$.
We distill the knowledge of this powerful source-domain model into a dictionary of l-vectors, one for each senone label (class) predicted at the output layer. Each l-vector has the same dimensionality as the number of senone classes. Before training the target-domain model $M^T$ with parameters $\theta^T$, we query the dictionary with the ground-truth one-hot senone labels $Y^T = \{y^T_1, \ldots, y^T_{N_T}\}$ of the target-domain speech frames $X^T = \{x^T_1, \ldots, x^T_{N_T}\}$ to get their corresponding l-vectors.
During adaptation, in place of the one-hot labels, the l-vectors are used as the soft targets to train the target-domain model. For NLE domain adaptation, the source-domain data $X^S$ does not have to be parallel to the target-domain speech frames $X^T$, i.e., $X^S$ and $X^T$ do not have to be frame-by-frame synchronized, and the number of frames $N_S$ does not have to be equal to $N_T$.
The key step of the NLE method is to learn the l-vectors from the source-domain model and data. As the carrier of knowledge transferred from the source domain to the target domain, the l-vector $e_c$ of a senone class $c$ should be a representation of the output distributions (senone-posterior distributions) of the source-domain DNN given features aligned with senone $c$ at the input, encoding the dependency between senone $c$ and all the other senones $C \setminus c$. A reasonable candidate is the centroid vector that minimizes the average distance to the output vectors generated from all the frames aligned with senone $c$. Therefore, we need to learn a dictionary of $|C|$ l-vectors corresponding to the $|C|$ senones in the complete set $C$, with each l-vector being $|C|$-dimensional. To serve as the training target of the target-domain model, the l-vector $e_c$ needs to be normalized such that its elements satisfy
$$e_{c,i} > 0, \qquad \sum_{i=1}^{|C|} e_{c,i} = 1. \qquad (1)$$
NLE Based on L2 Distance Minimization (NLE-L2)
To compute the senone-specific centroid, the most intuitive solution is to minimize the average L2 distance between the centroid and all the output vectors with the same senone label, which is equivalent to calculating the arithmetic mean of the output vectors aligned with that senone. Let $o^S_n$ denote the $|C|$-dimensional output vector of $M^S$ given the input frame $x^S_n$; $o^S_{n,i}$ equals the posterior probability of senone $i$ given $x^S_n$, i.e., $o^S_{n,i} = P(i|x^S_n; \theta^S), i \in C$. For senone $c$, the l-vector $\hat{e}_c$ based on L2 distance minimization is computed as
$$\hat{e}_c = \frac{1}{N_{S,c}} \sum_{n=1}^{N_S} o^S_n \, \mathbb{1}[x^S_n \in \text{senone } c], \quad c \in C, \qquad (2)$$
where $N_{S,c}$ is the number of source-domain frames aligned with senone $c$ and $N_S = \sum_{c \in C} N_{S,c}$. The l-vectors under NLE-L2 are automatically normalized, since each posterior vector $o^S_n$ in the mean computation satisfies Eq. (1).
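To make the NLE-L2 step concrete, the following sketch (not from the paper; array shapes, variable names, and the use of NumPy are our own assumptions) simply accumulates the senone-wise arithmetic means of the frozen source-model posteriors, i.e., Eq. (2):

```python
import numpy as np

def nle_l2_lvectors(posteriors, labels, num_senones):
    """NLE-L2: the l-vector of senone c is the mean of the posteriors aligned with c (Eq. 2).

    posteriors: (N_S, |C|) array; row n is the source model's output vector o_n^S.
    labels:     (N_S,) integer array; senone alignment y_n^S of each frame.
    Returns a (num_senones, |C|) matrix whose row c is the l-vector e_c.
    """
    lvectors = np.zeros((num_senones, posteriors.shape[1]))
    for c in range(num_senones):
        mask = labels == c
        if mask.any():
            lvectors[c] = posteriors[mask].mean(axis=0)
    return lvectors
```

Because each row of `posteriors` already sums to one, every resulting l-vector automatically satisfies the normalization constraint in Eq. (1), as noted above.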
NLE Based on KL Distance Minimization (NLE-KL)
KL divergence is an effective metric to measure the distance between two distributions. In the NLE framework, the l-vector $e_c$ can be learned as a centroid with a minimum average KL distance to the output vectors of senone $c$. Many methods have been proposed to iteratively compute the centroid under KL distance [36,37,38]. In this paper, we propose a simple DNN-based solution to compute this KL-based centroid. As shown in Fig. 1, we have an initial $|C| \times |C|$ embedding matrix $E$ consisting of all the l-vectors, i.e., $E = [e_1, \ldots, e_{|C|}]$. For each source-domain sample, we look up the senone label $y^S_n$ in $E$ to get its l-vector $e_{y^S_n}$ and forward-propagate $x^S_n$ through $M^S$ to obtain the output vector $o^S_n$. The KL distance between $o^S_n$ and its corresponding centroid l-vector $e_{y^S_n}$ is
$$KL(e_{y^S_n} \| o^S_n) = \sum_{i=1}^{|C|} e_{y^S_n,i} \log \frac{e_{y^S_n,i}}{o^S_{n,i}}. \qquad (3)$$
We sum up all the KL distances and get the KL distance loss below
$$\mathcal{L}_{\text{NLE-KL}}(E) = \frac{1}{N_S} \sum_{n=1}^{N_S} KL(e_{y^S_n} \| o^S_n). \qquad (4)$$
To ensure each l-vector is normalized to satisfy Eq. (1), we perform a softmax operation over a logit vector $z_c \in \mathbb{R}^{|C|}$ to obtain $e_c$ as below:

$$e_{c,i} = \frac{\exp(z_{c,i})}{\sum_{j=1}^{|C|} \exp(z_{c,j})}, \quad c \in C. \qquad (5)$$
For fast convergence, $z_c$ is initialized with the arithmetic mean of the pre-softmax logit vectors of the source-domain network aligned with senone $c$. The embedding matrix $E$ is trained to minimize $\mathcal{L}_{\text{NLE-KL}}(E)$ by updating $z_1, \ldots, z_{|C|}$ through standard back-propagation while the parameters $\theta^S$ of $M^S$ are fixed.
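The paper gives no code, so the following PyTorch sketch is only an illustration of the DNN-based centroid learning described above; the class and function names, the epsilon smoothing, the optimizer, and the batching are our own assumptions:

```python
import torch
import torch.nn.functional as F

class NLEKLEmbedding(torch.nn.Module):
    """One trainable logit vector z_c per senone; e_c = softmax(z_c) as in Eq. (5)."""

    def __init__(self, init_logits):
        # init_logits: (|C|, |C|) tensor, e.g. senone-wise means of the source
        # model's pre-softmax logits, following the fast-convergence initialization.
        super().__init__()
        self.logits = torch.nn.Parameter(init_logits.clone())

    def forward(self, senone_ids):
        # senone_ids: (B,) labels y_n^S; returns the (B, |C|) l-vectors e_{y_n^S}.
        return F.softmax(self.logits[senone_ids], dim=-1)

def nle_kl_loss(lvecs, posteriors, eps=1e-8):
    """Average KL(e_{y_n^S} || o_n^S) over a batch, Eqs. (3)-(4)."""
    log_ratio = torch.log(lvecs + eps) - torch.log(posteriors + eps)
    return (lvecs * log_ratio).sum(dim=-1).mean()

# Hypothetical usage: `loader` yields frozen source-model posteriors and senone labels.
# embedding = NLEKLEmbedding(init_logits)
# optim = torch.optim.Adam(embedding.parameters(), lr=1e-3)
# for posteriors, labels in loader:
#     loss = nle_kl_loss(embedding(labels), posteriors)
#     optim.zero_grad(); loss.backward(); optim.step()
```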
NLE Based on Symmetric KL Distance Minimization (NLE-SKL)
One shortcoming of the KL distance is that it is asymmetric: the minimization of $KL(e_{y^S_n} \| o^S_n)$ does not guarantee that $KL(o^S_n \| e_{y^S_n})$ is also minimized. SKL compensates for this by adding the two KL terms together and is thus a more robust distance metric for clustering. Therefore, for each senone, we learn a centroid l-vector with a minimum average SKL distance to the output vectors of $M^S$ aligned with that senone by following the same DNN-based method in Section 2.2, except for replacing the KL distance loss with an SKL one.

The SKL distance between an l-vector $e_{y^S_n}$ and an output vector $o^S_n$ is defined as
$$SKL(e_{y^S_n} \| o^S_n) = \sum_{i=1}^{|C|} \left( e_{y^S_n,i} - o^S_{n,i} \right) \log \frac{e_{y^S_n,i}}{o^S_{n,i}}, \qquad (6)$$
and the SKL distance loss is computed by summing up all pairs of SKL distances between output vectors and their centroids as follows
$$\mathcal{L}_{\text{NLE-SKL}}(E) = \frac{1}{N_S} \sum_{n=1}^{N_S} SKL(e_{y^S_n} \| o^S_n). \qquad (7)$$
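Relative to the KL sketch above, only the loss changes; a hedged illustration of Eqs. (6)-(7) (again our own code, not the authors') is:

```python
import torch

def nle_skl_loss(lvecs, posteriors, eps=1e-8):
    """Average symmetric KL distance SKL(e_{y_n^S} || o_n^S) over a batch, Eqs. (6)-(7)."""
    log_ratio = torch.log(lvecs + eps) - torch.log(posteriors + eps)
    return ((lvecs - posteriors) * log_ratio).sum(dim=-1).mean()
```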
Train Target-Domain Model with NLE
As the condensed knowledge distilled from a large amount of source-domain data, the l-vectors serve as the soft targets for training the target-domain model $M^T$. As shown in Fig. 2, we look up the target-domain label $y^T_n$ in the optimized label embedding matrix $\hat{E}$ for its l-vector $\hat{e}_{y^T_n}$ and forward-propagate $x^T_n$ through $M^T$ to get the output vector $o^T_n$. We construct a cross-entropy loss using the l-vectors $\hat{e}_{y^T_n}$ as the soft targets below:
$$\mathcal{L}_{\text{CE}}(\theta^T) = -\frac{1}{N_T} \sum_{n=1}^{N_T} \sum_{i=1}^{|C|} \hat{e}_{y^T_n,i} \log o^T_{n,i}, \qquad (8)$$
where $o^T_{n,i} = P(i|x^T_n, \theta^T), i \in C$ is the posterior of senone $i$ given $x^T_n$. We train $M^T$ to minimize $\mathcal{L}_{\text{CE}}(\theta^T)$ by updating only $\theta^T$. The optimized $M^T$ with $\hat{\theta}^T$ is used for decoding.
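A minimal sketch of this loss (our own illustration; the log-softmax convention and the variable names are assumptions, not taken from the paper) shows how the one-hot target is replaced by a lookup into the frozen l-vector dictionary:

```python
import torch

def nle_soft_ce_loss(log_posteriors, lvector_table, labels):
    """Cross-entropy with l-vectors as soft targets, Eq. (8).

    log_posteriors: (B, |C|) log-softmax outputs of the target-domain model M^T.
    lvector_table:  (|C|, |C|) frozen dictionary of learned l-vectors (rows sum to 1).
    labels:         (B,) ground-truth senone ids y_n^T, used only to index the table.
    """
    soft_targets = lvector_table[labels]    # fast codebook lookup instead of a source-model forward pass
    return -(soft_targets * log_posteriors).sum(dim=-1).mean()
```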
Compared with the traditional one-hot training targets that convey only class identities, the soft l-vectors transfer additional quantized knowledge that encodes the probabilistic relationships among different senone classes. Benefiting from this, the NLE-adapted acoustic model is expected to achieve higher ASR performance than using one-hot labels on target-domain test data. The steps of NLE for domain adaptation are summarized in Algorithm 1.
Algorithm 1 Neural Label Embedding (NLE) for Domain Adaptation
Input: Source-domain model $M^S$, data $X^S$, and labels $Y^S$; target-domain data $X^T$ and labels $Y^T$.
Output: Target-domain model $M^T$ with parameters $\hat{\theta}^T$.
1: Forward-propagate $X^S$ through $M^S$ to generate output vectors $O^S$.
2: Learn label embedding matrix $\hat{E}_{L2}$ by computing senone-specific arithmetic means of $O^S$ as in Eq. (2).
3: repeat
4:   Forward-propagate $X^S$ through $M^S$ to generate $O^S$.
5:   Look up each $y^S_n$ in $E_{KL}$ or $E_{SKL}$ for l-vector $e_{y^S_n}$.
6:   Compute and back-propagate the error signal of loss $\mathcal{L}_{\text{NLE-KL}}$ in Eq. (4) or $\mathcal{L}_{\text{NLE-SKL}}$ in Eq. (7) by updating $E_{KL}$ or $E_{SKL}$, respectively.
7: until convergence
8: repeat
9:   Forward-propagate $X^T$ through $M^T$ to generate output vectors $O^T$.
10:  Look up each $y^T_n$ in $\hat{E}_{L2}$, $\hat{E}_{KL}$ or $\hat{E}_{SKL}$ for l-vector $\hat{e}_{y^T_n}$.
11:  Compute and back-propagate the error signal of loss $\mathcal{L}_{\text{CE}}$ in Eq. (8) by updating $\theta^T$.
12: until convergence
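As a rough illustration of steps 8-12 of Algorithm 1 (our own sketch, not the authors' code; the model, data loader, optimizer settings, and the use of a fixed number of epochs instead of a convergence test are all assumptions), the target-domain adaptation loop could look like:

```python
import torch
import torch.nn.functional as F

def adapt_target_model(target_model, target_loader, lvector_table, epochs=4, lr=1e-4):
    """Steps 8-12 of Algorithm 1: train M^T against the frozen l-vector dictionary."""
    lvector_table = lvector_table.detach()               # the l-vector codebook is fixed here
    optim = torch.optim.Adam(target_model.parameters(), lr=lr)
    for _ in range(epochs):                               # stand-in for "repeat ... until convergence"
        for features, labels in target_loader:            # target-domain frames x_n^T and senone labels y_n^T
            log_post = F.log_softmax(target_model(features), dim=-1)   # step 9: forward-propagate X^T
            soft_targets = lvector_table[labels]                        # step 10: dictionary lookup
            loss = -(soft_targets * log_post).sum(dim=-1).mean()        # step 11: Eq. (8)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return target_model
```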
EXPERIMENTS
We perform two domain adaptation tasks where parallel source- and target-domain data is not accessible through data simulation: 1) adapt a US English acoustic model to accented English from 9 areas of the world; 2) adapt the same acoustic model to kids' speech. In both tasks, the source-domain training data is 6400 hours of multi-conditional Microsoft US English production data, including Cortana, xBox and Conversation data. The data is collected mostly from adults from all over the US. It is a mixture of close-talk and far-field utterances from a variety of devices.
For the first task, the adaptation data consists of 9 different types of accented English A1-A9 in which A1, A2, A3, A8 are from Europe, A4, A5, A6 are from Asia, A7 is from Oceania, A9 is from North America. A7-A9 are native accents because they are from countries where most people use English as their first language. On the contrary, A1-A6 are non-native accents. Each English accent forms a specific target domain. For the second task, the adaptation data is 80 hours of US English speech collected from kids. The durations of different adaptation and test data are listed in Table 1. The training and adaptation data is transcribed. All data is anonymized with personally identifiable information removed.
Baseline System
We train a source-domain bi-directional long short-term memory (BLSTM)-hidden Markov model acoustic model [39,40,41] with the 6400 hours of source-domain training data described above.

Table 1. Durations (hours) of adaptation and test data for each of the 9 accented English (A1-A9) and kids' speech.

Task    A1   A2   A3   A4   A5   A6   A7   A8   A9   Kids
Adapt   160  140  190  120  150  830  250  330  150  80
Test    11   8    11   7    11   11   11   11   13   3

For accent adaptation, we train an accent-dependent BLSTM for each accented English using one-hot labels with cross-entropy loss. Each accent-dependent model is trained with the speech of only one accent. As shown in Table 2, the one-hot re-training achieves 9.71% to 20.37% WERs on the different accents. For kids adaptation, we train a kids-dependent BLSTM using kids' speech with one-hot labels. In Table 2, we see that one-hot re-training achieves a 26.99% WER on the kids test data. We use these results as the baseline.
Note that, in this work, we do not compare NLE with KLD adaptation [4] since the effectiveness of KLD regularization reduces as the adaptation data increases and it is normally used when the adaptation data is very small (10 min or less).
NLE for Accent Adaptation
It is hard to simulate parallel accented speech from US English. We adapt the 6400-hour BLSTM acoustic model to 9 different English accents using NLE. We learn 9404-dimensional l-vectors using NLE-L2, NLE-KL, and NLE-SKL as described in Sections 2.2 and 2.3 with the source-domain data and acoustic model. These l-vectors are used as the soft targets to train the accent-dependent models with cross-entropy loss as in Section 2.4.
As shown in Table 2, NLE-L2, NLE-KL, and NLE-SKL achieve 9.48% to 18.54%, 9.43% to 18.74%, and 9.19% to 17.97% WERs, respectively, on different accents. NLE-SKL performs the best among the three NLE adaptation methods, with 11.8%, 14.1%, 11.5%, 10.3%, 8.3%, 7.5%, 13.5%, 12.2%, and 5.4% relative WER reductions over the one-hot label baseline on A1 to A9, respectively. NLE-SKL consistently outperforms NLE-L2 and NLE-KL on all the accents, with up to 4.0% and 4.9% relative WER reductions over NLE-L2 and NLE-KL, respectively. The relative reductions for native and non-native accents are similar except for A9. NLE-KL performs slightly better than NLE-L2 on 6 out of 9 accents, but slightly worse than NLE-L2 on the other 3. All the three NLE methods achieve much smaller relative WER reductions (about 5%) on A9 than the other accents (about 10%). This is reasonable because North American English is much more similar to the source-domain US English than the other accents. The source-domain model is not adapted much to the accent of the target-domain speech.
NLE for Kid Adaptation
Parallel kids' speech cannot be obtained through data simulation either. We adapt the 6400-hour BLSTM acoustic model to the collected real kids' speech using NLE. We use the same l-vectors learned in Section 3.2 as the soft targets to train the kid-dependent BLSTM acoustic model by minimizing the cross-entropy loss. As shown in Table 2, NLE-L2, NLE-KL, and NLE-SKL achieve 25.93%, 25.83%, and 25.36% WERs on the kids' test set, respectively. NLE-SKL outperforms the other two NLE methods with a 6.0% relative WER reduction over the one-hot baseline. We find that NLE is more effective for accent adaptation than for kids adaptation. One possible reason is that a portion of the kids are teenagers, whose speech is very similar to that of the adults in the 6400 hours of source-domain data. Note that all the kids' speech is collected in the US and no accent adaptation is involved.
CONCLUSION
We propose a novel neural label embedding method for domain adaptation. Each senone label is represented by an l-vector that minimizes the average L2, KL or SKL distance to all the source-domain output vectors aligned with the same senone. The l-vectors are learned through a simple average or a proposed DNN-based method. During adaptation, the l-vectors serve as the soft targets to train the target-domain model. Without the parallel data constraint of T/S learning, NLE is especially suited for the situation where paired target-domain data samples cannot be simulated from the source-domain ones. Given parallel data, NLE has significantly lower computational cost than T/S learning during adaptation since it replaces the DNN forward-propagation with a fast dictionary lookup.
We adapt a multi-conditional BLSTM acoustic model trained with 6400 hours of US English to 9 different English accents and to kids' speech. NLE achieves 5.4% to 14.1% and 6.0% relative WER reductions over the one-hot label baseline. NLE-SKL consistently outperforms NLE-L2 and NLE-KL on all adaptation tasks, by up to 4.0% and 4.9% relative, respectively. As a simple arithmetic mean, NLE-L2 performs similarly to NLE-KL with a dramatically reduced computational cost for l-vector learning.
Fig. 1. The diagram of learning neural label embeddings (l-vectors) through KL or SKL minimization. Only modules with red dotted lines are updated.

Fig. 2. Training the target-domain model using label embeddings (l-vectors). Only modules with red dotted lines are updated.
REFERENCES

[1] G. Hinton, L. Deng, D. Yu, et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[2] T. Sainath, B. Kingsbury, B. Ramabhadran, et al., "Making deep belief networks effective for large vocabulary continuous speech recognition," in Proc. ASRU, 2011, pp. 30-35.
[3] L. Deng, J. Li, J. Huang, et al., "Recent advances in deep learning for speech research at Microsoft," in ICASSP, 2013.
[4] D. Yu, K. Yao, H. Su, et al., "KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition," in Proc. ICASSP, May 2013.
[5] Z. Huang, S. Siniscalchi, I. Chen, et al., "Maximum a posteriori adaptation of network parameters in deep models," in Proc. Interspeech, 2015.
[6] H. Liao, "Speaker adaptation of context dependent deep neural networks," in Proc. ICASSP, May 2013.
[7] R. Gemello, F. Mana, S. Scanzio, et al., "Linear hidden transformations for adaptation of hybrid ANN/HMM models," Speech Communication, vol. 49, no. 10, pp. 827-835, 2007.
[8] F. Seide, G. Li, X. Chen, and D. Yu, "Feature engineering in context-dependent deep neural networks for conversational speech transcription," in Proc. ASRU, Dec 2011, pp. 24-29.
[9] J. Xue, J. Li, and Y. Gong, "Restructuring of deep neural network acoustic models with singular value decomposition," in Interspeech, 2013.
[10] J. Xue, J. Li, D. Yu, et al., "Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network," in Proc. ICASSP, May 2014.
[11] G. Saon, H. Soltau, et al., "Speaker adaptation of neural network acoustic models using i-vectors," in ASRU, 2013.
[12] O. Abdel-Hamid and H. Jiang, "Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code," in Proc. ICASSP, May 2013.
[13] S. Xue, O. Abdel-Hamid, H. Jiang, et al., "Fast adaptation of deep neural network based on discriminant codes for speech recognition," in TASLP, vol. 22, no. 12, Dec 2014.
[14] F. Weninger, J. Andrés-Ferrer, X. Li, et al., "Listen, attend, spell and adapt: Speaker adapted sequence-to-sequence ASR," Proc. Interspeech, 2019.
[15] Z. Meng, Y. Gaur, J. Li, et al., "Speaker adaptation for attention-based end-to-end speech recognition," Proc. Interspeech, 2019.
[16] J. K. Chorowski, D. Bahdanau, D. Serdyuk, et al., "Attention-based models for speech recognition," in NIPS, 2015, pp. 577-585.
[17] Z. Meng, Y. Gaur, J. Li, and Y. Gong, "Character-aware attention-based end-to-end speech recognition," in Proc. ASRU. IEEE, 2019.
[18] J. Li, R. Zhao, J.-T. Huang, et al., "Learning small-size DNN with output-distribution-based criteria," in Proc. INTERSPEECH, 2014, pp. 1910-1914.
[19] J. Li, M. L. Seltzer, X. Wang, et al., "Large-scale domain adaptation via teacher-student learning," in INTERSPEECH, 2017.
[20] Z. Meng, J. Li, Y. Zhao, et al., "Conditional teacher-student learning," in Proc. ICASSP, 2019.
[21] Z. Meng, J. Li, Y. Gaur, et al., "Domain adaptation via teacher-student learning for end-to-end speech recognition," in Proc. ASRU. IEEE, 2019.
[22] I. Goodfellow, J. Pouget-Abadie, et al., "Generative adversarial nets," in Proc. NIPS, 2014, pp. 2672-2680.
[23] Y. Ganin and V. Lempitsky, "Unsupervised domain adaptation by backpropagation," in Proc. ICML, Lille, France, 2015, vol. 37, pp. 1180-1189, PMLR.
[24] Y. Shinohara, "Adversarial multi-task learning of deep neural networks for robust speech recognition," in INTERSPEECH, 2016, pp. 2369-2372.
[25] Z. Meng, J. Li, Z. Chen, et al., "Speaker-invariant training via adversarial learning," in Proc. ICASSP, 2018.
[26] Z. Meng, J. Li, Y. Gong, et al., "Adversarial teacher-student learning for unsupervised domain adaptation," in Proc. ICASSP. IEEE, 2018, pp. 5949-5953.
[27] Z. Meng, J. Li, and Y. Gong, "Adversarial speaker adaptation," in Proc. ICASSP, 2019.
[28] S. Pascual, A. Bonafonte, et al., "SEGAN: Speech enhancement generative adversarial network," in Interspeech, 2017.
[29] Z. Meng, J. Li, and Y. Gong, "Cycle-consistent speech enhancement," Interspeech, 2018.
[30] Z. Meng, J. Li, and Y. Gong, "Adversarial feature-mapping for speech enhancement," Interspeech, 2018.
[31] Q. Wang, W. Rao, S. Sun, et al., "Unsupervised domain adaptation via domain adversarial training for speaker recognition," ICASSP, 2018.
[32] Z. Meng, Y. Zhao, J. Li, and Y. Gong, "Adversarial speaker verification," in Proc. ICASSP, 2019.
[33] S. Sun, B. Zhang, L. Xie, et al., "An unsupervised deep domain adaptation approach for robust speech recognition," Neurocomputing, vol. 257, pp. 79-87, 2017.
[34] Z. Meng, Z. Chen, V. Mazalov, J. Li, and Y. Gong, "Unsupervised adaptation with domain separation networks for robust speech recognition," in Proc. ASRU, 2017.
[35] R. Gray, "Vector quantization," IEEE ASSP Magazine, vol. 1, no. 2, pp. 4-29, 1984.
[36] K. Chaudhuri and A. McGregor, "Finding metric structure in information theoretic clustering," in COLT. Citeseer, 2008, vol. 8, p. 10.
[37] R. Veldhuis, "The centroid of the symmetrical Kullback-Leibler distance," IEEE Signal Processing Letters, vol. 9, 2002.
[38] M. Das Gupta, S. Srinivasa, M. Antony, et al., "KL divergence based agglomerative clustering for automated vitiligo grading," in Proc. CVPR, 2015, pp. 2700-2709.
[39] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory recurrent neural network architectures for large scale acoustic modeling," in Interspeech, 2014.
[40] H. Erdogan, T. Hayashi, J. R. Hershey, et al., "Multi-channel speech recognition: LSTMs all the way through," in CHiME-4 workshop, 2016, pp. 1-4.
[41] Z. Meng, S. Watanabe, J. R. Hershey, et al., "Deep long short-term memory adaptive beamforming networks for multichannel robust speech recognition," in ICASSP, 2017, pp. 271-275.
| [] |
[
"Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays",
"Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays"
] | [
"Priyanka Dey \nComputer Science Department\nDepartment of Linguistics, Computer Science Department\nUniversity of Illinois\nUrbana-Champaign\n",
"Roxana Girju girju@illinois.edu \nBeckman Institute\nUniversity of Illinois\nUrbana-Champaign\n"
] | [
"Computer Science Department\nDepartment of Linguistics, Computer Science Department\nUniversity of Illinois\nUrbana-Champaign",
"Beckman Institute\nUniversity of Illinois\nUrbana-Champaign"
] | [] | One important aspect of language is how speakers generate utterances and texts to convey their intended meanings. In this paper, we bring various aspects of the Construction Grammar (CxG) and the Systemic Functional Grammar (SFG) theories in a deep learning computational framework to model empathic language. Our corpus consists of 440 essays written by premed students as narrated simulated patient-doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of firstperson narrative essays. | 10.48550/arxiv.2302.01839 | [
"https://export.arxiv.org/pdf/2302.01839v1.pdf"
] | 256,598,048 | 2302.01839 | ef263e6f6e6c95980ee2bb6c85c49e41a67c9b51 |
Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays
Priyanka Dey
Computer Science Department
Department of Linguistics, Computer Science Department
University of Illinois
Urbana-Champaign
Roxana Girju girju@illinois.edu
Beckman Institute
University of Illinois
Urbana-Champaign
One important aspect of language is how speakers generate utterances and texts to convey their intended meanings. In this paper, we bring various aspects of the Construction Grammar (CxG) and the Systemic Functional Grammar (SFG) theories in a deep learning computational framework to model empathic language. Our corpus consists of 440 essays written by premed students as narrated simulated patient-doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of firstperson narrative essays.
Introduction
Much of our everyday experience is shaped and defined by actions and events, thoughts and perceptions which can be accounted for in different ways in the system of language. The grammatical choices we make when writing an essay (i.e., pronoun use, active or passive verb phrases, sentence construction) differ from those we use to email someone, or those we utter in a keynote speech. "Word choice and sentence structure are an expression of the way we attend to the words of others, the way we position ourselves in relation to others" (Micciche, 2004). Such choices allow us to compare not only the various options available in the grammar, but also what is expressed in discourse with what is suppressed (Menéndez, 2017).
Given the great variability in the modes of expression of languages, the search for an adequate design of grammar has long motivated research in linguistic theory. One such approach is CxG (Kay et al., 1999;Goldberg, 1995;Fillmore et al., 2006) which prioritizes the role of constructions, conventional form-meaning pairs, in the continuum between lexis and syntax (Van Valin, 2007). As such, these constructions form a structured inventory of speakers' knowledge of the conventions of their language (Langacker, 1987).
Another particular grammatical facility for capturing experience in language is Halliday's system of transitivity as part of the Systemic Functional Grammar (SFG) (Halliday, 1994;Halliday et al., 2014), a theory of language centred around the notion of language function. SFG pays great attention to how speakers generate utterances and texts to convey their intended meanings. This can make our writing effective, but also give the audience a sense of our own personality. However, unlike CxG, Halliday's system of transitivity describes the way in which the world of our experience is divided by grammar into a 'manageable set of process types' (Halliday et al., 2014) each offering not only a form-meaning mapping, but also a range of stylistic options for the construal of any given experience through language. In stylistics, researchers have used this model to uncover and study the grammatical patterns through which texts can enact a particular ideology, or an individual's distinctive 'mind style' of language (Fowler, 1996).
The idea of 'style as choice' in Halliday's transitivity system can be best understood as experiential strategies (like avoiding material processes or repeating passive voice constructions) such as those identified as contributing to a reduced sense of awareness, intentionality or control in the human agent responsible (Fowler, 2013;Simpson and Canning, 2014). Such an individual is often said to appear 'helpless' and 'detached' (Halliday, 2019;Simpson, 2003), or 'disembodied' (Hoover, 2004). Take for instance, construction choices like 'I reassured her' vs. 'She was reassured', or "I greeted her upon entrance" vs. "The nurse greeted her upon entrance" vs. "She was greeted upon entrance" -which show the degree of agency and intended involvement on the part of the agent in the action. Such linguistic choices often occur together in stylistic profiling exercises to showcase the techniques contributing to 'passivity', or the degree of suppression of agency and power in characterisation (Kies, 1992).
In this paper, we try to bring CxG and SFG closer together in the study of discourse level construction of arguments for the analysis of empathic content of narrative essays. Specifically, inspired by research in critical discourse analysis, we are taking a step further to show ways in which such construction choices can manipulate (and even reduce) the attention we give to the agency and moral responsibility of individuals (Jeffries, 2017;Van Dijk, 2017). Specifically, such form-meaning-style mappings can be used to capture the point of view as an aspect of narrative organization and the perspective through which a story is told, the way the characters are portrayed in terms of their understanding of the processes they are involved in, as well as their own participation in the story. In this respect, "narratives seem necessary for empathy [..] they give us access to contexts that are broader than our own contexts and that allow us to understand a broad variety of situations" (Gallagher, 2012). They provide a form/structure that allows us to frame an understanding of others, together with a learned set of skills and practical knowledge that shapes our understanding of what we and others are experiencing.
Drawing on Halliday's transitivity framework rooted in Systemic Functional Linguistics, this paper attempts to reveal the (dis)engaged style of empathic student essays from a semantic-grammatical point of view. Specifically, we want to investigate how certain types of processes (i.e., verbs) and constructions (i.e., passive voice) function to cast the essay writers (as main protagonists and agents) as perhaps rather ineffectual, passive, and detached observers of the events around them and of the patient's emotional states.
We take a narrative approach to empathy and explore the experiences of premed students at a large university by analysing their self-reflective writing portfolios consisting of a corpus of firstperson essays written by them as narrated simulated patient-doctor interactions. The corpus has been previously annotated and organized (Shi et al., 2021;Michalski and Girju, 2022) following established practices and theoretical conceptualizations in psychology (Cuff et al., 2016;Eisenberg et al., 2006;Rameson et al., 2012). Computationally, we introduce a set of informative baseline experiments using state-of-the-art recurrent neural networks and transformer models for classifying the various forms of empathy. As initial experiments show relatively low scores, we measure the presence of several grammatical structures, leveraging Halliday's theory of transitivity, and its correlation with the essays' overall empathy scores. We apply this framework to state-of-the-art and representative neural network models and show significant improvement in the empathy classification task for this dataset. Although previous research suggests that narrative-based interventions tend to be effective education-based methods, it is less clear what are some of the linguistic mechanisms through which narratives achieve such an effect, especially applied to empathy, which is another contribution of this research.
Related Work
In spite of its increasing theoretical and practical interest, empathy research in computational linguistics has been relatively sparse and limited to empathy recognition, empathetic response generation, or empathic language analysis in counselling sessions. Investigations of empathy as it relates to clinical practice have received even less attention given the inherent data and privacy concerns.
Most of the research on empathy detection has focused on spoken conversations or interactions, some in online platforms (e.g. Pérez-Rosas et al., 2017;Khanpour et al., 2017;Otterbacher et al., 2017;Sharma et al., 2021;Hosseini and Caragea, 2021), very little on narrative genre (Buechel et al., 2018;Wambsganss et al., 2021), and even less in clinical settings. Buechel et al. (2018) used crowdsourced workers to self-report their empathy and distress levels and to write empathic reactions to news stories. Wambsganss et al. (2021) built a text corpus of student peer reviews collected from a German business innovation class annotated for cognitive and affective empathy levels. Using Batson's Empathic Concern-Personal Distress Scale (Batson et al., 1987), Buechel et al. (2018) have focused only on negative empathy instances (i.e., pain and sadness "by witnessing another person's suffering"). However, empathy is not always negative (Fan et al., 2011). A dataset reflecting empathetic language should ideally allow for expressions of empathy that encompass a variety of emotions, and even distinguish between sympathy and empathy. 1 Following a multimodal approach to empathy prediction, R. M. Frankel (2000) and Cordella and Musgrave (2009) identify sequential patterns of empathy in video-recorded exchanges between medical graduates and cancer patients. Sharma et al. (2020) analyzed the discourse of conversations in online peer-to-peer support platforms. Novice writers were trained to improve low-empathy responses and provided writers with adequate feedback on how to recognize and interpret others' feelings or experiences. In follow-up research, they performed a set of experiments (Sharma et al., 2021) whose results seemed to indicate that empathic written discourse should be coherent, specific to the conversation at hand, and lexically diverse.
To our knowledge, no previous research has investigated the contribution of grammatical constructions like Halliday's transitivity system to the task of empathy detection in any genre, let alone in clinical education. 2
Self-reflective Narrative Essays in Medical Training
Simulation-based education (SBE) is an important and accepted practice of teaching, educating, training, and coaching health-care professionals in simulated environments (Bearman et al., 2019). Four decades-worth of SBE research has shown that "simulation technology, used under the right conditions . . . can have large and sustained effects on knowledge and skill acquisition and maintenance among medical learners" (McGaghie et al., 2014). In fact, simulation-based education, an umbrella term that covers a very broad spectrum of learning activities from communication skill role-playing to teamwork simulations, is known to contribute to shaping experiences in undergraduate and postgraduate medical, nursing and other health education. In all these activities, learners contextually enact a task which evokes a real-world situation allowing them to undertake it as if it were real, even though they know it is not (Dieckmann et al., 2007;Bearman, 2003). Personal narratives and storytelling can be viewed as central to social existence (Bruner, 1991), as stories of lived experience (Van Manen, 2016), or as a way in which one constructs notions of self (Ezzy, 1998). In this research, we focus on selfreflective narratives written by premed students given a simulated scenario. Simulation is strongly based on our first-person experiences since it relies on resources that are available to the simulator. In a simulation process, the writer puts themselves in the other's situation and asks "what would I do if I were in that situation?" Perspective taking is crucial for fostering affective abilities, enabling writers to imagine and learn about the emotions of others and to share them, too. As empathy is other-directed (De Vignemont and Jacob, 2012;Gallagher, 2012), this means that we, as narrators, are open to the experience and the life of the other, in their context, as we can understand it. Some evidence shows that we can take such reliance on narrative resources to open up the process toward a more enriched and non-simulationist narrative practice (i.e., real doctor-patients interactions in clinical context) (Gallagher, 2012).
This study's intervention was designed as a written assignment in which premed students were asked to consider a hypothetical scenario where they took the role of a physician breaking the news of an unfavorable diagnosis of high blood cholesterol to a middle-aged patient 3 . They were instructed to recount (using first person voice) the hypothetical doctor-patient interaction where they explained the diagnosis and prescribed medical treatment to the patient using layman terms and language they believed would comfort as well as persuade the hypothetical patient to adhere to their prescription. Prior to writing, students completed a standard empathic training reading assignment (Baile et al., 2000). They received the following prompt instructions and scenario information. 4 Prompt Instructions: Imagine yourself as a physician breaking bad news to a patient. Describe the dialogue between the patient and you, as their primary care physician. In your own words, write an essay reporting your recollection of the interaction as it happened (write in past tense). Think of how you would break this news if you were in this scenario in real life. In your essay, you should be reflecting on (1) how the patient felt during this scenario and (2) how you responded to your patient's questions in the scenario below.
Scenario: Betty is 32 years old, has a spouse, and two young children (age 3 and 5). You became Betty's general practitioner last year. Betty has no family history of heart disease. In the past 6 months, she has begun experiencing left-side chest pain. Betty's bloodwork has revealed that her cholesterol is dangerously high. Betty will require statin therapy and may benefit from a healthier diet and exercise.
With the students' consent, we collected a corpus of 774 essays over a period of one academic year (Shi et al., 2021). Following a thorough annotation process, annotators (undergraduate and graduate students in psychology and social work) 5 labeled a subset of 440 randomly selected essays at sentences level following established practices in psychology (Cuff et al., 2016;Eisenberg et al., 2006;Rameson et al., 2012). The labels are: cognitive empathy (the drive and ability to identify and understand another's emotional or mental states; e.g., "She looked tired"); affective empathy (the capacity to experience an appropriate emotion in response to another's emotional or mental state; e.g.: "I felt the pain"); and prosocial behavior (a response to having identified the perspective of another with the intention of acting upon the other's mental and/or emotional state; e.g.: "I reassured her this was the best way"). Everything else was "no empathy". The six paid undergraduate students were trained on the task and instructed to annotate the data. Two meta-annotators, paid graduate students with prior experience with the task, reviewed the work of the annotators and updated the annotation guidelines at regular intervals, in an iterative loop process after each batch of essays 6 . The meta-annotators reached a Cohen's kappa of 0.82, a good level of agreement. Disagreed cases were discussed and mitigated. At the end, all the essays were re-annotated per the most up-to-date guidelines.
In this paper, we collapsed all the affective, cognitive, and prosocial empathy labels into one Empathy Language label, since we are interested here only in empathic vs. non-empathic sentences. After integrating the annotations and storing the data for efficient search (Michalski and Girju, 2022), our corpus consisted of 10,120 data points (i.e., sentences) highlighted or not with empathy. Each essay was also rated by our annotators with a score on a scale from 1-5 (one being the lowest) to reflect overall empathy content at essay level. (The annotators were hired based on previous experience with similar projects in social work and psychology, and worked in batches of 10 essays per week.)
Constructions and Stylistic Profiles in Empathic Narrative Essays
In CxG, constructions can vary in size and complexity -i.e., morphemes, words, idioms, phrases, sentences. In this paper, we focus mainly on simple sentence-level constructions 7 , which, since we work with English, are typically of the form S V [O], where S is the subject, V is the verb, and O is the object (e.g., a thing, a location, an attribute). For instance, "Betty took my hand" matches the construction S V O with the semantics <Agent Predicate Goal>. SFG and CxG give the same semantic analysis, modulo some terminological differences (Lin and Peng, 2006). Specifically, they agree that the sentence above describes a process (or a predicate), which involves two participant roles providing the same linking relationship between the semantic and the syntactic structures: an Actor (or Agent) / Subject, and a Goal (Patient) / Object. We start by checking whether the subject of a sentence consists of a human or a non-human agent. After identifying the grammatical subjects in the dataset's sentences with the Python Spacy package, we manually checked the list of human agents (the five most frequent being I (24.56%), She (5.76%), Betty (18.43%), John (6.24%), Patient (4.86%)). 8 Halliday's transitivity model describes the way in which the world of our experience can be divided by grammar into a manageable set of process types, the most basic of which are: material processes (external actions or events in the world around us; e.g., verbs like "write", "walk", "kick") and mental processes (internal events; e.g., verbs of thinking, feeling, perceiving). We first identify sentences containing material and mental processes by extracting the verbs in each sentence (Table 1). About 75% of the dataset contains such processes, with material processes appearing more frequently than mental ones (by a small margin: 0.9%).
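A minimal sketch (not the authors' exact code) of how grammatical subjects and process types could be extracted with spaCy, as described above; the human-agent list and the material/mental verb lexicons shown here are illustrative assumptions, not the paper's actual resources.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

HUMAN_AGENTS = {"i", "she", "he", "betty", "john", "patient", "nurse", "doctor"}
MATERIAL_VERBS = {"write", "walk", "kick", "take", "sit"}       # external actions (assumed examples)
MENTAL_VERBS = {"think", "feel", "perceive", "notice", "know"}  # internal events (assumed examples)

def analyze_sentence(sentence):
    doc = nlp(sentence)
    # grammatical subjects (active or passive) and all verb lemmas in the sentence
    subjects = [tok for tok in doc if tok.dep_ in ("nsubj", "nsubjpass")]
    verbs = [tok.lemma_ for tok in doc if tok.pos_ == "VERB"]
    return {
        "human_subject": any(tok.text.lower() in HUMAN_AGENTS for tok in subjects),
        "material_process": any(v in MATERIAL_VERBS for v in verbs),
        "mental_process": any(v in MENTAL_VERBS for v in verbs),
    }

print(analyze_sentence("Betty took my hand."))
```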
Inspired by the success of Halliday's transitivity system on cognitive effects of linguistic constructions in literary texts (Nuttall, 2019), we also examine a set of construction choices which seem to co-occur in texts as material and mental actions or events. In our quest of understanding empathy expression in student narrative essays, we want to test if such contributions lead to a reduced sense of intentionality, awareness or control for the agentive individual represented (i.e., the essay writer in the role of the doctor), and thus, identifying the stylistic profile of the narrative. Specifically, these constructions are: Human Actor + Process (HA+P); Body Part + Process (BP+P); Other Inanimate Actor + Process (IA+P); Goal + Process (G+P) (see Table 1). We identify HA+P to be the most common construction within our dataset, appearing in just less than half of the sentences (49.82%). The remaining constructions are much rarer with G+P being the least frequent (12.54%).
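An illustrative sketch of how the four construction features (HA+P, BP+P, IA+P, G+P) could be flagged from a dependency parse; the body-part list and the passive-subject heuristic used for G+P are assumptions for illustration, not the authors' implementation.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
HUMAN_AGENTS = {"i", "she", "he", "betty", "john", "patient", "nurse", "doctor"}
BODY_PARTS = {"eye", "eyes", "hand", "hands", "head", "shoulder", "shoulders", "body"}

def construction_flags(sentence):
    doc = nlp(sentence)
    flags = {"HA+P": False, "BP+P": False, "IA+P": False, "G+P": False}
    for tok in doc:
        if tok.dep_ == "nsubj":            # active-voice actor of the process
            word = tok.text.lower()
            if word in HUMAN_AGENTS:
                flags["HA+P"] = True
            elif word in BODY_PARTS:
                flags["BP+P"] = True
            else:
                flags["IA+P"] = True
        elif tok.dep_ == "nsubjpass":      # the Goal surfaces as subject (assumed heuristic)
            flags["G+P"] = True
    return flags

print(construction_flags("John shook his head as he sat down."))
```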
Drawing from Langacker (1987), Nuttall (2019) also notes that these experiences can vary in force-dynamic (energetic) quality, and thus sentences exhibiting an energetic tone are linked with 'high' transitivity and those with lower or static energy can be linked to 'low' transitivity. In order to identify energetic sentences, we leverage the IBM Watson Tone Analyzer API (Yin et al., 2017), which assesses the emotions, social propensities, and language styles of a sentence. We denote sentences containing high extroversion and high confidence (values > 0.8) as energetic. Sentences with low scores are marked as static. 61.77% of the sentences exhibit a static tone, energetic tone being less frequent.
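A sketch of the thresholding step only; `get_tone_scores` below stands in for a call to the IBM Watson Tone Analyzer (its exact client API is not reproduced here) and is assumed to return per-trait scores in [0, 1].

```python
def label_energy(sentence, get_tone_scores, threshold=0.8):
    # get_tone_scores: callable returning e.g. {"extraversion": 0.91, "confident": 0.85, ...}
    scores = get_tone_scores(sentence)
    energetic = (scores.get("extraversion", 0.0) > threshold
                 and scores.get("confident", 0.0) > threshold)
    return "energetic" if energetic else "static"

# usage: label_energy("I will fix this today!", my_tone_client_wrapper)
```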
In SFG, active and passive voice plays an important role as well. Nuttall (2019) shows that, in some genres, text indicating a lower degree of agentive control tends to use more passive voice constructions. As this is also relevant to our task, we test whether voice indeed contributes to a reduced sense of intentionality, awareness or control for the Agent (in particular the essay writer playing the doctor's role) and how these features correlate with the overall empathy score at essay level. Using an in-house grammatical-role extraction tool developed on top of Spacy's dependency parser, we find that 66% of sentences use active voice and 34% passive voice (the active/passive voice ratio varies per genre (Strunk Jr and White, 2007); note that in a sentence using passive voice the subject is acted upon, which shows the main character's degree of detachment, of interest here). 77.92% of active-voice sentences exhibit human actor subjects and only 22.08% include non-human actors. Similarly for passive voice, the majority (83.09%) of sentences had human actors.

Figure 1: Frequency distribution (%) of voice in essays for various overall empathy score ranges

Comparing frequencies of active and passive voice across various essay empathy score ranges (Figure 1), we notice that higher empathy essays (scores > 3) seem to rely more on active voice (65-70% of the sentences in active voice) as opposed to lower empathy essays (scores < 3), which have less than 65% of sentences in active voice.
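A minimal stand-in for the in-house grammatical-role tool mentioned above: with spaCy's English dependency labels, passive clauses can be spotted via passive subjects and auxiliaries. This is an approximation for illustration, not the authors' tool.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def voice(sentence):
    doc = nlp(sentence)
    # English models label passive subjects "nsubjpass" and passive auxiliaries "auxpass"
    is_passive = any(tok.dep_ in ("nsubjpass", "auxpass") for tok in doc)
    return "passive" if is_passive else "active"

print(voice("I reassured her."))    # active
print(voice("She was reassured."))  # passive
```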
Stylistic research has also shown (Nuttall, 2019) the importance of movement of body parts as non-human agents. We, too, parsed sentences for the use of body parts, i.e., eyes, arms, head, and curated a list based on anatomical terminology as defined by wiktionary.org (2022), resulting in about 18.61% of the dataset sentences (statistics for the top 5 most common body parts are in Table 2). Table 1 summarizes all the identified constructions and stylistic features discussed in this section.
Empathy Classification Task
Our ultimate goal is to build an informed and performant classifier able to determine the degree of empathetic content of a medical essay overall and at sentence level. Taking advantage of form-meaning-style mappings in the language system, in this paper we build and test a number of state-of-the-art classifiers enriched with varied constructions and stylistic features (Table 1), which are described next.
Identification of Sentence Themes
In medical training, students learn not only how to diagnose and treat patients' medical conditions, but also how to witness the patient's illness experience. In fact, in practical interactions with patients, they often switch between these positions: empathizing with the patient's situation (i.e., witnessing what it is like for the patient), and providing medical care (i.e., understanding what they need medically).
As such, we wanted to capture the distribution of such empathetic content and medical information in our narrative essays of hypothetical doctor-patient interactions. Specifically, we looked at recurring topics within sentences and identified the following themes in our dataset at the sentence level: Medical Procedural Information; Empathetic Language; Both (Medical and Empathetic Language); and Neither. Sentences referring to Medical Procedural Information were identified based on keyword matching, following established medical term vocabulary generated from Dr. Kavita Ganesan's work on clinical concepts (Ganesan et al., 2016). Sentences containing Empathetic Language were already annotated manually by our annotators for each essay at the sentence level (see Section 3). Sentences containing both medical procedural info and empathetic content were marked as Both, while remaining sentences are marked as Neither. Table 3 shows these categories, their definitions, examples and counts per category (10,120 sentences overall). We also give examples of two essays highlighted with these themes in the Appendix (Section 7).
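A rough sketch of this theme-assignment logic could look as follows; the tiny medical vocabulary shown here is an illustrative stand-in for the clinical-concept lexicon of Ganesan et al. (2016), and the empathy labels come from the manual annotation described in Section 3.

```python
# Hypothetical mini-lexicon; the real resource is the clinical-concept vocabulary.
MEDICAL_TERMS = {"cholesterol", "statin", "bloodwork", "vitals", "diagnosis", "treatment"}

def theme(sentence, is_empathetic):
    has_medical = any(term in sentence.lower() for term in MEDICAL_TERMS)
    if has_medical and is_empathetic:
        return "Both"
    if has_medical:
        return "Medical Procedural Information"
    if is_empathetic:
        return "Empathetic Language"
    return "Neither"

print(theme("Betty will require statin therapy.", is_empathetic=False))
```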
In the next sections we present the classification results of various multi-class machine learning models (for each of the 4 themes: Medical Procedural Information, Empathetic Language, Both, and Neither).
Baseline Models and Analysis
In evaluating several state-of-the-art machine learning algorithms, we started with two representative baseline models: support vector machines (SVM) and logistic regression (logR). As we are interested in observing the performance of deep learning methods, we also experiment with long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), bidirectional long short-term memory (bi-LSTM) (Graves and Schmidhuber, 2005), and convolutional neural network (CNN) (Kim, 2014) models; additionally, we use the transformer models BERT (Devlin et al., 2018) and roBERTa.
Theme | Freq. | Example
Medical Procedural Information | 37.39% | "The patient's vitals showed that his body was not healthy and it was necessary to make some diet and lifestyle changes."
Empathetic Language | 36.49% | "I noticed Betty looked confused and so I tried to reassure her we would do everything possible to make the changes in her lifestyle."
Both | 21.28% | "I knew the statin treatment could be difficult, so I wanted to make sure Betty felt comfortable and understood the procedure."
Neither | 4.84% | "The file was left on the counter, and I picked it up before going in to see Betty."

As we are performing sentence classification, our features are unigrams (single words). For the logistic regression models, we used L2 regularization, and for the SVM models, a linear kernel function. We initialized the embedding layers in our neural models (LSTM, bi-LSTM, CNN) with GloVe embeddings since the expression of empathy involves larger units than words, and embeddings are known to better capture contextual information. We further decided to apply an attention layer to these models to learn patterns that may improve the classification. For the transformer BERT and roBERTa models, we use the default embeddings and apply a dropout layer with probability 0.4, which helps to regularize the model; we use a linear output layer and apply a sigmoid on the outputs. For each type of theme, we reserve an 80/20 training/test ratio, with 5-fold cross validation. As our dataset is imbalanced, we report the precision, recall, and F1-score (harmonic mean of the precision and recall) as shown in Table 4.
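To make the transformer setup above concrete, here is a minimal sketch (not the authors' code) of a BERT theme classifier with a dropout layer of probability 0.4, a linear output layer, and a sigmoid over the four theme outputs; the pretrained checkpoint name and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ThemeClassifier(nn.Module):
    def __init__(self, num_themes=4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # assumed checkpoint
        self.dropout = nn.Dropout(0.4)                               # dropout prob. from the text
        self.out = nn.Linear(self.bert.config.hidden_size, num_themes)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return torch.sigmoid(self.out(self.dropout(pooled)))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I noticed Betty looked confused."], return_tensors="pt", padding=True)
model = ThemeClassifier()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 4])
```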
We observe that the classification of Empathetic Language is particularly difficult. The best model is the transformer BERT model which achieves an F-1 score of 0.58. On the other hand, sentences with Medical Procedural Information are much easier to identify with most classifiers achieving an F-1 score above 0.65. Sentences labeled Both are increasingly difficult (best classifier score of 0.6 F-1). Classification scores for sentences containing Neither fall just short of scores from Medical Procedural Information sentences. To better understand how these themes correlate with the overall empathy score at essay level, we compare frequencies and distribution of each theme for various essay empathy score ranges (Figure 2) across the entire dataset. High empathy essays (scores >3) tend to show a large amount of Empathetic Language and Both, while low empathy essays (scores < 3) seem to favor Medical Procedural Information language.
Heatmaps of Medical Narrative Essays. It is also interesting to visually analyze the distribution of these themes in the layout of the narrative essays. Thus, for each essay, we highlight the sentences containing each theme and generate heat maps that might highlight high theme concentrations. We standardized the format of each essay to an A4 paper, 10 generating a 42 x 14 matrix. 11 For each essay and position -i.e., (row, column) -we note the occurrence of each theme. We then build a heat map from these counts, thus generating 3 heatmaps, one for each theme along the following overall empathy score ranges: (1-2), (2-3), (3-4), and (4-5) ( Figure 3).
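A sketch of the heatmap construction described above: each essay is mapped onto a 42 x 14 grid (lines by words on a standardized A4 page) and, for every (row, column) cell, we count how many essays express a given theme at that position. The input representation and plotting details are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

ROWS, COLS = 42, 14

def theme_heatmap(essays, theme_label):
    # essays: list of essays, each a list of (sentence, theme, row, col) tuples (assumed format)
    counts = np.zeros((ROWS, COLS))
    for essay in essays:
        for _, theme, row, col in essay:
            if theme == theme_label and row < ROWS and col < COLS:
                counts[row, col] += 1
    return counts

# plt.imshow(theme_heatmap(essays, "Empathetic Language"), cmap="viridis"); plt.show()
```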
The heatmaps for theme Medical Procedural Information for low empathy score essays show darker colors (purple) indicating a higher frequency of use at the beginning and middle of the essay. Lighter colors (orange and yellow) showcasing lower concentrations of the theme seems to be more prevalent in higher empathy score essays. Empathetic Language tends to increase in coverage (i.e., darker color portions) from low to high-score empathy essays, with a preference toward the end of the essay. 12 Both themes seem to concentrate, specifically towards the top and middle of the essays for high empathy scores (darker colors). Low empathy essays also show some shades of purple (i.e. some concentration) towards the bottom and lower third of the essays.
Incorporating Halliday Features into the Theme Classifier
In this section, we seek to improve our sentence theme classifier by incorporating the constructions and stylistic features identified in Section 4. For each sentence, we append a Boolean value indicating whether each feature is present in the given sentence -e.g., if a sentence is in active voice (feature Active is 1; feature Passive is 0); if the sentence contains a HA+P (feature value is 1), and so on.
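A sketch of how the Boolean construction and style features could be assembled per sentence before feeding constructionBERT; the exact way the features are combined with the BERT representation (e.g., concatenated with the pooled embedding before the output layer) is an assumption here.

```python
FEATURES = ["Active", "Passive", "HA+P", "BP+P", "IA+P", "G+P",
            "Material", "Mental", "Energetic", "Static"]

def feature_vector(sentence_analysis):
    # sentence_analysis: dict of feature name -> bool, produced by the extractors sketched above
    return [1 if sentence_analysis.get(name, False) else 0 for name in FEATURES]

print(feature_vector({"Active": True, "HA+P": True, "Material": True}))
# [1, 0, 1, 0, 0, 0, 1, 0, 0, 0]
```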
Since in our baseline experiments the BERT model gave the best results across all 4 themes, we extend it here with all the features (construction-BERT) and report new scores (see bottom part of Table 4). Indeed, the inclusion of these features yields better performance, with a large increase for most of our themes including, Empathetic Language, Both, and Neither, and smaller performance increases in Medical Procedural Information.
Leave-one-out feature contribution experiments (see bottom of Table 4) show that removing Voice: Active and Voice: Passive slightly decreases performance in Empathetic Language and Both (with Voice: Active providing the highest decrease).
Removing Processes also shows a fair decrease in all themes except Neither which shows no change in performance. A deeper analysis indicates that Processes: Material helps with Medical Procedural Information but hurts performance on Empathetic Language.
The constructions HA+P and BP+P are most important for classification; the removal of BP+P yields the lowest F-1 score measure for detecting empathy. This shows the doctor (i.e., the student writer) paid particular attention to the patient's emotional state (thus showing empathy). Body parts in this type of discourse are particularly associated with non-verbal emotional language, which is highly indicative of empathy. HA+P is also an important feature for the theme Neither. Removal of IE+P gives a slight decrease in performance, while G+P has almost no effect on the classification results. Finally, the Tone: Energetic and Tone: Static features (constructionBERT-Tone) show to be important for the themes Medical Procedural Information, Empathetic Language, and Both. For Tone: Energetic, there is a 0.02 decrease in F-1 for medical procedural information, and a 0.05 for Empathetic Language and Both. For Tone: Static, we observe a decrease in performance for Empathetic Language by 0.02 and Both by 0.01. With our binary classification task, we see similar patterns as constructionBERT-Tone yields much lower performances. The energetic and static tones yield 0.004 and 0.01 increases in F-1 scores for Medical Procedural Information and Empathetic Language. Our analysis also showed that G+P (Goal+Process), Processes (Mental and Material), and HA+P (Human Actor+Process) were also increasingly important for score improvements.
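The leave-one-out analysis above can be summarized as a simple loop; `train_and_evaluate` below is a hypothetical placeholder for the full training and scoring pipeline, not part of the paper.

```python
def leave_one_out(features, train_and_evaluate):
    # train_and_evaluate(feature_subset) -> F1 score (placeholder for the actual pipeline)
    full_score = train_and_evaluate(features)
    contributions = {}
    for feat in features:
        reduced = [f for f in features if f != feat]
        contributions[feat] = full_score - train_and_evaluate(reduced)
    return contributions  # larger drop means the removed feature mattered more
```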
Interested in directly comparing the Medical Procedural Information and Empathetic Language sentences, we further built a binary version of the simple BERT model, and another of constructionBERT, and found these tasks to be slightly easier. The binary BERT model achieved an F-1 score of 0.75 for Medical Procedural Information and a 0.62 for Empathetic Language. After adding the generated features (i.e., the binary constructionBERT), we see a small increase in F-1 scores (+0.01 for Medical Procedural Information and +0.03 for Empathetic Language).
Overall, the results of the effects of transitivity features on meaning, perceived agency and involvement of the Agent are in line with those obtained for literary genre texts by Nuttall (2019) through manual inspection. More specifically, the stylistic choices given by such linguistic constructions seem to be good indicators of the degree of perceived agency an Agent has in relation to others and the environment, as tested here for the empathy task on our dataset. In research on stylistics, the set and usage of such stylistic constructions and features in a text is known as the stylistic profile of the text. Encouraged by the correlations between Halliday's features with our essay level empathy scores, we would like to extrapolate and maintain that a set of rich stylistic constructions (like those tested in this research) can ultimately lead to informative Empathy Profiles -essay level form-meaning-style structures that can give an indication of the degree of social and empathetic detachment of the doctor toward the patient. Of course, while more research is needed in this direction, we believe we showed here the potential of such an approach to the task of empathy detection classification overall, and to clinical context in particular.
Conclusions
Medical education incorporates guided self-reflective practices that show how important it is for students to develop an awareness of the emotional and relational aspects of the clinical encounter with their patients (Warmington, 2019). The way people identify themselves and perform in particular roles and in relation to others brings together a specific set of values, attitudes, and competencies that can be supported through ongoing self-reflection. Such interactions can be captured in language via constructions as part of CxG and Halliday's transitivity system.
In this paper, we bring various aspects of these theories in a deep learning computational framework to model empathetic language in a corpus of essays written by premed students as narrated simulated patient-doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of first-person narrative essays.
Figure 2: Frequency distribution (%) of themes in essays for various empathy score ranges

Figure 3: Heatmaps for themes in sentences of narrative essays across all overall empathy score ranges: Row #1 shows heatmaps for Medical Procedural Information; Row #2 for Empathetic Language; Row #3 for Both. Dark colors (purple) indicate that many essays exhibit the theme in the respective position of the essay. Light colors (yellow) indicate a small number of essays have occurrences of the theme for the given position.
Table 1: Our set of SFG's transitivity constructions with their distribution and examples. Note that the total distribution should not add to 100%, as these are not mutually exclusive features.

Body Part | POS Used | Frequency | Example
Eye | subject, indirect object, prepositional object | 42.96% | "I saw in her eyes tears forming as she realized the gravity of the issue at hand."
Hand | subject, prepositional object, indirect object, direct object | 16.14% | "John began clasping his hands."
Head | direct object, indirect object | 8.60% | "John shook his head as he sat down across from me."
Shoulder | subject, prepositional object, direct object | 5.47% | "The patient shrugged his shoulders."
Body | subject, prepositional object, direct object | 4.99% | "The vitals showed that the patient's body was not in its healthiest form."

Table 2: Most common body parts in the empathy essay dataset
Table 3: Examples and distribution of identified themes in sentences

Table 4: Precision, recall and F1 scores of all baseline classifiers on the imbalanced test dataset: 770 Medical Procedural Information, 722 Empathetic Language, 433 Both, 98 Neither sentences (columns: Classifier; Prec., Rec., and F1 for Medical Procedural Information, Empathetic Language, Both, and Neither)
Footnotes:
1. Some studies don't seem to differentiate between sympathy and empathy (Rashkin et al., 2018; Lin et al., 2019).
2. Besides our own research (Shi et al., 2021; Michalski and Girju, 2022; Dey and Girju, 2022; ...).
3. The patient was referred to as Betty, initially. Later in the data collection, students could also identify the patient as John.
4. All data collected for this study adheres to the approved Institutional Review Board protocol.
7. We also consider constructions at word level, i.e., verbs.
8. Other subjects: Nurse, Doctor, Family, Children, Wife, Husband, and Spouse.
10. Times New Roman, size 12: 42 lines of 14 words each.
11. We generated a separate heatmap (size: 81 x 14) for 24 essays since these were much longer and didn't fit on a standard A4 paper. These showed similar position patterns.
12. A closer look indicates that students who wrote low-empathy essays showed a tendency to use some emotional language in the last paragraph, which appeared rather rushed and forced.
Appendix

Figure 4 shows two examples of essays, one with low empathy and one with high empathy, highlighted with the themes: Medical Procedural Information (cyan), Empathetic Language (yellow), and Both (green). Neither sentences are not highlighted. It is interesting to see that in Essay (a), the sentences mentioning diet and exercise were not identified as Medical Procedural Information given that they were not found in Dr. Kavita Ganesan's work on clinical concepts (Ganesan et al., 2016).
Walter F Baile, Robert Buckman, Renato Lenzi, Gary Glober, Estela A Beale, and Andrzej P Kudelka. 2000. Spikes-a six-step protocol for delivering bad news: application to the patient with cancer.
C Daniel Batson, Jim Fultz, and Patricia A Schoenrade. 1987. Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences. Journal of Personality, 55(1):19-39.
Margaret Bearman. 2003. Is virtual the same as real? Medical students' experiences of a virtual patient. Academic Medicine, 78(5):538-545.
Margaret Bearman, Jennene Greenhill, and Debra Nestel. 2019. The power of simulation: a large-scale narrative analysis of learners' experiences. Medical Education, 53(4):369-379.
Jerome Bruner. 1991. The narrative construction of reality. Critical Inquiry, 18(1):1-21.
Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Ungar, and Joao Sedoc. 2018. Modeling empathy and distress in reaction to news stories. arXiv preprint arXiv:1808.10399.
M. Cordella and S. Musgrave. 2009. Oral communication skills of international medical graduates: Assessing empathy in discourse. Communication and Medicine, 6(2):129-142.
Benjamin MP Cuff, Sarah J Brown, Laura Taylor, and Douglas J Howat. 2016. Empathy: A review of the concept. Emotion Review, 8(2):144-153.
Frédérique De Vignemont and Pierre Jacob. 2012. What is it like to feel another's pain? Philosophy of Science, 79(2):295-316.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Priyanka Dey and Roxana Girju. 2022. Enriching deep learning with frame semantics for empathy classification in medical narrative essays. In Proceedings of the 2022 Workshop on Health Text Mining and Information Analysis (LouHI), collocated with EMNLP, hybrid. Association for Computational Linguistics.
Peter Dieckmann, David Gaba, and Marcus Rall. 2007. Deepening the theoretical foundations of patient simulation as social practice. Simulation in Healthcare, 2(3):183-193.
Nancy Eisenberg, Richard A Fabes, and Tracy L Spinrad. 2006. Prosocial development. In Volume III. Social, Emotional, and Personality Development. John Wiley & Sons, Inc.
Douglas Ezzy. 1998. Theorizing narrative identity: Symbolic interactionism and hermeneutics. Sociological Quarterly, 39(2):239-252.
Y. Fan, N. W. Duncan, M. de Greck, and G. Northoff. 2011. Is there a core neural network in empathy? An fMRI based quantitative meta-analysis. Neuroscience Biobehavioral Review, 35(3):903-911.
Charles J Fillmore, Paul Kay, and Laura A Michaelis. 2006. Construction grammar. Center for the Study of Language and Information.
Roger Fowler. 1996. Linguistic Criticism. Oxford: Oxford University Press, 2nd edition.
Roger Fowler. 2013. Linguistics and the Novel. Routledge.
Shaun Gallagher. 2012. Empathy, simulation, and narrative. Science in Context, 25(3):355-381.
Kavita Ganesan, Shane Lloyd, and Vikren Sarkar. 2016. Discovering related clinical concepts using large amounts of clinical notes. Biomed Eng Comput Biol, 7(Suppl 2):27-33.
Roxana Girju and Marina Girju. 2022. Design considerations for an NLP-driven empathy and emotion interface for clinician training via telemedicine. In Proceedings of the Second Workshop on Bridging Human-Computer Interaction and Natural Language Processing, pages 21-27, Seattle, Washington. Association for Computational Linguistics.
Adele Goldberg. 1995. Constructions: A Construction Grammar Approach to Argument Structure. Chicago: The University of Chicago Press.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602-610.
Michael AK Halliday. 1994. An Introduction to Functional Grammar. London: Edward Arnold.
Michael AK Halliday and Ruqaiya Hasan. 1976. Cohesion in English. London & New York: Longman.
Michael AK Halliday. 2019. Linguistic function and literary style: An inquiry into the language of William Golding's 'The Inheritors'. In Essays in Modern Stylistics, pages 325-360. Routledge.
Michael Alexander Kirkwood Halliday and Christian MIM Matthiessen. 2014. An Introduction to Functional Grammar. Routledge.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
David L Hoover. 2004. Altered texts, altered worlds, altered styles. Language and Literature, 13(2):99-118.
Mahshid Hosseini and Cornelia Caragea. 2021. Distilling knowledge for empathy detection. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3713-3724, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Lesley Jeffries. 2017. Critical Stylistics: The Power of English. Bloomsbury Publishing.
Paul Kay et al. 1999. Grammatical constructions and linguistic generalizations: The what's X doing Y? construction. Language, 75(1):1-33.
Hamed Khanpour, Cornelia Caragea, and Prakhar Biyani. 2017. Identifying empathetic messages in online health communities. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 246-251.
Daniel Kies. 1992. The uses of passivity: Suppressing agency in Nineteen Eighty-Four. Advances in Systemic Linguistics: Recent Theory and Practice, pages 229-250.
Yoon Kim. 2014. Convolutional neural networks for sentence classification.
Ronald W Langacker. 1987. Foundations of Cognitive Grammar: Theoretical Prerequisites, volume 1. Stanford University Press.
FY Lin and AX Peng. 2006. Systemic functional grammar and construction grammar. In Presented during the 33rd International Systemic Functional Congress, pages 331-347.
Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. MoEL: Mixture of empathetic listeners. arXiv preprint arXiv:1908.07687.
William C McGaghie, Saul B Issenberg, Jeffrey H Barsuk, and Diane B Wayne. 2014. A critical review of simulation-based mastery learning with translational outcomes. Medical Education, 48(4):375-385.
Enrique Menéndez. 2017. Christopher Hart: Discourse, grammar and ideology. Pragmática Sociocultural/Sociocultural Pragmatics, 5(2):259-262.
Laura R Micciche. 2004. Making a case for rhetorical grammar. College Composition and Communication, pages 716-737.
Martin Michalski and Roxana Girju. 2022. An empathy account of premed students' narrative essays. OSF Preprints.
Louise Nuttall. 2019. Transitivity, agency, mind style: What's the lowest common denominator? Language and Literature, 28(2):159-179.
Jahna Otterbacher, Chee Siang Ang, Marina Litvak, and David Atkins. 2017. Show me you care: Trait empathy, linguistic style, and mimicry on Facebook. ACM Transactions on Internet Technology (TOIT), 17(1):1-22.
Verónica Pérez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, and Lawrence An. 2017. Understanding and predicting empathic behavior in counseling therapy. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1435.
R. M. Frankel. 2000. The (socio)linguistic turn in physician-patient communication research. Georgetown University Press, Boston, MA.
Lian T Rameson, Sylvia A Morelli, and Matthew D Lieberman. 2012. The neural correlates of empathy: Experience, automaticity, and prosocial behavior. Journal of Cognitive Neuroscience, 24(1):235-245.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207.
Ashish Sharma, Inna W Lin, Adam S Miner, David C Atkins, and Tim Althoff. 2021. Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach. In Proceedings of the Web Conference 2021, pages 194-205.
Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276, Online. Association for Computational Linguistics.
Shuju Shi, Yinglun Sun, Jose Zavala, Jeffrey Moore, and Roxana Girju. 2021. Modeling clinical empathy in narrative essays. In 2021 IEEE 15th International Conference on Semantic Computing (ICSC), pages 215-220.
Paul Simpson. 2003. Language, Ideology and Point of View. Routledge.
Paul Simpson and Patricia Canning. 2014. Action and event. In The Cambridge Handbook of Stylistics, pages 281-299. Cambridge University Press.
William Strunk Jr and Elwyn Brooks White. 2007. The Elements of Style Illustrated. Penguin.
Teun A Van Dijk. 2017. Discourse and power.
Max Van Manen. 2016. Researching Lived Experience: Human Science for an Action Sensitive Pedagogy. Routledge.
Robert D Van Valin. 2007. Adele E. Goldberg, Constructions at Work: The Nature of Generalization in Language. Oxford: Oxford University Press, 2006. Pp. vii+280. Journal of Linguistics, 43(1):234-240.
Thiemo Wambsganss, Christina Niklaus, Matthias Söllner, Siegfried Handschuh, and Jan Marco Leimeister. 2021. Supporting cognitive and emotional empathic writing of students. arXiv preprint arXiv:2105.14815.
Sally G Warmington. 2019. Storytelling Encounters as Medical Education: Crafting Relational Identity. Routledge.
wiktionary.org. 2022. Appendix: Visual dictionary/human body - body parts. [Online; accessed 29-October-2022].
Peifeng Yin, Zhe Liu, Anbang Xu, and Taiga Nakamura. 2017. Tone analyzer for online customer service: An unsupervised model with interfered training. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM '17), pages 1887-1895, New York, NY, USA. Association for Computing Machinery.
| [] |
[
"Mixup Decoding for Diverse Machine Translation",
"Mixup Decoding for Diverse Machine Translation"
] | [
"Jicheng Li lijicheng@ict.ac.cn ",
"† ",
"Pengzhi Gao gaopengzhi@baidu.com ",
"Xuanfu Wu wuxuanfu20s@ict.ac.cn ",
"Yang Feng fengyang@ict.ac.cn ",
"Zhongjun He hezhongjun@baidu.com ",
"Hua Wu wu_hua@baidu.com ",
"Haifeng Wang wanghaifeng@baidu.com ",
"\nInstitute of Computing Technology\nKey Laboratory of Intelligent Information Processing\nChinese Academy of Sciences (ICT/CAS\n\n",
"\nUniversity of Chinese Academy of Sciences\n3 Baidu Inc. No. 10, Shangdi 10th Street100085BeijingChina\n"
] | [
"Institute of Computing Technology\nKey Laboratory of Intelligent Information Processing\nChinese Academy of Sciences (ICT/CAS\n",
"University of Chinese Academy of Sciences\n3 Baidu Inc. No. 10, Shangdi 10th Street100085BeijingChina"
] | [] | Diverse machine translation aims at generating various target language translations for a given source language sentence. To leverage the linear relationship in the sentence latent space introduced by the mixup training, we propose a novel method, MixDiversity, to generate different translations for the input sentence by linearly interpolating it with different sentence pairs sampled from the training corpus during decoding. To further improve the faithfulness and diversity of the translations, we propose two simple but effective approaches to select diverse sentence pairs in the training corpus and adjust the interpolation weight for each pair correspondingly. Moreover, by controlling the interpolation weight, our method can achieve the trade-off between faithfulness and diversity without any additional training, which is required in most of the previous methods. Experiments on WMT'16 en→ro, WMT'14 en→de, and WMT'17 zh→en are conducted to show that our method substantially outperforms all previous diverse machine translation methods. | 10.18653/v1/2021.findings-emnlp.29 | [
"https://arxiv.org/pdf/2109.03402v2.pdf"
] | 237,439,221 | 2109.03402 | 8953b9df5249b62f2705590912713d8f42e29ee4 |
Mixup Decoding for Diverse Machine Translation
Jicheng Li lijicheng@ict.ac.cn
†
Pengzhi Gao gaopengzhi@baidu.com
Xuanfu Wu wuxuanfu20s@ict.ac.cn
Yang Feng fengyang@ict.ac.cn
Zhongjun He hezhongjun@baidu.com
Hua Wu wu_hua@baidu.com
Haifeng Wang wanghaifeng@baidu.com
Institute of Computing Technology
Key Laboratory of Intelligent Information Processing
Chinese Academy of Sciences (ICT/CAS)
University of Chinese Academy of Sciences
3 Baidu Inc., No. 10, Shangdi 10th Street, 100085, Beijing, China
Mixup Decoding for Diverse Machine Translation
Diverse machine translation aims at generating various target language translations for a given source language sentence. To leverage the linear relationship in the sentence latent space introduced by the mixup training, we propose a novel method, MixDiversity, to generate different translations for the input sentence by linearly interpolating it with different sentence pairs sampled from the training corpus during decoding. To further improve the faithfulness and diversity of the translations, we propose two simple but effective approaches to select diverse sentence pairs in the training corpus and adjust the interpolation weight for each pair correspondingly. Moreover, by controlling the interpolation weight, our method can achieve the trade-off between faithfulness and diversity without any additional training, which is required in most of the previous methods. Experiments on WMT'16 en→ro, WMT'14 en→de, and WMT'17 zh→en are conducted to show that our method substantially outperforms all previous diverse machine translation methods.
Introduction
Neural machine translation (NMT) (Sutskever et al., 2014; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Ott et al., 2018) has achieved significant success in improving the quality of machine translation. Despite these successes, NMT still faces problems in translation diversity (Vanmassenhove et al., 2019; Gu et al., 2020). Due to the existence of lexical diversity, syntactic diversity and synonymous words in the target language, one source language sentence usually corresponds to multiple proper translations. However, existing NMT models mostly consider the one-to-one mapping but neglect the one-to-many mapping between the source and target languages. † This work was done when Jicheng Li was interning at Baidu Inc., China. * Yang Feng is the corresponding author of the paper.
[Figure 1: mix sentence pairs → Neural Machine Translation Model → diverse translations]

Many studies have been proposed to tackle such issues by exploiting the diversity in the model space, such as using different experts (Shen et al., 2019), applying different multi-head attentions (Sun et al., 2020), and utilizing different models (Wu et al., 2020). Although the model-oriented methods have been well studied, the data-oriented method still lacks exploration.
In this work, we focus on improving the translation diversity by exploiting the diversity in the sentence space. Since different translations of one source sentence share the same semantics, their sentence-level embeddings will gather in the same region in the target sentence space. In other words, each sentence in this region is a translation of the source sentence. By sampling different sentences from this region, we can obtain various translations. To sample different translations from this region, we propose a simple but effective method, MixDiversity. As aforementioned, the NMT model learns a one-to-one mapping between the source and target languages. Given the source sentence and the generated tokens in the decoder, the NMT model can map the source sentence into a corresponding target sentence. Therefore, to obtain various translations on the target side, we need to find the corresponding inputs for the NMT model. By mixing the source sentence with the sampled sentence pairs in the training corpus via linear interpolation, we can obtain mixed sentences as inputs for the NMT model and map them into a corresponding sentence in the target sentence space. By assigning a larger interpolation weight for the source sentence, the mixed sentence then has similar semantics, and the corresponding translation has higher faithfulness to the source sentence. In this way, by mixing the source sentence with different sentence pairs during decoding, we can obtain diverse mixed sentences as inputs for the NMT model and map them to different translations for the source sentence.
Given that NMT models are non-linear functions, the interpolation weight of the input sentences could decline, and the semantics of the output could shift to the randomly sampled sentence pairs. To guarantee the consistency of the interpolation weight during decoding, we force the NMT model to learn to maintain the proportion between the mixed sentences with the mixup training strategy (Guo et al., 2020), which linearly interpolates two randomly sampled sentence pairs in both encoder and decoder during training. The main idea of our approach is illustrated in Figure 1, where we mix one source sentence with four different sentence pairs sampled from the training corpus to obtain four variant mixed samples as inputs for the NMT model and map the mixed sentences to four diverse sentences in the target space.
MixDiversity
Overview
During training, we linearly interpolate word embeddings of two randomly sampled sentence pairs on both the source and target sides. During inference, since the corresponding target sentence of the input can not be obtained in advance, we interpolate word embeddings of previously generated tokens and the sampled target sentence in the decoder. Note that the MixDiversity can also be used without the Mixup Training.
Mixup Training for NMT
We apply the mixup training (Guo et al., 2020) to encourage the NMT model to learn the linear relationship in the latent space of the input sentences. Consider a pair of training samples $(x_i, y_i)$ and $(x_j, y_j)$ in the parallel corpus $S$, where $x_i$ and $x_j$ denote the source sentences, and $y_i$ and $y_j$ denote the target sentences. The synthetic sample $(x_{ij}, y_{ij})$ is generated as follows:

$$x_{ij} = \lambda x_i + (1 - \lambda) x_j, \qquad y_{ij} = \lambda y_i + (1 - \lambda) y_j,$$

where $\lambda$ is drawn from a Beta distribution $\mathrm{Beta}(\alpha, \alpha)$ with a hyper-parameter $\alpha$. The synthetic sample $(x_{ij}, y_{ij})$ is then fed into the NMT model for training to minimize the empirical risk:

$$L(\theta) = \mathbb{E}_{(x_i, y_i) \in S,\, (x_j, y_j) \in S}\big[\,\ell\big(f(x_{ij}, y_{ij}; \theta),\, \ddot{y}_{ij}\big)\big], \quad (1)$$

where $\ell$ denotes the cross entropy loss, $\theta$ is a set of model parameters, $f(*)$ is the probability predictions of the NMT model,

$$\ddot{y}_{ij} = \lambda \ddot{y}_i + (1 - \lambda) \ddot{y}_j, \quad (2)$$

and $\ddot{y}_i$ and $\ddot{y}_j$ are the sequences of one-hot label vectors for $y_i$ and $y_j$ respectively.
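To make the interpolation above concrete, here is a minimal PyTorch sketch of sequence-level mixup on toy tensors; it is illustrative only (not the authors' released code), and the shapes, dimensions, and variable names are assumptions.

import torch

# Toy stand-ins: x_i, x_j are embedded source sequences [len, dim];
# yh_i, yh_j are one-hot target label sequences [len, vocab].
torch.manual_seed(0)
x_i, x_j = torch.randn(7, 512), torch.randn(7, 512)
yh_i = torch.eye(10)[torch.randint(10, (9,))]
yh_j = torch.eye(10)[torch.randint(10, (9,))]

alpha = 1.0                                   # hyper-parameter of Beta(alpha, alpha)
lam = torch.distributions.Beta(alpha, alpha).sample()

x_mix = lam * x_i + (1 - lam) * x_j           # mixed source embeddings
yh_mix = lam * yh_i + (1 - lam) * yh_j        # mixed one-hot labels, Eq. (2)
# The NMT model is then trained on the mixed source/target embeddings with the
# cross entropy loss computed against yh_mix, as in Eq. (1).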
Mixup Decoding for Diverse MT
At inference, assume $x = x_1, \ldots, x_I$ corresponds to the source sentence with length $I$. We mix it with $K$ different sentence pairs $(x^1, y^1), \ldots, (x^K, y^K)$ selected from the training corpus to generate $K$ different translations of $x$. Specifically, for the $i$-th translation, we first interpolate the token embeddings of $x$ with the token embeddings of $x^i$ on the encoder side:

$$\hat{e}(x^i_t) = \lambda^i_t\, e(x_t) + (1 - \lambda^i_t)\, e(x^i_t), \quad \forall t \in [1, I]. \quad (3)$$

The encoder then maps the mixed token embeddings $\hat{e}(x^i_1), \ldots, \hat{e}(x^i_I)$ into the corresponding hidden representations $h^i$.

On the decoder side, at step $t$, we mix the embedding of the token $y_{t-1}$, which is predicted by the NMT model at step $t-1$, with the embedding of $y^i_{t-1}$ as follows:

$$\hat{e}(y^i_{t-1}) = \lambda^i_t\, e(y_{t-1}) + (1 - \lambda^i_t)\, e(y^i_{t-1}), \quad (4)$$

where $y_0$ and $y^i_0$ are the special beginning-of-sentence symbol $\langle bos \rangle$. The predicted token $y_t$ is then calculated by

$$y_t = \operatorname{argmax}_{y \in V_y} P\big(y \mid h^i, \hat{e}(y^i_{t-1}); \theta\big), \quad t \geq 1, \quad (5)$$

where $V_y$ is the vocabulary of the target language. Note that the $\lambda^i_t$'s in (3) and (4) are drawn from the Beta distribution $\mathrm{Beta}(\alpha, \alpha)$ with the same $\alpha$ for different $t$ and $i$.
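As a purely illustrative sketch of the encoder-side mixing in Eq. (3) (the real method also mixes decoder-side embeddings as in Eq. (4) and runs a full NMT encoder/decoder with beam search, all of which are omitted here; the names and toy sizes are assumptions):

import torch

torch.manual_seed(0)
emb = torch.nn.Embedding(100, 16)              # toy shared embedding table

x     = torch.tensor([5, 7, 9, 3])             # ids of the input sentence x
x_bar = torch.tensor([11, 2, 42, 8])           # ids of a sampled source sentence x^i

# One interpolation weight per position t (a fixed alpha is used here for brevity).
lam_t = torch.distributions.Beta(1.0, 1.0).sample((x.size(0),))
lam_t = torch.maximum(lam_t, 1 - lam_t)        # keep the input sentence dominant

# Eq. (3): mixed token embeddings fed to the encoder in place of emb(x).
e_mix = lam_t.unsqueeze(-1) * emb(x) + (1 - lam_t).unsqueeze(-1) * emb(x_bar)
print(e_mix.shape)                             # torch.Size([4, 16])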
Select Sentence Pairs by Source Length
We first group sentence pairs in the training corpus by their source sentence lengths and then randomly select $K$ sentence pairs $(x^1, y^1), \ldots, (x^K, y^K)$ from the groups that have similar length compared with the input sentence. Specifically, given an input sentence with length $I$, we sample sentence pairs from the groups with lengths in the range of $[I - 1, I]$.
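The length-based selection can be sketched as below (hypothetical helper code; the range [I − 1, I] follows the text above, and the corpus here is a toy stand-in):

import random
from collections import defaultdict

def sample_pairs_by_length(train_pairs, src_len, k, seed=0):
    # train_pairs: list of (source_tokens, target_tokens); illustrative sketch only.
    groups = defaultdict(list)
    for pair in train_pairs:
        groups[len(pair[0])].append(pair)
    candidates = groups[src_len - 1] + groups[src_len]
    rng = random.Random(seed)
    return rng.sample(candidates, min(k, len(candidates)))

# Toy corpus: source length n, target length n + 1.
corpus = [(["w"] * n, ["v"] * (n + 1)) for n in range(3, 30) for _ in range(5)]
print(len(sample_pairs_by_length(corpus, src_len=10, k=5)))   # 5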
Adjust Interpolation Weight by Similarity

In order to correctly translate the semantics of the input sentence, $x$ needs to dominate the mixed samples. Different sentences in $(x^1, y^1), \ldots, (x^K, y^K)$ may have different similarity with $x$, and a higher similarity between $x^i$ and $x$ implies a looser constraint on the interpolation weight between them. Thus, taking the similarity between $x^i$ and $x$ into account, we sample the interpolation weight $\lambda^i_t$ from the Beta distribution as follows:

$$\lambda^i_t \sim \mathrm{Beta}(\alpha^i, \alpha^i), \qquad \alpha^i = \tau + \frac{\tau}{d(x, x^i)}, \quad (6)$$

where $\tau$ is a hyper-parameter to control the interpolation weight, and $d(*)$ is the Euclidean distance between the embeddings of two sentences, which are defined as the average among all token embeddings in the sentence. In our implementation, $\lambda^i_t$ is actually set to be $\max(\lambda^i_t, 1 - \lambda^i_t)$. The larger the distance between $x$ and $x^i$ is, the larger the interpolation weight $\lambda^i_t$ we have, which leads to dynamically adjusting the interpolation weight based on the sentence similarity.
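A small sketch of Eq. (6) and the max(λ, 1 − λ) step; the flattened formula is read here as α^i = τ + τ / d(x, x^i), and the tensors and function names are illustrative assumptions rather than the authors' code:

import torch

def sample_mix_weight(src_tok_emb, pair_tok_emb, tau=0.3):
    # src_tok_emb, pair_tok_emb: [len, dim] token embeddings of x and x^i.
    x_vec = src_tok_emb.mean(dim=0)            # sentence embedding = mean of token embeddings
    xi_vec = pair_tok_emb.mean(dim=0)
    dist = torch.dist(x_vec, xi_vec)           # Euclidean distance d(x, x^i)
    alpha_i = tau + tau / dist                 # larger distance -> smaller alpha_i
    lam = torch.distributions.Beta(alpha_i, alpha_i).sample()
    return torch.maximum(lam, 1 - lam)         # keep the input sentence dominant

torch.manual_seed(0)
print(float(sample_mix_weight(torch.randn(6, 512), torch.randn(8, 512))))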
Experimental Setup
Data Description
Our experiments consider three translation datasets: WMT'16 English-Romanian (en→ro), WMT'14 English-German (en→de), and WMT'17 Chinese-English (zh→en). All sentences are preprocessed with byte-pair-encoding (BPE) (Sennrich et al., 2016). For WMT'16 en→ro, we use the preprocessed dataset released in Lee et al. (2018), which contains 0.6M sentence pairs. We use newsdev-2016 as the validation set and newstest-2016 as the test set. We build a shared vocabulary with 40K BPE types. For WMT'14 en→de, it consists of 4.5M training sentence pairs, and we use newstest-2013 for validation and newstest-2014 for test. We build a shared vocabulary with 32K BPE types. For WMT'17 zh→en, it consists of 20.1M training sentence pairs, and we use devtest-2017 as the validation set and newstest-2017 as the test set. We build the source and target vocabularies with 32K BPE types separately.
Model Configuration
We apply a standard 6-layer Transformer Base model (Vaswani et al., 2017) with 8 attention heads, embedding size 512, and FFN layer dimension 2048. We use label smoothing (Szegedy et al., 2016) with $\epsilon = 0.1$ and the Adam (Kingma and Ba, 2015) optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 10^{-9}$. We set the learning rate as 0.0007 with 4000 warmup steps from the initialized learning rate of $10^{-7}$. The NMT model is trained with dropout 0.1 and max tokens 4096. When adopting the mixup training strategy, we set α as 1.0, 0.1 and 0.1 for en→ro, en→de and zh→en respectively. We train our model on 4 NVIDIA V100 GPUs until it converges. At the inference time, we set beam size as 4 with length penalty 0.6.
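For reference, the hyper-parameters listed above can be collected in one place (a plain restatement of the stated values, not a runnable training script; the key names are assumptions):

transformer_base_config = {
    "layers": 6, "attention_heads": 8, "embed_dim": 512, "ffn_dim": 2048,
    "label_smoothing": 0.1,
    "optimizer": "adam", "adam_betas": (0.9, 0.98), "adam_eps": 1e-9,
    "lr": 7e-4, "warmup_steps": 4000, "init_lr": 1e-7,
    "dropout": 0.1, "max_tokens": 4096,
    "mixup_alpha": {"en-ro": 1.0, "en-de": 0.1, "zh-en": 0.1},
    "beam_size": 4, "length_penalty": 0.6,
}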
Evaluation Metrics
Referring to Wu et al. (2020), we adopt the average BLEU with reference (rfb) to measure the faithfulness of different translations to the input sentence and the average pairwise-BLEU (pwb) to measure the pair-wise similarity between different translations. The higher rfb, the better accuracy of the translations. The lower pwb, the better diversity of the translations. In our experiments, given one input sentence, we generate five different translations for all methods. When we calculate Diversity Enhancement per Quality (DEQ) (Sun et al., 2020) to evaluate the overall performance of different methods, we find that the DEQ results are not stable. For instance, the DEQ scores of ConcreteDropout in Figure 2 (from the leftmost point to the rightmost point) are 12.65, 15.69, 28.21, -24.83, and 30.61, where positive and negative scores appear alternately. We thus propose a new metric, Euclidean Distance from the ultimate Aim (EDA), to evaluate the overall quality of the results synthetically.
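For concreteness, the two BLEU-based metrics defined above can be approximated per source sentence as below (using the sacrebleu package; this per-sentence averaging is only one straightforward reading of the description, not necessarily the exact evaluation script used in the paper):

import itertools
import sacrebleu

def rfb_pwb(hypotheses, reference):
    # hypotheses: K translations of one source sentence; reference: its gold translation.
    rfb = sum(sacrebleu.sentence_bleu(h, [reference]).score for h in hypotheses) / len(hypotheses)
    pairs = list(itertools.permutations(hypotheses, 2))
    pwb = sum(sacrebleu.sentence_bleu(h, [r]).score for h, r in pairs) / len(pairs)
    return rfb, pwb

print(rfb_pwb(["the cat sat on the mat .", "a cat sits on the mat ."],
              "the cat sat on the mat ."))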
Consider rfb and pwb as the abscissa and the ordinate of a coordinate system, where $0 \leq \text{rfb} \leq R$ and $0 \leq \text{pwb} \leq P$. $R$ is the baseline BLEU, which is defined as the BLEU score of the top one translation by beam search decoding with beam size 4 in our experiments. $P = 100$ is the maximal pwb. Different results with specific rfb and pwb scores could be mapped to different points in this coordinate system. The ultimate aim of the diverse machine translation task is to reach the point $(R, 0)$. By measuring the Euclidean distance between $(R, 0)$ and the result, we can evaluate the overall quality of the result.

We, however, notice that rfb and pwb have different ranges ($P > R$), and pwb decreases much faster than rfb with the changing of $\tau$. As a consequence, the calculated EDA is biased to the results with the lower pwb scores. To alleviate such bias, we normalize the values of rfb and pwb to $[0, 1]$ by dividing by $R$ and $P$ respectively and add a weight $\omega = \frac{R}{P}$ on the pwb term, shown as follows:

$$\mathrm{EDA} = 100\% \cdot \sqrt{\left(\frac{R - \text{rfb}}{R}\right)^2 + \omega^2 \left(\frac{0 - \text{pwb}}{P}\right)^2}.$$

Note that different training strategies lead to different baseline BLEU $R$. Table 1 shows the baseline BLEU of Transformer in each dataset. When we use EDA to evaluate the performance of ConcreteDropout in Figure 2, we get 18.49, 18.69, 19.61, 21.1, and 22.95. This result shows that EDA is a better and more stable overall evaluation metric than DEQ for the diverse machine translation.
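Under the reconstruction above (a weighted Euclidean distance to the ideal point (R, 0) with ω = R/P), EDA can be computed as follows; the example inputs are arbitrary illustrative values:

import math

def eda(rfb, pwb, R, P=100.0):
    omega = R / P
    return 100.0 * math.sqrt(((R - rfb) / R) ** 2 + (omega * pwb / P) ** 2)

# e.g. with the Mixup baseline BLEU for en->de from Table 1 (R = 27.70):
print(round(eda(rfb=24.0, pwb=50.0, R=27.70), 2))   # about 19.2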
Experimental Results
Main Results
We show the results of different methods on generating diverse translations in Table 2. We compare our method with the conventional beam search decoding (BeamSearch) and the existing model-oriented methods, including DiverseBS, HardMoE, HeadSample, and ConcreteDropout. For each method, we exhibit its best result with the lowest EDA score. We can see that MixDiversity gets lower EDA scores than all existing methods in all three datasets, and the performance of MixDiversity without the mixup training also outperforms other competitors on WMT'14 en→de and WMT'16 zh→en with lower EDA scores. Figure 2 shows the trade-off results between the reference BLEU and the pair-wise BLEU on WMT'14 en→de. We can see that, mixup training or not, MixDiversity generally performs better than all other methods without additional training or finetuning, which is required in most previous methods, such as HardMoE.
Ablation Study
The results of the ablation study are shown in Table 3. In the first experiment, we evaluate the performance of our method with different settings: training NMT models without the mixup strategy (w/o Mixup Training), decoding by randomly selecting $K$ sentence pairs from the entire training corpus (w/o LenSelection), and sampling the interpolation weights without considering similarities between $x$ and $x^i$ (w/o SimWeight). In the second experiment, we not only attempt to mix the input sentence with Gaussian noise drawn from $N(0, 2)$, but we also mix the input sentence with synthetic sentence pairs which are made up of tokens that are randomly sampled from the vocabulary. In both cases, we observe remarkable increases in EDA. Such a phenomenon indicates that the potential linguistic features in training samples could assist MixDiversity in generating different translations of high diversity and faithfulness. In the last experiment, we verify the rationality and effectiveness of the mixup operations in both encoder and decoder.
Applications of Diverse Translation
In Table 4, we compare MixDiversity with BeamSearch (BS) to show the application of diverse translation methods on boosting the performance of both Back Translation and Knowledge Distillation. We generate sentences with a beam size of 5 for all methods. For BeamSearch (Top 5) and MixDiversity, we generate five different translations. In the Back Translation experiment, we randomly sample 4M sentences from the German monolingual corpus distributed in WMT'18 and combine the original parallel corpus with the back-translated parallel corpus to train the NMT model. In the Data Distillation experiment, we train the student NMT model with the generated sentences of the teacher NMT model.
Related Work
Many studies have been proposed to improve the translation diversity by exploiting the diversity in the model space. Li et al. (2016) and Vijayakumar et al. (2016) adopt various regularization terms in the beam search decoding to encourage generating diverse outputs. He et al. (2018) generates different translations by incorporating condition signals of different models. Shen et al. (2019) proposes training NMT models with the mixture of experts method and generates diverse translations using different latent variables of different experts. Shu et al. (2019) generates diverse translations conditioned on different sentence codes. Sun et al. (2020) discovers that encoder-decoder multi-head attention in Transformer learns multiple target-source alignments and generates diverse translations by sampling different heads in the attention modules. Wu et al. (2020) samples different models from a posterior model distribution and employs variational inference to control the diversity of translations.
Conclusion
In this work, we propose a novel method, MixDiversity, for the diverse machine translation. Compared with the previous model-oriented methods, MixDiversity is a data-oriented method that generates different translations of the input sentence by utilizing the diversity in the sentence latent space. We also propose two simple but effective methods to select the mixup samples and adjust the mixup weights for each sample. To evaluate the overall performance synthetically, we design a new evaluation metric, EDA. Experimental results show that MixDiversity outperforms all previous methods in the field of diverse machine translation.
A Methods for Comparison
In our experiments, we set k = 5 and compare our method with the following works:
• BeamSearch (BS): In our experiments, we choose the top k sentences generated by beam search decoding as the result.
• DiverseBeamSearch (DiverseBS) (Vijayakumar et al., 2016): It generates diverse translations by grouping sentences in the beam search decoding with a regularization term to guarantee the diversity between different groups. We set the number of groups as k, and each group includes two sentences in our experiments.
• HardMoE (Shen et al., 2019): It first trains the model with k different hidden states and then generates different translations with different hidden states.
• HeadSample (Sun et al., 2020): It generates different outputs by sampling different heads in multi-head attention modules. In our experiments, we set the number of heads to be sampled as 3.
• ConcreteDropout (Wu et al., 2020): It generates different outputs by sampling different models from the model distribution using variational inference.
B Trade-off between reference BLEU and pair-wise BLEU

Figure 3 shows the trade-off results between reference BLEU and pair-wise BLEU in WMT'16 en→ro and WMT'17 zh→en. From the results in both en→ro and zh→en, we find that the lines of the MixDiversity and the ConcreteDropout overlap with each other. In addition, the ConcreteDropout needs to finetune the translation model under different configurations to achieve different trade-off results between the BLEU and the pair-wise BLEU. While the HardMoE needs to retrain the whole model with different settings of the number of experts so as to achieve the trade-off between the two BLEU scores. Besides, the performance of the HeadSample is unstable with different numbers of the sampled heads. In contrast, the MixDiversity can achieve the trade-off between the two BLEU scores by the hyper-parameter τ without any additional training or finetuning time.
C Case Study
In Table 6, we illustrate a case of outputs from the MixDiversity and the BeamSearch in WMT'17 zh→en. For the MixDiversity, we show the translation results under different τ. When τ = 0.15, the 5 outputs of the MixDiversity follow a similar sentence pattern "So man/human hopes one day to ...". When the value of τ increases from 0.15 to 0.35, both the number of sentence patterns and the number of subjects in the 5 generated translations are expanded and the differences between translations also become more obvious.
Source 因此 , 人类 希望 有朝一日 在 火星 建立 居住 基地 , 最终 向 火星 移民 , 把 它 变成 人类 的 第二 家园 。
Reference Therefore , the human beings hope that one day on the Mars to establish a base of residence , and ultimately to Mars immigration , it turned into a second home of mankind .
BeamSearch Therefore , human beings hope that one day they will establish a residence base on Mars and eventually emigrate to Mars , making it their second home . Therefore , the human race hopes one day to establish a residence base on Mars and eventually emigrate to Mars , making it the second home of the human race . Therefore , the human race hopes one day to establish a residence base on Mars and eventually emigrate to Mars , turning it into the second home of mankind . Therefore , human beings hope that one day they will establish a residence base on Mars and eventually emigrate to Mars , making it the second home of human beings . Therefore , human beings hope that one day they will establish a residence base on Mars and eventually emigrate to Mars , turning it into the second home of mankind .
MixDiversity (τ = 0.15) Therefore , man hopes one day to establish a residence base on Mars , and eventually emigrate to Mars and turn it into a second home . So humans hope to one day establish a residence base on Mars and eventually emigrate to Mars and turn it into a second home for humanity . So man wants one day to establish a residence base on Mars and eventually emigrate to Mars and make it his second home . So man hopes one day to build a living base on Mars and eventually emigrate to make it a second home for humanity . So man wants to be able to build a living base on Mars and eventually emigrate to Mars , turning it into a second home .
MixDiversity (τ = 0.35)
So . one day , humans want to build a living base on Mars and eventually emigrate to Mars and turn it into a second home . The human race , therefore , hopes that one day it will establish a residence base on Mars and eventually immigrate to Mars to make it a second home . So man hopes one day to establish a base on Mars and eventually emigrate to Mars and turn it into a second home for man . Thus , mankind hopes that one day it will establish a living base on Mars and eventually immigrate to Mars , becoming a second home for humanity . So man wants to be able to build a residence base on Mars and eventually emigrate to Mars , making it the second home of man .
D Evaluation Results of the BERT-Score
As aforementioned, reference BLEU and pairwise BLEU have been used to measure faithfulness and diversity in this work. However, BLEU simply counts n-gram overlap between the inference and the reference, which can not account for meaningpreserving lexical and compositional diversity, e.g., synonyms and paraphrases. In contrast, the BERT-Score (Zhang et al., 2020) seems to be a better measure, which computes a similarity score for each token in the inference sentence with each token in the reference sentence and correlates better with human judgments.
We apply the BERT-Score to evaluate the performance of different methods in WMT'14 en→de, as shown in Table 5. We adopt the average BERT-score with reference (denoted as rf-BERTscore) to measure the faithfulness and the average pairwise BERT-score among generated sentences (denoted as pw-BERTscore) to measure the diversity. At last, we calculate the EDA using the BERT-Score (denoted as EDA-BERTscore) by substituting the BLEU score with the BERT-Score. We can see that the MixDiversity (w/o Mixup training) gets the best pw-BERTscore and the best EDA-BERTscore.
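One possible way to compute rf-BERTscore and pw-BERTscore with the bert-score package is sketched below; the underlying model, language setting, and per-sentence averaging are assumptions rather than the paper's exact protocol:

import itertools
from bert_score import score

def rf_pw_bertscore(hypotheses, reference, lang="de"):
    # hypotheses: K translations of one source sentence; reference: its gold translation.
    _, _, f_rf = score(hypotheses, [reference] * len(hypotheses), lang=lang, verbose=False)
    pairs = list(itertools.permutations(hypotheses, 2))
    _, _, f_pw = score([h for h, _ in pairs], [r for _, r in pairs], lang=lang, verbose=False)
    return 100 * f_rf.mean().item(), 100 * f_pw.mean().item()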
Figure 1: Illustration of the proposed method, MixDiversity, which linearly interpolates the input sentence with various sentence pairs sampled from the training corpus so as to generate diverse translations.

Figure 2: Illustration of the trade-off between reference BLEU and pair-wise BLEU in WMT'14 en→de with different τ.

Figure 3: Illustration of the trade-off between reference BLEU and pair-wise BLEU in WMT'16 en→ro and WMT'17 zh→en with different τ.
Strategy    Baseline BLEU R
            en→ro    en→de    zh→en
Vanilla     32.80    27.43    24.07
Mixup       33.75    27.70    24.40

Table 1: The baseline BLEU of different training strategy in each dataset.
[Figure 2 plot: Pairwise-BLEU (x-axis) vs. BLEU (y-axis) curves for MixDiversity, MixDiversity (w/o Mixup Training), ConcreteDropout, BeamSearch, HardMoE, DiverseBS, and HeadSample on WMT'14 en→de.]
Table 2: The best result of each method on WMT'16 en→ro, WMT'14 en→de, and WMT'17 zh→en. For DiverseBS, HardMoE, and HeadSample, we select the result under the best settings described in their papers. For ConcreteDropout and MixDiversity, we validate the model under different hyper-parameter settings on the validation set to find the best settings for the model, and we report the result on the test set under the best settings. We get the best results of MixDiversity with τ = 0.3, 0.3, and 0.25 in en→ro, en→de and zh→en respectively. ⇑ means the higher, the better. ⇓ means the lower, the better.
Table 3: Ablation study on WMT'14 en→de.
Table 4: Results of the Back Translation and the Knowledge Distillation experiments on WMT'14 en→de.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv preprint, abs/1609.08144.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
[Figure 3 plots: (a) en→ro and (b) zh→en; Pairwise-BLEU (x-axis) vs. BLEU (y-axis) curves for MixDiversity, MixDiversity (w/o Mixup Training), ConcreteDropout, BeamSearch, HardMoE, DiverseBS, and HeadSample.]
Table 5: The evaluation result using BERT-Score in WMT'14 en→de. ⇑ means the higher, the better. ⇓ means the lower, the better.

                                    rf-BERTscore⇑   pw-BERTscore⇓   EDA-BERTscore⇓
Beam Search (BS)                    85.50           95.87           96.95
HeadSample (Sun et al., 2020)       84.99           96.29           97.45
ConcreteDropout (Wu et al., 2020)   84.93           95.52           96.69
MixDiversity (w/o Mixup Training)   84.61           92.26           93.53

Table 6: Example outputs of BeamSearch and MixDiversity in WMT'17 zh→en.
Acknowledgements
We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2017YFE0192900).
Convolutional sequence to sequence learning. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N Dauphin, PMLRProceedings of the 34th International Conference on Machine Learning. the 34th International Conference on Machine LearningSydney, NSW, Australia70Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1243-1252. PMLR.
Token-level adaptive training for neural machine translation. Shuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie Zhou, Dong Yu, 10.18653/v1/2020.emnlp-main.76Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsShuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie Zhou, and Dong Yu. 2020. Token-level adaptive training for neu- ral machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1035-1046, Online. Association for Computational Linguistics.
Sequence-level mixed sample data augmentation. Demi Guo, Yoon Kim, Alexander Rush, 10.18653/v1/2020.emnlp-main.447Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsDemi Guo, Yoon Kim, and Alexander Rush. 2020. Sequence-level mixed sample data augmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5547-5552, Online. Association for Computa- tional Linguistics.
Sequence to sequence mixture model for diverse machine translation. Xuanli He, Gholamreza Haffari, Mohammad Norouzi, 10.18653/v1/K18-1056Proceedings of the 22nd Conference on Computational Natural Language Learning. the 22nd Conference on Computational Natural Language LearningBrussels, BelgiumAssociation for Computational LinguisticsXuanli He, Gholamreza Haffari, and Moham- mad Norouzi. 2018. Sequence to sequence mixture model for diverse machine transla- tion. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 583-592, Brussels, Belgium. Association for Computational Linguistics.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, 3rd International Conference on Learning Representations. San Diego, CA, USAConference Track ProceedingsDiederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Deterministic non-autoregressive neural sequence modeling by iterative refinement. Jason Lee, Elman Mansimov, Kyunghyun Cho, 10.18653/v1/D18-1149Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsJason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182, Brussels, Belgium. Association for Computational Linguistics.
A simple, fast diverse decoding algorithm for neural generation. Jiwei Li, Will Monroe, Dan Jurafsky, abs/1611.08562ArXiv preprintJiwei Li, Will Monroe, and Dan Jurafsky. 2016. A sim- ple, fast diverse decoding algorithm for neural gen- eration. ArXiv preprint, abs/1611.08562.
Scaling neural machine translation. Myle Ott, Sergey Edunov, David Grangier, Michael Auli, 10.18653/v1/W18-6301Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersBrussels, BelgiumAssociation for Computational LinguisticsMyle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.
Edinburgh neural machine translation systems for WMT 16. Rico Sennrich, Barry Haddow, Alexandra Birch, 10.18653/v1/W16-2323Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational Linguistics2Shared Task PapersRico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation sys- tems for WMT 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 371-376, Berlin, Ger- many. Association for Computational Linguistics.
| [] |
[
"PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis",
"PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis"
] | [
"Heng Yang \nDepartment of Computer Science\nUniversity of Exeter\nEX4 4QFExeterUK\n",
"Chen Zhang czhang@bit.edu.cn \nSchool of Computer Science\nBeijing Institute of Technology\nBeijingChina\n",
"Ke Li k.li@exeter.ac.uk \nDepartment of Computer Science\nUniversity of Exeter\nEX4 4QFExeterUK\n"
] | [
"Department of Computer Science\nUniversity of Exeter\nEX4 4QFExeterUK",
"School of Computer Science\nBeijing Institute of Technology\nBeijingChina",
"Department of Computer Science\nUniversity of Exeter\nEX4 4QFExeterUK"
] | [] | The advancement of aspect-based sentiment analysis (ABSA) has highlighted the need for a user-friendly framework that can largely lower the difficulty of reproducing state-of-the-art ABSA performance, especially for beginners. To meet this demand, we present PyABSA, a modularized framework built on PyTorch for reproducible ABSA. To facilitate ABSA research, PyABSA supports several ABSA subtasks, including aspect term extraction, aspect sentiment classification, and end-to-end aspect-based sentiment analysis. Concretely, PyABSA integrates 29 models and 26 datasets. With just a few lines of code, the result of a model on a specific dataset can be reproduced. With a modularized design, PyABSA can also be flexibly extended to new models, datasets, and other related tasks. Besides, PyABSA highlights its data augmentation and annotation features, which significantly address data scarcity. All are welcome to have a try at https://github.com/yangheng95/PyABSA. | null | [
"https://export.arxiv.org/pdf/2208.01368v2.pdf"
] | 257,102,978 | 2208.01368 | 8d2cb017d386fcea848a663345402c89cdbe5a22 |
PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis
Heng Yang
Department of Computer Science
University of Exeter
EX4 4QFExeterUK
Chen Zhang czhang@bit.edu.cn
School of Computer Science
Beijing Institute of Technology
BeijingChina
Ke Li k.li@exeter.ac.uk
Department of Computer Science
University of Exeter
EX4 4QFExeterUK
PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis
The advancement of aspect-based sentiment analysis (ABSA) has highlighted the need for a user-friendly framework that can largely lower the difficulty of reproducing state-of-the-art ABSA performance, especially for beginners. To meet this demand, we present PyABSA, a modularized framework built on PyTorch for reproducible ABSA. To facilitate ABSA research, PyABSA supports several ABSA subtasks, including aspect term extraction, aspect sentiment classification, and end-to-end aspect-based sentiment analysis. Concretely, PyABSA integrates 29 models and 26 datasets. With just a few lines of code, the result of a model on a specific dataset can be reproduced. With a modularized design, PyABSA can also be flexibly extended to new models, datasets, and other related tasks. Besides, PyABSA highlights its data augmentation and annotation features, which significantly address data scarcity. All are welcome to have a try at https://github.com/yangheng95/PyABSA.
Introduction
Aspect-based sentiment analysis (ABSA) (Pontiki et al., 2014, 2015, 2016) has made remarkable strides in recent years, particularly in the subtasks of aspect term extraction (ATE) (Yin et al., 2016; Wang et al., 2016a; Li and Lam, 2017; Wang et al., 2017; Li et al., 2018b; Xu et al., 2018; Ma et al., 2019; Yang, 2019), aspect sentiment classification (ASC) (Ma et al., 2017; Zhang et al., 2019; Huang and Carley, 2019; Phan and Ogunbona, 2020; Zhao et al., 2020; Li et al., 2021a; Dai et al., 2021; Tian et al., 2021; Wang et al., 2021), and end-to-end aspect-based sentiment analysis (E2EABSA) (Yang et al., 2021b). In the example sentence "I love the pizza at this restaurant, but the service is terrible.", there are two aspects, "pizza" and "service", towards which the sentiments are positive and negative, respectively. Here, ATE aims to extract the two aspects, ASC aims to detect the corresponding sentiments given the aspects, and E2EABSA 1 aims to achieve the extraction and detection as one.
Although an enormous number of models have been proposed for ABSA, they typically have distinct architectures (e.g., LSTM, GCN, BERT) and optimizations (e.g., data pre-processing, evaluation metrics), making it hard to reproduce their reported results even when their code is released. To address this issue and promote fair comparison, we introduce PyABSA, a modularized framework built on PyTorch for reproducible ABSA. We provide a demonstration video 2 to show the basic usage of PyABSA.
PyABSA enables easy-to-use model training, evaluation, and inference on the aforementioned ABSA subtasks, with 29 models and 26 datasets supported. PyABSA allows beginners to reproduce the result of a model on a specific dataset with just a few lines of code. In addition to using PyABSA to reproduce results, we have also released a range of trained checkpoints, which can be accessed through the Transformers Model Hub 3 for users who need exact reproducibility.
Moreover, PyABSA is a framework with a modularized organization. Technically, PyABSA has five major modules: the template classes, the configuration manager, the dataset manager, the metric visualizer, and the checkpoint manager. Thus, the provided templates can be flexibly extended to new models, datasets, and other related tasks with minor modifications.
It is widely recognized that ABSA models suffer from the shortage of data and the absence of datasets in specific domains. Utilizing an ABSA-oriented data augmentor, we are able to provide up to 200K+ additional examples per dataset. The augmented datasets can improve the accuracy of models by 1-3%. To encourage the community to contribute custom datasets, we provide a data annotation interface.
It is noteworthy that there are existing projects that partly achieve goals similar to those of PyABSA. The advantages of PyABSA over these projects are as follows.
• PyABSA democratizes reproducible ABSA research by supporting a large array of models and datasets across the mainly concerned ABSA subtasks.
• PyABSA is a modularized framework that can be flexibly extended to new models, datasets, and other related tasks thanks to its organization.
• PyABSA additionally offers data augmentation and data annotation features to address the data scarcity in ABSA.
Supported Tasks
We primarily support three subtasks in ABSA, namely ATE, ASC, and E2EABSA. Each subtask contains its own models and datasets, which adds up to 29 models and 26 datasets in total.
Models & Datasets
The core difficulty in unifying different models into one framework is that they use distinct architectures and optimizations. We strive to bridge this gap in PyABSA, which has, to the best of our knowledge, the largest model pool, covering attention-based, graph-based, and BERT-based models, among others. The supported models are listed in Table 1.
PyABSA also gathers a wide variety of datasets across various domains and languages, including laptops, restaurants, MOOCs, Twitter, and others. As far as we know, PyABSA maintains the largest ever number of ABSA datasets, which can be viewed in Table 2.
With just a few lines of code, researchers and users can invoke these built-in models and datasets for their own purposes. An example training pipeline of ASC is given in Snippet 1.
Reproduction
We also present a preliminary performance overview of the models on the datasets provided in PyABSA. The results, which are based on ten epochs of training using the configurations for reproduction, can be found in Appendix B. The standard deviations of the results are attached in parentheses. We use the union of all datasets in PyABSA as the multilingual dataset. Please note that "-" in the results table means that the graph-based models are not applicable to those specific datasets. The checkpoints of these models are also offered for exact reproducibility. An example E2EABSA inference pipeline is given in Snippet 2.
Modularized Framework
The main design of PyABSA is shown in Figure 1, which includes five necessary modules. We start by exploring task instances, which are abstracted as template classes. Afterwards, we dive into other modules (i.e., configuration manager, dataset manager, metric visualizer, checkpoint manager), elaborating their roles in getting PyABSA modularized.
Template Classes
PyABSA streamlines the process of developing models for ABSA subtasks, with a range of templates (refer to the five template classes in Figure 1) that simplify the implementation of models and ease the customization of data.
We follow a software engineering design with common templates and interfaces, allowing users to define models with the model utilities, process data with the data utilities, train models with the trainers, and infer with the predictors. These can all be achieved simply by inheriting the templates without modifying the common modules. The inherited modules come with a uniform interface for all task-agnostic features.
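As a rough illustration of what a user-defined model plugged into these templates might look like, the sketch below implements a minimal aspect sentiment classifier as a plain PyTorch module; the class, its constructor arguments, and the registration comment at the end are illustrative assumptions rather than the exact PyABSA template interface.

import torch
import torch.nn as nn

class TinyASCModel(nn.Module):
    # a minimal bag-of-embeddings polarity classifier, used only to illustrate
    # the kind of nn.Module a PyABSA model template wraps (hypothetical example)
    def __init__(self, vocab_size=30000, embed_dim=128, num_polarities=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_polarities)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> mean-pooled embeddings -> polarity logits
        pooled = self.embedding(token_ids).mean(dim=1)
        return self.classifier(pooled)

# In PyABSA, a model is selected by assigning it to config.model before training
# (see Snippet 1); registering a custom class this way is an assumption about the
# extension point, not verified API usage.
# config.model = TinyASCModel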
Configuration Manager
Configuration manager handles environment configurations, model configurations, and hyperparameter settings. It extends the Python Namespace object to improve user-friendliness. Additionally, the configuration manager possesses a configuration checker to make sure that incorrect configurations do not pass the necessary sanity checks, helping users keep track of their training settings.
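A minimal sketch of how the configuration object is typically used is shown below: the getter call mirrors Snippet 1, while the specific hyperparameter attribute names (learning_rate, num_epoch, max_seq_len) are assumptions about commonly exposed options rather than a verified list.

from pyabsa import AspectSentimentClassification as ASC

config = ASC.ASCConfigManager.get_asc_config_multilingual()
# the config behaves like a Namespace, so settings are overridden by simple
# attribute assignment; the configuration checker validates them before training
config.learning_rate = 2e-5  # assumed attribute name
config.num_epoch = 10        # assumed attribute name
config.max_seq_len = 80      # assumed attribute name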
Dataset Manager
Dataset manager enables users to manage a wide range of built-in and custom datasets. Each dataset is assigned a unique ID and name for management, and the dataset items are designed as nested objects to enhance flexibility. This design makes it simple to combine datasets for ensemble learning and multilingual ABSA tasks. Moreover, the dataset manager seamlessly connects to the ABSA dataset hub, automatically downloading and managing the integrated datasets.
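Because dataset items are nested objects, combining corpora amounts to passing several items (or a list of them) where a single dataset would normally go; the sketch below uses dataset names from Table 2, though the exact attribute spellings on ABSADatasetList are assumptions.

from pyabsa import AspectSentimentClassification as ASC

# combine two English corpora for joint training; as described in the ensemble
# training paragraph, a list of datasets is handled automatically by the trainer
# (the attribute names Laptop14 and Restaurant14 are assumed spellings)
combined_datasets = [
    ASC.ABSADatasetList.Laptop14,
    ASC.ABSADatasetList.Restaurant14,
]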
Metric Visualizer
As a vital effort towards streamlined evaluation and fair comparisons, the metric visualizer 4 is developed for PyABSA to automatically record, manage, and visualize various metrics (such as accuracy, F-measure, STD, IQR, etc.). The metric visualizer can track metrics in real time or load saved metric records and produce box plots, violin plots, trajectory plots, Scott-Knott test plots, significance test results, etc. An example of auto-generated visualizations is shown in Figure 2, and more plots and experiment settings can be found in Appendix C. The metric visualizer streamlines the process of visualizing performance metrics and eliminates potential biases in metric statistics. 4 The metric visualizer was developed specifically for PyABSA and is available as an independent open-source project at:
https://github.com/yangheng95/metric-visualizer
Checkpoint Manager
Checkpoint manager manages the trained model checkpoints and interacts with the model hub. Users can easily query the available checkpoints for different ABSA subtasks and instantiate an inference model by specifying its checkpoint name. Checkpoints on the model hub can be queried with a few lines of code, as in Snippet 3, and an example of the available checkpoints is shown in Figure 4.
While connecting to the model hub is the most convenient way to get an inference model, we also provide two alternative ways:
• Searching for trained or cached checkpoints using keywords or paths through the checkpoint manager.
• Building inference models using trained models returned by the trainers, which eliminates the need for saving checkpoints to disk.
The checkpoint manager for any subtask is compatible with GloVe and pre-trained models based on Transformers, and with the help of PyABSA's interface, launching an ATESC service requires just a few lines of code.
Featured Functionalities
Data Augmentation
In ABSA, data scarcity can lead to inconsistencies in performance evaluation and difficulties with generalizing across domains. To address this issue, PyABSA has adopted an automatic text augmentation method, i.e., BoostAug. This method balances diversity and skewness in the distribution of augmented data. In our experiments, the text augmentation method significantly boosted the classification accuracy and F1 scores on all datasets and models, whereas previous text augmentation techniques had a negative impact on model performance. We refer readers to Yang and Li (2022) for a comprehensive overview of this text augmentation method.
Dataset Annotation
Annotating ABSA datasets is more difficult than annotating data for pure text classification. As there is no open-source tool available for annotating ABSA datasets, creating custom datasets becomes a critical challenge. In PyABSA, we address this by providing a manual annotation interface contributed by the community (see Figure 3), along with an automatic annotation interface.
Manual Annotation. To ensure accurate manual annotation, our contributor developed a specialized ASC annotation tool 5 for PyABSA. This tool runs in web browsers, making it easy for anyone to create their own datasets with just a web browser. The annotation tool outputs datasets for various ABSA subtasks, such as the ASC and ATESC subtasks, and we even provide an interface to help users convert datasets between different subtasks. The community-contributed manual dataset annotation tool is shown in Figure 3.
Automatic Annotation. To make manual annotation easier and address the issue of limited data, we offer an automatic annotation method in PyABSA. This interface is powered by a trained E2EABSA model and uses a hub-powered inference model to extract aspect terms and sentiment polarities. It enables users to quickly expand small datasets with annotated ABSA instances. The following example demonstrates the automatic annotation interface.
Snippet 4: The code snippet of automatic annotation.
from pyabsa import make_ABSA_dataset
# annotate "raw_data" using the "multilingual" ATESC model
make_ABSA_dataset(dataset_name_or_path='raw_data', checkpoint='multilingual')
Ensemble Training. In deep learning, model ensemble is a crucial technique, and it is common to enhance ABSA performance in real-world projects through model ensemble. To simplify the process for users, PyABSA provides easy-to-use model ensemble without any code changes. Furthermore, PyABSA offers convenient ensemble methods for users to effortlessly augment their training data using built-in datasets from the data center. For example, when PyABSA recognizes a model or dataset as a list, it will automatically perform ensemble. We showcase this simple ensemble method in Snippet 5.
Ensemble Inference. PyABSA includes an ensemble inference module for all subtasks, which enables users to aggregate the results of multiple models to produce a final prediction, thereby leveraging the strengths of each individual model and resulting in improved performance and robustness compared to using a single model alone. We provide an example of ensemble inference in Snippet 6.
5 https://github.com/yangheng95/ABSADatasets/DPT
Conclusions and Future Work
We present PyABSA, a modularized framework for reproducible ABSA. Our goal is to democratize the reproduction of ABSA models with a few lines of code and to provide the opportunity to implement new ideas with minimal modifications to our prototypes. Additionally, the framework comes equipped with powerful data augmentation and annotation features, largely addressing the data scarcity of ABSA. In the future, we plan to expand the framework to include other ABSA subtasks, such as aspect sentiment triplet extraction.
References
A Related Works
In recent years, many open-source models have been developed for aspect-based sentiment classification (ASC) (Li et al., 2021a;Tian et al., 2021;Li et al., 2021b;Wang et al., 2021) and aspect term extraction and sentiment classification (ATESC) (Li et al., 2018b;Xu et al., 2018;Ma et al., 2019;Yang, 2019;. However, the open-source repositories for these models often lack the capability to make predictions, and many are no longer being maintained. Two works similar to PyABSA are ABSA-PyTorch (Song et al., 2019) and Aspect-based Sentiment Analysis. ABSA-PyTorch combined multiple third-party models to facilitate fair comparisons of accuracy and F1, but it is now outdated and only supports the ASC task. Aspect-based Sentiment Analysis (Consultants, 2020) also handles ASC, but with limited models. PyABSA is a researchfriendly framework that supports multiple aspectbased sentiment analysis (ABSA) subtasks and includes multilingual, open-source ABSA datasets. The framework has instant inference interfaces for both aspect-based sentiment classification (ASC) and aspect-term extraction and sentiment classification (ATESC) subtasks, facilitating the implementation of multilingual ABSA services. PyABSA sets itself apart from other similar works, such as ABSA-PyTorch and Aspect-based Sentiment Analysis, by being actively maintained and supporting multiple ABSA subtasks.
B Model Evaluation
We present the experimental results of various models on different datasets, which may help users choose a suitable model for their projects. The results in parentheses are the standard deviations. These results were obtained through 10 epochs of training using the default settings. The multi-language dataset includes all the built-in datasets from PyABSA.
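For readers reproducing this reporting format, entries such as 84.57(0.44) are simply the mean and standard deviation of a metric over repeated runs; the following self-contained sketch (with made-up accuracy values) shows the aggregation.

import statistics

# made-up accuracies from three runs with different seeds
runs = [84.13, 85.01, 84.57]

mean = statistics.mean(runs)
std = statistics.stdev(runs)  # sample standard deviation
print(f"{mean:.2f}({std:.2f})")  # prints a mean(std) entry in the style of Table 3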
The absence of results for some datasets using syntax-based models is indicated by "-".
C Metric Visualization in PyABSA
ASC: al. (2018), ASGCN (Zhang et al., 2019), ATAE-LSTM (Wang et al., 2016b), Cabasc (Liu et al., 2018), IAN (Ma et al., 2017), LSTM-ASC (Hochreiter et al., 1997), MemNet (Tang et al., 2016b), MGAN (Fan et al., 2018), RAM (Chen et al., 2017), TC-LSTM (Tang et al., 2016a), TD-LSTM (Tang et al., 2016a), TNet-LF (Li et al., 2018a), BERT-ASC (Devlin et al., 2019), BERT-SPC (Devlin et al., 2019), DLCF-DCA (Xu et al., 2022), DLCFS-DCA (Xu et al., 2022), Fast-LCF-ASC (Zeng et al., 2019), Fast-LCFS-ASC (Zeng et al., 2019), LCA-BERT (Yang and Zeng, 2020), LCF-BERT (Zeng et al., 2019), LCFS-BERT (Zeng et al., 2019), Fast-LSA-T (Yang et al., 2021a), Fast-LSA-S (Yang et al., 2021a), Fast-LSA-P (Yang et al., 2021a)
ATE / E2E: BERT-ATESC (Devlin et al., 2019), Fast-LCF-ATESC (Yang et al., 2021b), Fast-LCFS-ATESC (Yang et al., 2021b), LCF-ATESC (Yang et al., 2021b), LCFS-ATESC (Yang et al., 2021b)
Figure 1: The left half of the diagram introduces the template classes provided in PyABSA. Typically, each ABSA subtask has 5 template classes that need to be instantiated, except for the augmenter, which is optional. The right side of the diagram shows the main framework of PyABSA. The lowest level is the data annotation, which is suitable for creating custom datasets, and the created datasets can be shared to the dataset hub. The three modules in the middle are the generic modules, which are suitable for training based on new datasets or models. The checkpoint manager is used to connect to the model hub and is responsible for uploading and downloading models and instantiating inference models.
Figure 2: The metrics summary and a part of the automatic visualizations processed by the metric visualizer in PyABSA. The experimental dataset is ARTS-Laptop14, an adversarial dataset for ASC.
Figure 3: The community-contributed manual dataset annotation tool provided for PyABSA.
Figure 4: A part of the available checkpoints for E2EABSA in PyABSA's model hub.
Figure 5: An example of automated metric visualizations of the Fast-LSA-T-V2 model grouped by metric names. These are visualization examples auto-generated by PyABSA; note that the metrics are not stable on small datasets.
Figure 6: The significance-level visualizations of the Fast-LSA-T-V2 model grouped by different maximum modeling lengths. The left is the Scott-Knott rank test plot, while the right is the A12 effect size plot.
Table 1: The prevalent models provided by PyABSA. ATE and E2EABSA share similar models. Note that the models based on BERT can be adapted to other pre-trained language models from HuggingFace Transformers.
Snippet 1: The code snippet of an ASC training pipeline.
from pyabsa import AspectSentimentClassification as ASC
config = ASC.ASCConfigManager.get_asc_config_multilingual()
config.model = ASC.ASCModelList.FAST_LSA_T_V2
datasets_path = ASC.ABSADatasetList.Multilingual
sent_classifier = Trainer(config=config,
                          dataset=datasets_path,
                          checkpoint_save_mode=1,  # save state_dict instead of model
                          auto_device=True,  # auto-select cuda device
                          load_aug=True,  # training using augmentation data
                          ).load_trained_model()
Table 2: A list of datasets in various languages presented in PyABSA, where the datasets marked with † are used for adversarial research. The additional training examples have been generated using our own ABSA automatic augmentation tool.
Dataset | Language | Training Set | Validation Set | Testing Set | Augmented Training Set | Source
Laptop14 | English | 2328 | 0 | 638 | 13325 | SemEval 2014
Restaurant14 | English | 3604 | 0 | 1120 | 19832 | SemEval 2014
Restaurant15 | English | 1200 | 0 | 539 | 7311 | SemEval 2015
Restaurant16 | English | 1744 | 0 | 614 | 10372 | SemEval 2016
Twitter | English | 5880 | 0 | 654 | 35227 | Dong et al. (2014)
MAMS | English | 11181 | 1332 | 1336 | 62665 | Jiang et al. (2019)
Television | English | 3647 | 0 | 915 | 25676 | Mukherjee et al. (2021)
T-shirt | English | 1834 | 0 | 465 | 15086 | Mukherjee et al. (2021)
Yelp | English | 808 | 0 | 245 | 2547 | WeiLi9811@GitHub
Phone | Chinese | 1740 | 0 | 647 | 0 | Peng et al. (2018)
Car | Chinese | 862 | 0 | 284 | 0 | Peng et al. (2018)
Notebook | Chinese | 464 | 0 | 154 | 0 | Peng et al. (2018)
Camera | Chinese | 1500 | 0 | 571 | 0 | Peng et al. (2018)
MOOC | Chinese | 1583 | 0 | 396 | 0 | jmc-123@GitHub
Shampoo | Chinese | 6810 | 0 | 915 | 0 | brightgems@GitHub
MOOC-En | English | 1492 | 0 | 459 | 10562 | aparnavalli@GitHub
Arabic | Arabic | 9620 | 0 | 2372 | 0 | SemEval 2016
Dutch | Dutch | 1283 | 0 | 394 | 0 | SemEval 2016
Spanish | Spanish | 1928 | 0 | 731 | 0 | SemEval 2016
Turkish | Turkish | 1385 | 0 | 146 | 0 | SemEval 2016
Russian | Russian | 3157 | 0 | 969 | 0 | SemEval 2016
French | French | 1769 | 0 | 718 | 0 | SemEval 2016
ARTS-Laptop14 † | English | 2328 | 638 | 1877 | 13325 | Xing et al. (2020)
ARTS-Restaurant14 † | English | 3604 | 1120 | 3448 | 19832 | Xing et al. (2020)
Kaggle † | English | 3376 | 0 | 866 | 0 | Khandeka@Kaggle
Chinese-Restaurant † | Chinese | 26119 | 3638 | 7508 | 0 | Zhang et al. (2022)
Snippet 2: The code snippet of an E2EABSA inference
pipeline.
from pyabsa import AspectTermExtraction as ATE
aspect_extractor = ATE.AspectExtractor(
"multilingual",
data_num=100,
)
# simple inference
examples = [
"But the staff was so nice to us .",
"But the staff was so horrible to us .",
]
result = aspect_extractor.predict(
example=examples,
print_result=True, # print results in console
ignore_error=True, # ignore an invalid input
eval_batch_size=32, # set batch size
)
# batch inference
atepc_result = aspect_extractor.batch_predict(
inference_source,
save_result=False,
print_result=True,
pred_sentiment=True,
eval_batch_size=32,
)
Snippet 3: The code snippet of available checkpoints.
from pyabsa import available_checkpoints
from pyabsa import TaskCodeOption
checkpoint_map = available_checkpoints(
# the code of ASC
TaskCodeOption.Aspect_Polarity_Classification,
show_ckpts=True
)
Snippet 6: The code snippet of a model ensemble in PyABSA.
from pyabsa.utils import VoteEnsemblePredictor
checkpoints = {
    ckpt: APC.SentimentClassifier(checkpoint=ckpt)
    # use the findfile module to search all available checkpoints
    for ckpt in findfile.find_cwd_dirs(or_key=["laptop14"])
}
ensemble_predictor = VoteEnsemblePredictor(
    checkpoints, weights=None, numeric_agg="mean", str_agg="max_vote"
)
ensemble_predictor.predict("The [B-ASP]food[E-ASP] was good!")
C.1 Code for Auto-metric Visualization
PyABSA provides standardised methods for monitoring metrics and metric visualisations. PyABSA will automatically generate trajectory plots, box plots, violin plots, and bar charts based on the recorded metrics to evaluate the performance differences across models, etc. This example aims at evaluating the influence of the maximum modeling length, as a hyperparameter, on the performance of the FAST-LSA-T-V2 model on the Laptop14 dataset.
from pyabsa import AspectSentimentClassification as ASC
config = ASC.ASCConfigManager.get_config_english()
config.model = ASC.ASCModelList.FAST_LSA_T_V2
config.lcf = 'cdw'
# each trial repeats with a different seed
config.seed = [random.randint(0, 10000) for _ in range(3)]
import random
import os
from metric_visualizer import MetricVisualizer
Table 3 :
3The evaluation of the performance of the ASC and ATESC models on the datasets available in PyABSA.
Model ASC 84.57(0.44) 81.98(0.22) 95.27(0.29) 94.10(0.40) 95.27(0.29) 94.10(0.40) 89.12(0.21) 74.64(0.68) 86.29(0.51) 68.64(0.26) 86.14(0.63) 75.59(0.45) 85.27(0.34) 65.58(0.07) 87.62(77.13) 77.13(0.08) 87.19(0.36) 80.93(0.21) DLCF-DCA 84.05(0.06) 81.03(1.05) 95.69(0.22) 94.74(0.30) 89.23(0.15) 75.68(0.01) 86.93(0.38) 72.81(1.57) 86.42(0.49) 76.29(0.10) 90.42(0.0) 73.69(0.82) 85.96(1.71) 67.59(1.61) 87.00(0.41) 74.88(0.41) 87.80(0.01) 81.62(0.20) Fast-LCF-ASC 84.70(0.05) 82.00(0.08) 95.98(0.02) 95.01(0.05) 89.82(0.06) 77.68(0.33) 86.42(0.13) 71.36(0.53) 86.35(0.28) 75.10(0.14) 91.59(0.21) 72.31(0.26) 86.64(1.71) 67.00(1.63) 87.77(0.15) 74.17(0.47) 87.66(0.15) 81.34(0.25) Fast-LCFS-ASC 84.27(0.09) 81.60(0.17) 95.67(0.32) 94.40(0.55) LCF-BERT 84.81(0.29) 82.06(0.06) 96.30(0.05) 95.45(0.05) 89.80(0.13) 77.60(0.44) 86.55(0.76) 70.67(0.41) 85.52(0.42) 74.03(0.75) 91.86(0.21) 75.26(0.37) 89.73(0.05) 68.57(1.09) 87.41(0.21) 74.71(0.17) 87.86(0.09) 82.01(0.46) LCFS-BERT 84.49(0.13) 81.46(0.05) 95.32(0.39) 94.23(0.56) 88.89(0.11) 75.41(0.37) 87.94(0.43) 72.69(1.01) 84.61(0.21) 71.98(1.25) 90.83(0.41) 73.87(1.45) 88.36(0.68) 69.21(0.86) 87.15(0.15) 74.99(0.44) 87.55(0.22) 81.58(0.13) Fast-LSA-T 84.60(0.29) 81.77(0.44) 96.05(0.05) 95.10(0.05) 89.25(0.38) 77.25(0.43) 86.04(0.0) 70.02(0.75) 86.07(0.14) 73.52(0.53) 91.93(0.27) 74.21(0.60) 88.01(1.03) 66.74(0.61) 88.24(0.10) 76.91(1.10) 87.56(0.13) 81.01(0.56)Task
English
Chinese
Arabic
Dutch
Spanish
French
Turkish
Russian
Multilingual
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
AccASC
F1ASC
BERT-SPC
-
-
-
-
-
-
-
-
-
-
-
-
-
-
Acknowledgements
We appreciate all contributors who help PyABSA, e.g., by committing code or datasets; the community's support makes PyABSA even better. Furthermore, we appreciate all ABSA researchers for their open-source models that improve ABSA.
Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, and Yi Chang. 2021. Eliminating sentiment bias for aspect-level sentiment classification with unsupervised opinion extraction. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3002-3012. Association for Computational Linguistics.
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 616-626. The Association for Computational Linguistics.
Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3316-3322. AAAI Press.
Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016b. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 606-615. The Association for Computational Linguistics.
Xiaoyu Xing, Zhijing Jin, Di Jin, Bingning Wang, Qi Zhang, and Xuanjing Huang. 2020. Tasty burgers, soggy fries: Probing aspect robustness in aspect-based sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 3594-3605. Association for Computational Linguistics.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. Double embeddings and CNN-based sequence labeling for aspect extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Volume 2: Short Papers, pages 592-598. Association for Computational Linguistics.
Mayi Xu, Biqing Zeng, Heng Yang, Junlong Chi, Jiatao Chen, and Hongye Liu. 2022. Combining dynamic local context focus and dependency cluster attention for aspect-level sentiment classification. Neurocomputing, 478:49-69.
Heng Yang. 2019. PyABSA: Open Framework for Aspect-based Sentiment Analysis.
Heng Yang and Ke Li. 2022. Augmentor or filter? Reconsider the role of pre-trained language model in text classification augmentation. CoRR, abs/2210.02941.
Heng Yang and Biqing Zeng. 2020. Enhancing fine-grained sentiment classification exploiting local context embedding. CoRR, abs/2010.00767.
Heng Yang, Biqing Zeng, Mayi Xu, and Tianxing Wang. 2021a. Back to reality: Leveraging pattern-driven modeling to enable affordable sentiment dependency learning. CoRR, abs/2110.08604.
Heng Yang, Biqing Zeng, Jianhao Yang, Youwei Song, and Ruyang Xu. 2021b. A multi-task learning model for Chinese-oriented aspect polarity classification and aspect term extraction. Neurocomputing, 419:344-356.
Yunyi Yang, Kun Li, Xiaojun Quan, Weizhou Shen, and Qinliang Su. 2020. Constituency lattice encoding for aspect term extraction. In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020), pages 844-855. International Committee on Computational Linguistics.
Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI 2016), pages 2979-2985. IJCAI/AAAI Press.
Biqing Zeng, Heng Yang, Ruyang Xu, Wu Zhou, and Xuli Han. 2019. LCF: A local context focus mechanism for aspect-based sentiment classification. Applied Sciences, 9(16):3389.
Chen Zhang, Qiuchi Li, and Dawei Song. 2019. Aspect-based sentiment classification with aspect-specific graph convolutional networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pages 4567-4577. Association for Computational Linguistics.
Chen Zhang, Lei Ren, Fang Ma, Jingang Wang, Wei Wu, and Dawei Song. 2022. Structural bias for aspect sentiment triplet extraction. In Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022), pages 6736-6745. International Committee on Computational Linguistics.
Pinlong Zhao, Linlin Hou, and Ou Wu. 2020. Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification. Knowledge-Based Systems, 193:105443.
| [
"https://github.com/yangheng95/",
"https://github.com/yangheng95/ABSADatasets/DPT"
] |
[
"Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots ACM Reference Format",
"Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots ACM Reference Format"
] | [
"Juntao Li lijuntao@pku.edu.cn ",
"Chang Liu changliu@pku.edu.cn ",
"Rui Yan ruiyan@ruc.edu.cn. ",
"Min Zhang minzhang@suda.edu.cn ",
"Rui Yan ",
"Juntao Li ",
"Chang Liu liuchang97@pku.edu.cn ",
"Juntao Li ",
"Chang Liu ",
"Chongyang Tao chongyangtao@pku.edu.cn ",
"Zhangming Chan zhangming.chan@pku.edu.cn ",
"Dongyan Zhao zhaody@pku.edu.cn ",
"Min Zhang ",
"Rui Yan ",
"\nWangxuan Institute of Computer Technology and Center for Data Science\nCHONGYANG TAO\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nChina\n",
"\nZHANGMING CHAN\nWangxuan Institute of Computer Technology\nPeking University\nChina\n",
"\nDONGYAN ZHAO †\nWangxuan Institute of Computer Technology\nPeking University\nChina\n",
"\nMIN ZHANG\nWangxuan Institute of Computer Technology\nPeking University\nChina\n",
"\nGaoling School of Artificial Intelligence\nSoochow University\nChina\n",
"\nWangxuan Institute of Computer Technology and Center for Data Science, Academy for Advanced Interdisciplinary Studies\nRenmin University of China\nChina\n",
"\nInstitute of Computer Technology\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing Shi, China; Chongyang Tao, Wangxuan\n",
"\nWangxuan Institute of Computer Technology\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina\n",
"\nWangxuan Institute of Computer Technology\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina\n",
"\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina\n",
"\nGaoling School of Artificial Intelligence\nSoochow University\n#1 Shizi Rd, Jiang SuSuzhouChina\n",
"\nRenmin University of China\n#59 Zhongguancun Rd, Haidian Qu, Beijing ShiChina\n"
] | [
"Wangxuan Institute of Computer Technology and Center for Data Science\nCHONGYANG TAO\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nChina",
"ZHANGMING CHAN\nWangxuan Institute of Computer Technology\nPeking University\nChina",
"DONGYAN ZHAO †\nWangxuan Institute of Computer Technology\nPeking University\nChina",
"MIN ZHANG\nWangxuan Institute of Computer Technology\nPeking University\nChina",
"Gaoling School of Artificial Intelligence\nSoochow University\nChina",
"Wangxuan Institute of Computer Technology and Center for Data Science, Academy for Advanced Interdisciplinary Studies\nRenmin University of China\nChina",
"Institute of Computer Technology\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing Shi, China; Chongyang Tao, Wangxuan",
"Wangxuan Institute of Computer Technology\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina",
"Wangxuan Institute of Computer Technology\nPeking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina",
"Peking University\n#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina",
"Gaoling School of Artificial Intelligence\nSoochow University\n#1 Shizi Rd, Jiang SuSuzhouChina",
"Renmin University of China\n#59 Zhongguancun Rd, Haidian Qu, Beijing ShiChina"
] | [
"J. ACM"
] | Existing multi-turn context-response matching methods mainly concentrate on obtaining multi-level and multi-dimension representations and better interactions between context utterances and response. However, in real-place conversation scenarios, whether a response candidate is suitable not only counts on the given dialogue context but also other backgrounds, e.g., wording habits, user-specific dialogue history content. To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN). Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information; 2) we perform hybrid representation learning on context-response utterances and explicitly incorporate a customized attention mechanism to extract vital information from context-response interactions so as to improve the accuracy of matching. We evaluate our model on two large datasets with user identification, i.e., personalized Ubuntu dialogue Corpus (P-Ubuntu) and personalized Weibo dataset (P-Weibo). Experimental results confirm that our method significantly outperforms several strong models by combining personalized attention, wording behaviors, and hybrid representation learning. | 10.1145/3453183 | [
"https://arxiv.org/pdf/2103.09534v1.pdf"
] | 232,257,956 | 2103.09534 | 009b699bac018f8c1a5d0c8691e650fc79ba4741 |
Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots ACM Reference Format
17 Mar 2021
Juntao Li lijuntao@pku.edu.cn
Chang Liu changliu@pku.edu.cn
Rui Yan ruiyan@ruc.edu.cn.
Min Zhang minzhang@suda.edu.cn
Rui Yan
Juntao Li
Chang Liu liuchang97@pku.edu.cn
Juntao Li
Chang Liu
Chongyang Tao chongyangtao@pku.edu.cn
Zhangming Chan zhangming.chan@pku.edu.cn
Dongyan Zhao zhaody@pku.edu.cn
Min Zhang
Rui Yan
Wangxuan Institute of Computer Technology and Center for Data Science
CHONGYANG TAO
Academy for Advanced Interdisciplinary Studies
Peking University
China
ZHANGMING CHAN
Wangxuan Institute of Computer Technology
Peking University
China
DONGYAN ZHAO †
Wangxuan Institute of Computer Technology
Peking University
China
MIN ZHANG
Wangxuan Institute of Computer Technology
Peking University
China
Gaoling School of Artificial Intelligence
Soochow University
China
Wangxuan Institute of Computer Technology and Center for Data Science, Academy for Advanced Interdisciplinary Studies
Renmin University of China
China
Institute of Computer Technology
Peking University
#5 Yiheyuan Rd, Haidian Qu, Beijing Shi, China; Chongyang Tao, Wangxuan
Wangxuan Institute of Computer Technology
Peking University
#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina
Wangxuan Institute of Computer Technology
Peking University
#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina
Peking University
#5 Yiheyuan Rd, Haidian Qu, Beijing ShiChina
Gaoling School of Artificial Intelligence
Soochow University
#1 Shizi Rd, Jiang SuSuzhouChina
Renmin University of China
#59 Zhongguancun Rd, Haidian Qu, Beijing ShiChina
Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots ACM Reference Format
J. ACM
17 Mar 2021. Publication date: March 2021.
CCS Concepts: • Computing methodologies → Discourse, dialogue and pragmatics
Additional Key Words and Phrases: Open-domain dialogue system, Dialogue history modeling, Personalized ranking, Retrieval-based chatbot, Semantic matching, Hybrid representation learning
* Equal contribution. Ordering is decided by a coin flip.
† Corresponding author
Existing multi-turn context-response matching methods mainly concentrate on obtaining multi-level and multi-dimension representations and better interactions between context utterances and response. However, in real-place conversation scenarios, whether a response candidate is suitable not only counts on the given dialogue context but also other backgrounds, e.g., wording habits, user-specific dialogue history content. To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN). Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information; 2) we perform hybrid representation learning on context-response utterances and explicitly incorporate a customized attention mechanism to extract vital information from context-response interactions so as to improve the accuracy of matching. We evaluate our model on two large datasets with user identification, i.e., personalized Ubuntu dialogue Corpus (P-Ubuntu) and personalized Weibo dataset (P-Weibo). Experimental results confirm that our method significantly outperforms several strong models by combining personalized attention, wording behaviors, and hybrid representation learning.
INTRODUCTION
Dialogue systems have received a considerable amount of attention from academic researchers and have achieved remarkable success in a myriad of industry scenarios, such as in chit-chat machines [46], information seeking and searching [1,12], and intelligent assistants [22]. From the perspective of domains involved in previous studies, existing studies can be categorized into two groups, i.e., domain-specific and open-domain. Domain-specific models generally pursue to solve and complete one specific target (e.g., restaurant reservation [21], train routing [8]), which always involve domain knowledge and engineering. Unlike domain-specific studies, open-domain dialogues between human and machine involve unlimited topics within a conversation [40], as a result of which building an open-domain dialogue system is more challenging with the lack of enough knowledge engineering. Benefiting from the explosion of available dialogue datasets, constructing open-domain dialogue systems has attracted a growing number of researchers. Among dialogue systems in open-domain, generation-based [45,49,54] and retrieval-based [55,60,61] methods are the mainstreams in both academia and industry, where generation methods learn to create a feasible response for a user-issued query while retrieval-based methods extract a proper response from a candidate pool. In contrast to the "common response" 1 created by generation models [23], retrieval-based methods can extract fluent and informative responses from human conversations [50]. Early retrieval-based methods mainly address the issue of single-turn response selection, where the dialogue context only contains one utterance [56]. Recent studies focus on modeling multi-turn response selection [32].
For multi-turn response selection, a dialogue system is required to properly calibrate the matching degree between a multi-turn dialogue context and a given response candidate. The response selection task thus can be naturally transformed to learning the matching degrees between the semantic representations and the dependency relationships between context and response candidates. SMN (sequential matching network) attempts to learn fine-grained (e.g., word-level, sentence-level) semantic matching information between each utterance in context and the response candidate and aggregate the matching information to calculate the final matching results. MIX (multi-channel information crossing) [5] models the matching degrees between context and response from the perspective of interaction representations [50] to extract multi-channel matching patterns and information. Deep attention matching network (DAM) [74] captures sophisticated dependency information within and across utterances, i.e., using a self-attention mechanism and a cross-attention strategy to learn the representations of context and response candidate. Although these methods have achieved promising results, there is still room for improving their capability of context utterance modeling, such as combining dependency relationships and multi-channel interaction representations [5,50]. Besides, existing models are trapped into learning matching signals from context and response, leaving the introduction of extra information unexplored. Table 1 illustrates a sampled case in our experiments. We can observe that the word "xfce4" is the crucial clue for selecting the target response. However, other words are likely to overwhelm the word "xfce4", leading to unsatisfactory performance as it appears only once in the context. If we exploit the information in history-response matching, i.e., the high-frequency word "xfce4", the performance of response selection is expected to improve.
More concretely, we perform the personalized attention upon the learned hybrid representation and then utilize a gate mechanism to fuse the wording behavior matching information and weighted hybrid context-response matching information. Hence, our model is effective for both context modeling and dialogue history exploiting.
We conduct experiments on two challenging datasets for personalized multi-turn response retrieval, i.e., personalized Ubuntu dialogue corpus (P-Ubuntu) and personalized Weibo dataset (P-Weibo), to evaluate the effectiveness of our proposed PHMN model, where there exist user ids in both datasets. Experimental results confirm that our model achieves state-of-the-art performance on the newly created corpora. Through introducing personalized wording behaviors and personalized attention, our model yields a significant improvement over several strong baseline models, which suggests that introducing user-specific dialogue history and learning hybrid representations are appealing for multi-turn response retrieval.
PRELIMINARIES
Deep Matching Network
Generally, the recent effective deep matching networks for multi-turn response retrieval or short text matching consist of four elements: representations learning, dependency modeling, matching, aggregation and fusion.
Representation Learning. Most studies for multi-turn response selection first transform context utterances and response candidates to either vector representations [15] or interaction matrices [11,13,37] for convenient matching calculation. For vector representations learning, various deep neural networks are designed for learning multi-level and multi-dimension semantic information from conversation utterances, including CNN-based [18,19], RNN-based [25,31], and tree-RNNbased methods [16,47]. As to interaction-based representation learning methods, they first generate an interaction matrix for each utterance pair between context utterances and response candidates. Then, direct matching features such as the degree and structure of matching are captured by a deep neural network [6,64].
Dependency Modeling. Besides the semantic representations and matching structures in the interaction-based method, there exist sophisticated dependency information and reference relations within utterances and across utterances. Benefited from the great success of the Transformer on neural machine translation, various attention-based methods are proposed to capture the dependency structure and information from different levels. DAM [74] leverages a novel attentionbased multi-turn response selection framework for learning various dependency information and achieves very competitive performance. DAM borrows the self-attention strategy from Transformer for capturing word-level intra-utterance dependency and sentence-level representations and uses a cross-attention mechanism to capture dependency (e.g., reference relations) between those latently matched segment pairs.
Matching. After obtaining utterance representations at each level of granularity, the matching relations between two segments are calculated. According to the information in the utterance representations, semantic matching methods and structure matching approaches are designed to calibrate the matching degree between two representations. To date, various matching degree calculation methods have been investigated, e.g., using the Euclidean distance between two vectors as the matching degree, performing cosine similarity calculation, or computing element-wise dot products. Based on the information in vector representations (semantic or structural) and different matching degree calibration strategies, an effective matching method can be designed for comprehensively computing the matching degree between two utterances.
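As a concrete illustration of these matching calculations, the sketch below builds a word-by-word interaction matrix between an utterance and a response candidate from their embedding matrices, using both dot-product and cosine similarity; the tensor sizes are illustrative only.

import torch
import torch.nn.functional as F

# embedding matrices: an utterance with 10 words and a response with 8 words, dim 64
utterance = torch.randn(10, 64)
response = torch.randn(8, 64)

# dot-product interaction matrix: one similarity score per word pair
dot_matrix = utterance @ response.t()  # shape (10, 8)

# cosine-similarity interaction matrix
cos_matrix = F.normalize(utterance, dim=-1) @ F.normalize(response, dim=-1).t()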
Aggregation and Fusion. After calculating the matching degree between context and response at each level of granularity, a typical deep matching network contains an aggregation or fusion module for learning the final matching score. SMN [61] proposes to use RNN to sequentially accumulate the matching degree of each utterance-response pair and further compute the matching score between the context and the response. As utterances relationships within a dialogue context have effects on the calculation of the final matching score, DUA [71] refines the utterance representations with gated self-attention and further aggregates this information into a matching score. DAM [74] aggregates all the matching degrees of segments across each utterance and response into a 3D tensor and then leverages two-layered 3D convolutions with max-pooling operations to fuse the matching degree information and compute the final matching score.
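A minimal sketch of the RNN-style aggregation described for SMN: each utterance-response pair yields a matching vector, a GRU accumulates these vectors across turns, and a linear layer maps the last hidden state to a matching score. The sizes and the single-layer GRU are illustrative choices, not the exact configuration of any cited model.

import torch
import torch.nn as nn

num_turns, match_dim = 5, 50
# one matching vector per (utterance, response) pair, with batch size 2
matching_vectors = torch.randn(2, num_turns, match_dim)

gru = nn.GRU(match_dim, hidden_size=64, batch_first=True)
scorer = nn.Linear(64, 1)

_, last_hidden = gru(matching_vectors)           # (1, batch, 64)
matching_score = scorer(last_hidden.squeeze(0))  # (batch, 1)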
Problem Formulation
We follow the conventional settings in previous multi-turn response retrieval works [50, 51] and introduce the following necessary notations to formulate the personalized multi-turn response retrieval task. A dataset with user dialogue history content D = {(c_i, r_i, h_i, y_i)}_{i=1}^{N} is first given, where c_i, r_i, h_i, y_i represent the dialogue context, the response candidate, the dialogue history, and the corresponding binary label of the response candidate, respectively. Note that we treat the user's dialogue history utterances as extra information for building a personalized multi-turn dialogue response retrieval model. For the sake of clarity, we omit the subscript i, which denotes the case index in D, when elaborating the details of our model. Herein, the dialogue context is represented as c = (u_1, u_2, ..., u_j, ..., u_{n_c}), where u_j represents an utterance with length L_{u_j} in the j-th turn of the dialogue context and there are n_c utterances in the dialogue context. Similarly, there are n_h history utterances of the current user who is supposed to raise a response for the given dialogue context, which is denoted as h = (h_1, h_2, ..., h_k, ..., h_{n_h}), where h_k represents an utterance with length L_{h_k}. L_r denotes the number of words in a candidate response r. y = 1 means the given response candidate is proper for the context and the corresponding user dialogue history; otherwise y = 0. Then, our task is defined as learning a matching function g(·) from the given dataset that can yield a matching score between the dialogue context and the given response candidate with the help of the user dialogue history.
MODEL
Inspired by the advanced deep multi-turn dialogue response selection frameworks mentioned above, we design our model from two directions, i.e., obtaining more comprehensive information from the context and response, and introducing auxiliary information other than the context and response. We propose a personalized hybrid matching network (PHMN) for multi-turn response selection, which incorporates hybrid representations of context and response (i.e., semantic matching, interaction-based features, and dependency relation information) and personalized dialogue content (i.e., user-specific dialogue history). As shown in Figure 1, our proposed PHMN comprises three main sub-modules, i.e., a hybrid representation learning module, personalized dialogue content modeling, and aggregation and fusion.
Hybrid Representations Learning
We consider obtaining semantic representations of context and response at two different levels, i.e., word-level and phrase-level. Concretely, we adopt word embeddings as the word-level representations and the combination of uni-gram, bi-gram, tri-gram semantic information as phrase representations. We also borrow the strategy of self-attention from the Transformer [53] and DAM [74] to learn abundant dependency relationships in conversations. To capture the matching structure and patterns, we transform the semantic representations of context and response to interaction matrices. Details of learning word representation, phrase representation, dependency representation and constructing interaction matrices are elaborated as follows:
Word Representations. We use word embeddings as word-level representations insomuch as they contain rich semantic and co-occurrence information. In learning, we initialize word embeddings with Word2Vec pre-trained on each benchmark dataset, i.e., the P-Ubuntu dialogue corpus in English and the P-Weibo dataset in Chinese. For both datasets, the dimension of the word embeddings is d. Note that any proper word embedding learning algorithm and pre-trained results are applicable, such as BERT [7]. Thus, the word-level representation of an utterance u is U = [e_{u,1}, e_{u,2}, ..., e_{u,i}, ..., e_{u,L_u}] ∈ R^{L_u × d}; similarly, a response candidate can be written as R = [e_{r,1}, e_{r,2}, ..., e_{r,i}, ..., e_{r,L_r}] ∈ R^{L_r × d}. The dimensions of e_{u,i} and e_{r,i} are both d.
Phrase Representations. In an actual situation, obtaining semantic representations solely based on word representations is risky, as the semantic assembly patterns of words differ from each other. For instance, "all in" and "in all" have totally different semantic information, while "work hard" and "hard work" deliver the same semantic content. We consider modeling the semantic assembly patterns with a convolutional neural network. In both English and Chinese, the minimal semantic unit typically includes 1 to 3 words [5]. As a result, we conduct convolution operations upon the word embedding representations with different window sizes to capture uni-gram, bi-gram, and tri-gram information. Concretely, we conduct 1-D convolution on the word embeddings of a given utterance U = [e_{u,1}, e_{u,2}, ..., e_{u,i}, ..., e_{u,L_u}] with window size l from 1 to 3, where there are d filters for each window size and the stride length is 1. The l-gram phrase representation at the i-th location is calculated as:

o_i^l = f(W_l Z_i^l + b_l)    (1)

where W_l and b_l are trainable parameters of the convolutional filter with window size l, f(·) is a nonlinear activation function, and Z_i^l ∈ R^{l × d} stands for the input unigram embeddings in the current sliding window, which is formulated as:

Z_i^l = [e_{i − ⌊(l−1)/2⌋}, ..., e_i, ..., e_{i + ⌊l/2⌋}]    (2)

where e_i is the word embedding representation of a word in either the dialogue context or the response (i.e., it can be either e_{u,i} or e_{r,i}). Here we set the number of filters the same as the embedding dimension d. The output sequence of vectors of the convolution has the same length as the input sequence of vectors by utilizing the zero-padding strategy. Thus, a given utterance is transformed into three matrices, i.e., U^1 = [o_1^1, o_2^1, ..., o_{L_u}^1], U^2 = [o_1^2, o_2^2, ..., o_{L_u}^2], and U^3 = [o_1^3, o_2^3, ..., o_{L_u}^3]. U^1, U^2, and U^3 correspond to {1, 2, 3}-gram semantic information, respectively. Similarly, we also conduct 1-D convolution on a given response R = [e_{r,1}, e_{r,2}, ..., e_{r,i}, ..., e_{r,L_r}] using the same convolutional filters, which outputs three matrices R^1, R^2, and R^3.
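As a concrete illustration of the {1, 2, 3}-gram construction above, the following PyTorch sketch applies three 1-D convolutions with zero padding so that the output length equals the input length; the layer sizes, the ReLU activation, and all variable names are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhraseEncoder(nn.Module):
    """Uni-/bi-/tri-gram phrase representations via 1-D convolutions (sketch of Eqs. 1-2)."""
    def __init__(self, dim=200):
        super().__init__()
        # one Conv1d per window size l in {1, 2, 3}; d filters each, stride 1
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=l) for l in (1, 2, 3)]
        )

    def forward(self, x):
        # x: (batch, seq_len, dim) word embeddings of an utterance or a response
        x = x.transpose(1, 2)                          # Conv1d expects (batch, dim, seq_len)
        outputs = []
        for l, conv in zip((1, 2, 3), self.convs):
            # zero-pad so the output sequence keeps the input length (window of Eq. 2)
            left, right = (l - 1) // 2, l // 2
            h = conv(F.pad(x, (left, right)))
            outputs.append(F.relu(h).transpose(1, 2))  # back to (batch, seq_len, dim)
        return outputs                                 # [U^1, U^2, U^3]

# toy usage
enc = PhraseEncoder(dim=200)
u = torch.randn(2, 50, 200)                            # a batch of two utterances
u1, u2, u3 = enc(u)
print(u1.shape, u2.shape, u3.shape)                    # each: torch.Size([2, 50, 200])
```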
Dependency Representations. To obtain the sophisticated dependency representations in conversations, we utilize an attentive module that is similar to the attention module in the Transformer [53] and DAM. The attentive module takes three sentences as input, namely the query sentence, the key sentence, and the value sentence, which are denoted as Q = [e_i]_{i=0}^{n_Q − 1}, K = [e_i]_{i=0}^{n_K − 1}, and V = [e_i]_{i=0}^{n_V − 1}, respectively, where n_Q, n_K, n_V represent the numbers of words in each sentence, n_K = n_V, and e_i is the d-dimensional word embedding representation of a word. The attentive module first uses each word in the query sentence to attend to each word in the key sentence through the scaled dot-product attention mechanism. Then, the obtained attention score is applied to the value sentence V to form a new representation of Q, which is formulated as follows:
Attention(Q, K, V) = softmax(Q K^T / √d) V    (3)
In practice, the key sentence and the value sentence are identical, i.e., K = V. Thus, each word in the query sentence Q is represented by the joint meaning of its similar words in V. We dispense h heads to Q, K, V to capture dependency information from multiple aspects via the scaled dot-product multi-head attention. The output of head i is then written as:
O_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)    (4)
where W_i^Q, W_i^K, W_i^V ∈ R^{d × (d/h)} are trainable parameters for linear transformations. The outputs of all heads are concatenated to obtain the attention representation, formulated as:
O = (O_1 ⊕ O_2 ⊕ ... ⊕ O_h) W^O    (5)
where ⊕ represents the column-wise concatenation operation and W^O ∈ R^{d × d} is trainable. We then apply a layer normalization operation to prevent vanishing or exploding gradients. We also use a residual connection to add the output O to the query sentence Q. From here on, we denote the whole attentive module as AttentiveModule(Q, K, V). Note that the output of the attentive module has an identical dimension to the query sentence Q. In experiments, Q, K, V are set to be the same, i.e., Q = K = V. For a given context utterance u, its attention-based representation U^a is calculated as the output of AttentiveModule(U, U, U). In this way, an utterance attends to itself to represent each word with other related words within the utterance. As a result, dependency relation information within the utterance can be captured. Similarly, the dependency representation of a given response is R^a = AttentiveModule(R, R, R).

Interaction Matrices. Given an utterance u in a context and a response r, we have five-channel representations for u and r respectively, i.e., U, U^1, U^2, U^3, U^a and R, R^1, R^2, R^3, R^a, where each representation channel of u has a dimension of R^{L_u × d} and each representation channel of r has a dimension of R^{L_r × d}. We then construct five interaction matrices for each utterance-response pair, which correspond to the interactions of U-R, U^1-R^1, U^2-R^2, U^3-R^3, and U^a-R^a. Taking the interaction of U and R as an example, the (i, j)-th element of the interaction matrix M is calculated by the dot product of the i-th word representation in R and the j-th word representation in U, so that the rows of M follow the response. In practice, we directly use matrix multiplication to calculate each of the five interaction matrices. The calculation of M is as follows:
M = R · U^T    (6)
Following the same calculation procedure, we can obtain the other four interaction matrices M^1, M^2, M^3, and M^a, where each matrix has a dimension of R^{L_r × L_u}.
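The following sketch illustrates the two operations above: a multi-head attentive module (here approximated with PyTorch's built-in nn.MultiheadAttention plus a residual connection and layer normalization) and interaction matrices computed by matrix multiplication. Dimensions, names, and the row orientation (rows follow the response, as described above) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentiveModule(nn.Module):
    """Multi-head scaled dot-product attention with residual connection and LayerNorm (sketch of Eqs. 3-5)."""
    def __init__(self, dim=200, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, q, k, v):
        out, _ = self.attn(q, k, v)        # Attention(QW^Q, KW^K, VW^V), heads concatenated
        return self.norm(q + out)          # residual connection + layer normalization

def interaction_matrix(a, b):
    """(i, j)-th entry is the dot product of the i-th word of `a` and the j-th word of `b` (Eq. 6)."""
    return torch.matmul(a, b.transpose(1, 2))      # (batch, len_a, len_b)

# toy usage: two of the five interaction channels for one utterance-response pair
dim, L_u, L_r = 200, 50, 50
U, R = torch.randn(1, L_u, dim), torch.randn(1, L_r, dim)
att = AttentiveModule(dim)
U_a, R_a = att(U, U, U), att(R, R, R)              # self-attention dependency representations
M = interaction_matrix(R, U)                       # word-embedding channel, rows follow the response
M_a = interaction_matrix(R_a, U_a)                 # dependency channel; the n-gram channels are analogous
print(M.shape, M_a.shape)                          # torch.Size([1, 50, 50]) each
```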
Personalized Dialogue Content Modeling
In addition to the hybrid representations of context and response, we propose to use the user-specific dialogue content from two perspectives. For one thing, given a dialogue context, different users pay distinctive attention to the context when matching a response candidate. In other words, some words or phrases are more important than others for response selection, and which content is vital changes across users. For another, we assume that the user's wording behavior in the dialogue history is effective supplementary information for response selection. Thus, personalized dialogue content modeling depends on how to learn personalized attention that allocates a weight to each word or phrase when matching context utterances and the response candidate, and how to extract wording behavior matching information between the dialogue history and the response candidate.

Personalized Attention. Intuitively, the words and phrases in context utterances and the response candidate are not equally important for response selection. Moreover, different users may pay distinctive attention to the dialogue content. To model the relative importance among the words and phrases with consideration of the user's persona, we propose a simple but effective method for calculating personalized attention scores from history utterances, which takes phrase distributions at multiple granularities into account. We first construct the personalized TF-IDF corpus by treating the dialogue history of each user as a document. Then we compute the {1, 2, 3}-gram TF-IDF scores for each given utterance. In doing so, each {1, 2, 3}-gram phrase in the response candidate is allocated a weight. We then apply these weights to the interaction matrices of each context utterance and response pair. Recall that we have M, M^1, M^2, M^3, M^a for each context utterance and response pair, representing interactions at the word embedding level, the uni-gram level, the bi-gram level, the tri-gram level, and the self-attention dependency level. Specifically, for the given response r, we calculate its {1, 2, 3}-gram personalized weights as w^1, w^2, and w^3, whose dimensions are all R^{L_r × 1}. We then copy these score vectors L_u times in the column direction to form the personalized mask matrices W^1, W^2, and W^3. All three personalized mask matrices have the same dimension of R^{L_r × L_u}, and the values in the same row within a matrix are the same. As the rows of the interaction matrices represent the response, we directly multiply the {1, 2, 3}-gram personalized mask matrices with the corresponding {1, 2, 3}-gram interaction matrices. Concretely, we multiply W^1 with M, M^1, and M^a, multiply W^2 with M^2, and multiply W^3 with M^3. As shown in Figure 2, we denote these weights as the personalized masks to extract vital matching signals in the interaction matrices, resulting in five new interaction matrices M', M^{1'}, M^{2'}, M^{3'}, and M^{a'} for each context utterance-response pair.
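A minimal sketch of one plausible reading of the personalized attention masks described above: the dialogue history of each user is treated as a document for (smoothed) IDF statistics, term frequencies are taken from the same user's history, and the resulting per-position weights of the response are tiled into an (L_r, L_u) mask that is multiplied with an interaction matrix. The function names, the smoothing, and the exact TF source are our assumptions.

```python
import math
from collections import Counter
import torch

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_idf(user_histories, n):
    """Each user's whole dialogue history is one 'document' of the personalized TF-IDF corpus."""
    df, num_docs = Counter(), len(user_histories)
    for history in user_histories:                        # history: list of tokenized utterances
        tokens = [tok for utt in history for tok in utt]
        df.update(set(ngrams(tokens, n)))
    # smoothed IDF, similar to the common (1 + N) / (1 + df) + 1 variant
    return {g: math.log((1 + num_docs) / (1 + c)) + 1.0 for g, c in df.items()}

def personalized_mask(response_tokens, history, idf, n, L_u):
    """Per-position n-gram TF-IDF weights of the response, tiled into an (L_r, L_u) mask."""
    tokens = [tok for utt in history for tok in utt]
    tf = Counter(ngrams(tokens, n))                       # term frequency from the user's history
    w = [tf[g] * idf.get(g, 0.0) for g in ngrams(response_tokens, n)]
    w += [0.0] * (len(response_tokens) - len(w))          # pad the tail positions for n > 1
    w = torch.tensor(w).unsqueeze(1)                      # (L_r, 1)
    return w.expand(-1, L_u)                              # copied L_u times in the column direction

# toy usage: mask the unigram interaction matrix of one utterance-response pair
histories = [[["i", "use", "xfce4"], ["xfce4", "is", "light"]], [["install", "mysql-server"]]]
idf = build_idf(histories, n=1)
mask = personalized_mask(["try", "xfce4"], histories[0], idf, n=1, L_u=3)
M1 = torch.randn(2, 3)                                    # interaction matrix, rows follow the response
print((mask * M1).shape)                                  # torch.Size([2, 3])
```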
Wording Behavior Matching. In analogy to the phrase representations of context utterances and response, we treat the {1, 2, 3, 4}-gram matching information and patterns as wording behavior matching information. In detail, we conduct 1-D convolution on a response candidate r = [e_{r,1}, e_{r,2}, ..., e_{r,L_r}] and on a history utterance h_k = [e_{h,k,1}, e_{h,k,2}, ..., e_{h,k,L_{h_k}}], where the convolution window size ranges from 1 to 4. There are d/4 convolution filters for each window size, and the stride length is 1. Zero-padding is used so that the input sequence and the output sequence of the convolution have the same length. Thus, a history utterance h_k has four corresponding matrices H_k^1, H_k^2, H_k^3, H_k^4 with the same dimension R^{L_{h_k} × d/4}. We perform a concatenation operation on the four matrices as the final representation of wording behavior, written as:
H_k = (H_k^1 ⊕ H_k^2 ⊕ H_k^3 ⊕ H_k^4)    (7)
where H_k ∈ R^{L_{h_k} × d}. Accordingly, the wording behavior representation of a response is R_w ∈ R^{L_r × d}. We further calculate the interaction matrix M_{h,k} of h_k and r for capturing matching structure and patterns at the wording behavior level. Similar to the calculation of the interaction matrices elaborated in the last subsection, the (i, j)-th element of M_{h,k} is calculated by the dot product of the i-th element of R_w and the j-th element of H_k. In practice, we use matrix multiplication to calculate M_{h,k} as follows:
M_{h,k} = R_w · H_k^T    (8)
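For completeness, a small sketch of the wording-behavior encoder described above: four 1-D convolutions with d/4 filters each are concatenated back to d dimensions, and the history-response interaction matrix is then a single matrix multiplication. As before, the ReLU activation and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordingBehaviorEncoder(nn.Module):
    """{1,2,3,4}-gram convolutions with d/4 filters each, concatenated back to d dims (sketch of Eq. 7)."""
    def __init__(self, dim=200):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim // 4, kernel_size=l) for l in (1, 2, 3, 4)]
        )

    def forward(self, x):                                  # x: (batch, seq_len, dim)
        x = x.transpose(1, 2)
        outs = []
        for l, conv in zip((1, 2, 3, 4), self.convs):
            h = conv(F.pad(x, ((l - 1) // 2, l // 2)))     # zero-pad to keep the sequence length
            outs.append(F.relu(h))
        return torch.cat(outs, dim=1).transpose(1, 2)      # (batch, seq_len, dim)

# toy usage: history-response interaction matrix at the wording-behavior level (Eq. 8)
enc = WordingBehaviorEncoder(dim=200)
r_w = enc(torch.randn(1, 50, 200))                         # response
h_k = enc(torch.randn(1, 60, 200))                         # one history utterance
M_hk = torch.matmul(r_w, h_k.transpose(1, 2))              # (1, 50, 60), rows follow the response
print(M_hk.shape)
```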
Aggregation and Fusion
To aggregate the matching degree information between a context utterance and a response, we alternately stack two layers of 2-D convolution and max-pooling operations on the interaction matrices M', M^{1'}, M^{2'}, M^{3'}, M^{a'}, where each interaction matrix is treated as an input channel and the activation function is ReLU. After this operation, a concatenation operation and an MLP with one hidden layer are used to flatten the output of the stacked CNN and generate a low-dimensional vector for each context utterance-response pair, denoted as m_j. For the matching information aggregation between a history utterance and the response, we perform the same two-layer 2-D CNN on the interaction matrix M_{h,k}. After the concatenation and flatten layers, we obtain a vector m_{h,k} as the aggregation of M_{h,k}. The dimensions of m_j and m_{h,k} are both d_h. For multi-turn context-response matching, PHMN computes the aggregated matching vector between each utterance in the context c = (u_1, u_2, ..., u_j, ..., u_{n_c}) and the corresponding response candidate r, resulting in a sequence of matching vectors m_1, m_2, ..., m_j, ..., m_{n_c}. In matching between the dialogue history and the response, PHMN outputs a bag of matching vectors m_{h,1}, m_{h,2}, ..., m_{h,k}, ..., m_{h,n_h} between each utterance in the history h = (h_1, h_2, ..., h_k, ..., h_{n_h}) and the response candidate r. Noting that utterances in a context have a temporal relationship, we leverage an RNN with GRU cells to process the aggregated matching vectors m_1, m_2, ..., m_j, ..., m_{n_c} and use the last state of the RNN as the aggregated matching degree, namely v_c ∈ R^{d_h × 1}. On the other hand, utterances in the dialogue history are parallel, and thus we use an attention mechanism [2] to fuse the matching vectors m_{h,1}, m_{h,2}, ..., m_{h,k}, ..., m_{h,n_h}, i.e., computing their weighted sum as the aggregated matching degree, denoted as v_h ∈ R^{d_h × 1}. To facilitate the combination of the context-response matching information and the history-response matching degree, we leverage a dynamic gate mechanism [52], which is formulated as follows:
γ = σ(W_c v_c + W_h v_h)    (9)
where v_c is the fused context-response matching degree, v_h corresponds to history-response matching, W_c and W_h are trainable parameters, and σ represents the activation function. The final combination of v_c and v_h is computed by
v = (1 − γ) ⊗ v_c + γ ⊗ v_h    (10)
where ⊗ denotes element-wise multiplication. v is then processed by a fully connected layer followed by a softmax function to obtain a binary output.
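The gate in Eqs. (9)-(10) can be sketched as follows; the exact parameterization (two linear maps followed by a sigmoid) is our assumption, since only the general form is given above.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Dynamic gate balancing context-response and history-response matching (sketch of Eqs. 9-10)."""
    def __init__(self, dim=200):
        super().__init__()
        self.w_c = nn.Linear(dim, dim, bias=False)
        self.w_h = nn.Linear(dim, dim, bias=True)

    def forward(self, v_c, v_h):
        gamma = torch.sigmoid(self.w_c(v_c) + self.w_h(v_h))   # Eq. (9)
        return (1.0 - gamma) * v_c + gamma * v_h                # Eq. (10), element-wise

# toy usage: fuse the two aggregated matching vectors and score the pair
fuse = GatedFusion(dim=200)
v_c, v_h = torch.randn(4, 200), torch.randn(4, 200)             # batch of 4 pairs
v = fuse(v_c, v_h)
score = torch.softmax(nn.Linear(200, 2)(v), dim=-1)             # binary matching output
print(v.shape, score.shape)
```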
Training
In learning the matching function g(·), the objective is to minimize the cross-entropy loss on dataset D, which can be formulated as:

L = − Σ_{i=1}^{N} [ y_i log(g(c_i, r_i, h_i)) + (1 − y_i) log(1 − g(c_i, r_i, h_i)) ]    (11)
We also construct two auxiliary loss functions to enhance the training process. The first loss function refers to learning the binary classification outputs only based on context-response matching information (the upper section of Figure 1), written by:
L_1 = − Σ_{i=1}^{N} [ y_i log(g_1(c_i, r_i)) + (1 − y_i) log(1 − g_1(c_i, r_i)) ]    (12)
while another loss function corresponds to outputting the binary results based on history-response matching information (the bottom part in Figure 1), formulated as:
L_2 = − Σ_{i=1}^{N} [ y_i log(g_2(h_i, r_i)) + (1 − y_i) log(1 − g_2(h_i, r_i)) ]    (13)
where g_1(·) and g_2(·) refer to the matching functions of context-response matching and history-response matching, respectively.
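Putting the three losses together, a minimal training-objective sketch might look as follows; the relative weights of the auxiliary terms are our assumption (setting them to 1 recovers a plain sum), and the branch outputs are placeholders.

```python
import torch
import torch.nn.functional as F

def phmn_loss(logits, logits_cr, logits_hr, labels, alpha=1.0, beta=1.0):
    """Main cross-entropy (Eq. 11) plus the two auxiliary losses (Eqs. 12-13).

    logits / logits_cr / logits_hr: (batch, 2) scores from the full model,
    the context-response branch g1, and the history-response branch g2.
    """
    main = F.cross_entropy(logits, labels)        # L
    aux_cr = F.cross_entropy(logits_cr, labels)   # L1, context-response only
    aux_hr = F.cross_entropy(logits_hr, labels)   # L2, history-response only
    return main + alpha * aux_cr + beta * aux_hr

# toy usage
labels = torch.tensor([1, 0, 1])
loss = phmn_loss(torch.randn(3, 2), torch.randn(3, 2), torch.randn(3, 2), labels)
print(float(loss))
```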
EXPERIMENTS
Datasets
To evaluate the effectiveness of our proposed model, we conduct experiments on two large open datasets with user-id information, i.e., the P-Ubuntu dialogue corpus in English and the P-Weibo dataset in Chinese. In detail, the P-Ubuntu dialogue corpus contains multi-turn technical support conversations with corresponding open user ids, collected from the Ubuntu forum 2. We utilize Ubuntu v1.0 [32] as the raw dataset and follow the previous pre-processing strategy to replace numbers, paths, and URLs with placeholders [65]. The P-Weibo corpus is crawled from an open Chinese online chatting forum 3, which contains massive multi-turn conversation sessions and user identification information.
However, the traditional pre-processed version only contains context-response pairs, neglecting the user's ids and their dialogue history utilized in our proposed personalized ranking-based chatbots. To mitigate this issue, we further process the raw dataset into a personalized version as follows. We firstly filter out users who spoke less than 30 utterances in P-Ubuntu and 10 utterances in P-Weibo. The remaining users are considered as valid users, and we collect their utterances from the corresponding corpora as their dialogue history. The user's dialogue histories are truncated to the max length of 100 for P-Ubuntu and 50 for P-Weibo. We then collect dialogue sessions of which the two speakers are both valid users from the raw corpora. Next, we create dialogue cases from dialogue sessions by splitting them into several fragments each of which is comprised of several consecutive dialogue utterances. The last utterance in the fragment is considered as the gold response, and the remaining utterances are as the context. To achieve this, we use a sliding window to split out dialogue cases from sessions. We set the maximum context turn to 10 for both corpora and the minimum context turn to 5 for P-Ubuntu and 3 for P-Weibo given their statistics. Furthermore, we pair each dialogue case with its users' information to facilitate the incorporation of personalized response selection, which contains the speaker's id, the speaker's dialogue history, the responder's id, the responder's dialogue history. Note that for each preprocessed case, we make sure that the provided speaker's or the responder's dialogue history has no overlap with the dialogue session that the current dialogue case comes from to avoid information leakage. Finally, after the aforementioned pre-processing steps, we get 600000 such six-point groups (context, response, speaker's id, speaker's dialogue history, responder's id, responder's dialogue history) as positive cases for both corpora. We randomly split them into 500000/50000/50000 for training/validation/testing. For training, we randomly sample a negative response from other responses of the full dataset, so the proportion of the positive sample and the negative sample is 1:1. While for validation and testing, the number of randomly selected negative responses from the full dataset is 9 and the proportion is 1:9. More statistical details of the two corpora are given in Table 2.
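A simplified sketch of the sliding-window case construction described above; the exact windowing policy, speaker bookkeeping, and history pairing used to build P-Ubuntu and P-Weibo are richer than this, so the field names and boundary choices here are illustrative assumptions only.

```python
def build_cases(session, speakers, min_turns, max_turns):
    """Slide a window over a dialogue session; the last utterance of each fragment is the
    gold response and the preceding utterances form the context."""
    cases = []
    for end in range(min_turns + 1, len(session) + 1):
        start = max(0, end - 1 - max_turns)                # cap the context at max_turns utterances
        context, response = session[start:end - 1], session[end - 1]
        cases.append({
            "context": context,
            "response": response,
            "speaker": speakers[start],                    # who opened this fragment
            "responder": speakers[end - 1],                # who produced the gold response
        })
    return cases

# toy usage with the P-Weibo limits (minimum 3 context turns, maximum 10)
session = [f"utt_{i}" for i in range(6)]
speakers = ["A", "B"] * 3
for case in build_cases(session, speakers, min_turns=3, max_turns=10):
    print(len(case["context"]), "->", case["response"])
```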
Baselines
In our experiments, we compare our model with the following related and strong baselines. Note that since we utilize two newly created datasets P-Ubuntu and P-Weibo, we run all these models by ourselves.
TF-IDF [32], a simple but effective matching method, computes the TF-IDF scores of each word in both context utterances and response. Both context utterances and responses are represented by their corresponding weighted addition of word embeddings. The matching score between context and response is then calculated by cosine similarity.
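A compact sketch of this baseline under our reading: TF-IDF-weighted sums of word embeddings for context and response, matched by cosine similarity; the lookup tables emb and idf are assumed inputs rather than part of the original implementation.

```python
import numpy as np

def tfidf_embedding_match(context_tokens, response_tokens, emb, idf):
    """TF-IDF-weighted addition of word embeddings, scored by cosine similarity."""
    def encode(tokens):
        vecs = np.array([idf.get(t, 0.0) * emb[t] for t in tokens if t in emb])
        return vecs.sum(axis=0) if len(vecs) else np.zeros(next(iter(emb.values())).shape)
    c, r = encode(context_tokens), encode(response_tokens)
    denom = np.linalg.norm(c) * np.linalg.norm(r) + 1e-8
    return float(np.dot(c, r) / denom)

# toy usage
emb = {w: np.random.rand(4) for w in ["install", "mysql", "server", "ok"]}
idf = {"install": 1.2, "mysql": 2.0, "server": 1.5, "ok": 0.3}
print(tfidf_embedding_match(["install", "mysql"], ["mysql", "server"], emb, idf))
```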
LSTM [32] concatenates all utterances in the context into a long sentence and employs a shared LSTM network to convert both the context and the response into vector representations. Their matching degree is then calculated through a bi-linear function with sigmoid activation.
Multi-View [73] performs context-response matching calculation from multi-views, i.e., integrating information from both word sequence view and utterance sequence view to model two different levels of dependency.
SMN [61] refers to the sequential matching network. This framework separately processes each utterance in a given context to learn a matching vector between each utterance and the response with the CNN network. Then, the learned matching vectors are aggregated by RNN to calculate the final matching score between the context and the response candidate.
DAM [74], the deep attention matching network, is a strong baseline for multi-turn response retrieval. This model builds a matching calculation pipeline similar to SMN, while the dependency between utterances in the context and response candidates is captured by stacked self-attention and cross-attention mechanisms.
MRFN [50] represents the multi-representation fusion network. The model performs context-response matching based on multiple types of sentence representations and fuses matching information from different channels effectively.
IOI [51] refers to the interaction-over-interaction network. This model performs deep-level matching by stacking multiple interaction blocks, i.e., extracting and aggregating the matching information within an utterance-response pair in an iterative fashion.
MSN [69] refers to the multi-hop selector network. This model first adopts a multi-hop selector to select the relevant utterances as context to avoid the side effect of using too many context utterances. Then, the model matches the candidate response with the filtered context to get a matching score.
BERT [7] refers to the fine-tuned BERT-base model. This model is initialized with BERT-base-uncased and BERT-base-Chinese for P-Ubuntu and P-Weibo, respectively. It takes the concatenation of the context and the candidate response as the input and utilizes stacked self-attention layers to extract fine-grained representations. The matching score is calculated with an MLP built upon the top layer.
Experimental Settings
We introduce the experimental settings in this subsection. Unless otherwise stated, the preprocessing methods and the hyperparameters are the same for both corpora. We construct a shared vocabulary for context utterances, history utterances and responses, which contains the 30000 most frequent words on the training sets. We then run Word2Vec 4 on the training sets of the two corpora with the dimension of the word embedding as 200. Following previous work, we limit the length of context to 10 turns and truncate all context utterances to the max length of 50. As to user dialogue histories, we provide up to 100 user utterances for P-Ubuntu dataset and 50 sentences for P-Weibo dataset respectively. If the number of turns in a context and the number of utterances in a user dialogue history have not reached the given upper limit, we append blank sentences whose words are all padding tokens. In the hybrid representations learning of context-response matching module, we set the number of filters as 200 for {1, 2, 3}-gram CNN, and the number of heads as 8 for multi-head self-attention. In the personalized dialogue content modeling part, we choose 50 as the filter number for all {1, 2, 3, 4}-gram CNN. In the aggregation stage, the window sizes of 2-D convolution and pooling are (3, 3) for both context-response and history-response interactions. The dimension of the hidden state of the turn-level aggregation GRU is 200. For training, we set the mini-batch size to 60 and adopt the Adam optimizer [20] with the initial learning rate set to 3e-4. We exponentially decay the learning rate with the decay rate as 0.95 for every 2000 training steps. We utilize early stopping as a regularization strategy. The model which achieves the best performance on the validation set is used for testing.
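The optimization recipe above corresponds roughly to the following sketch; the scheduler class, the stand-in model, and the dummy loss are chosen for illustration only and are not the authors' training code.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(200, 2)                      # stand-in for the PHMN network
optimizer = Adam(model.parameters(), lr=3e-4)        # initial learning rate 3e-4
scheduler = StepLR(optimizer, step_size=2000, gamma=0.95)   # decay by 0.95 every 2000 steps

for step in range(1, 4001):                          # training-loop skeleton
    optimizer.zero_grad()
    loss = model(torch.randn(60, 200)).pow(2).mean() # mini-batch size 60, dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()                                 # step-wise (not epoch-wise) decay
```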
For baseline models, we adopt their released code when available or implement them ourselves and experiment on our proposed datasets. We ensure that all of our implemented baseline models achieve results similar to those reported in the original papers on the standard Ubuntu v1.0 corpus. These models utilize the same vocabulary and initial word embeddings as our model. Specifically, for BERT, we use BERT-base-uncased 5 for P-Ubuntu and BERT-base-Chinese 6 for P-Weibo, respectively. We first truncate the response to the max length of 50 and then iteratively insert the context utterances in reverse order before the response until we exhaust the context or the total sequence exceeds the max sequence length of BERT (i.e., 512). We fine-tune the model using the Adam optimizer [20] with a learning rate of 3e-5 and a batch size of 32.

Table 3. Experiment results on the P-Ubuntu and P-Weibo datasets, where numbers in bold denote the best performance for each metric. HMN, PMN, and the HMN variants enhanced with personalized attention or wording behavior represent the simplified versions of PHMN for the ablation study. We run these models three times with different initialized parameters to calculate p-values. Numbers marked with * mean that the improvement is statistically significant compared with the baseline (t-test with p-value < 0.01).
Evaluation Metrics
Given the n candidate responses for each context in the test set, we evaluate the performance of different models with R_n@k, which denotes whether the top-k retrieved responses from the n candidates contain the positive response. Besides, we also use the ranking list produced for each test context to calculate the mean reciprocal rank (MRR) score, which is computed as follows:
MRR = (1 / |T|) Σ_{⟨c, r⟩ ∈ T} 1 / rank(⟨c, r⟩)    (14)
where T indicates the set of test contexts, and rank(⟨c, r⟩) is the position of the true response r, with respect to the input ⟨c, r⟩, in the candidate ranking list.
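The two metrics can be computed with a few lines of plain Python, as in the sketch below; the candidate lists are assumed to be sorted by descending model score with exactly one positive response per context.

```python
def recall_at_k(ranked_labels, k):
    """R_n@k: 1 if the positive response is within the top-k of the n ranked candidates."""
    return int(any(ranked_labels[:k]))

def mean_reciprocal_rank(ranking_lists):
    """Eq. (14): average of 1 / rank of the true response over the test set."""
    total = 0.0
    for labels in ranking_lists:                 # labels sorted by descending model score
        rank = labels.index(1) + 1               # position of the positive candidate
        total += 1.0 / rank
    return total / len(ranking_lists)

# toy usage: two test contexts with 10 candidates each (1 marks the true response)
rankings = [
    [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],              # true response ranked 2nd
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],              # true response ranked 1st
]
print(sum(recall_at_k(r, 1) for r in rankings) / len(rankings))    # R_10@1 = 0.5
print(mean_reciprocal_rank(rankings))                               # MRR = 0.75
```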
RESULTS
Main Performance
Table 3 reports the results of the baselines and our proposed methods on the P-Ubuntu and P-Weibo datasets. Table 4 supplements the evaluation results of the model ablation on the two datasets. We analyze these results from the following aspects.
Overall, our proposed PHMN model significantly outperforms all other models on all metrics and achieves new state-of-the-art results on the P-Ubuntu dialogue corpus and the P-Weibo dataset. Especially for R_10@1, PHMN achieves a significant improvement over the strongest model without BERT and its variations, i.e., MSN, on both datasets (78.2 vs. 70.9 on the P-Ubuntu corpus and 74.5 vs. 70.3 on the P-Weibo dataset). Surprisingly, when compared with the BERT baseline, our proposed PHMN (without the support of BERT and its variations) still obtains significantly better results on the P-Ubuntu corpus (78.2 vs. 75.7) and the P-Weibo dataset (74.5 vs. 74.0). For the baseline models, TF-IDF, LSTM, and Multi-View only achieve fundamental performance on each dataset and metric. Benefiting from the deep neural model for matching feature extraction and the sequential modeling strategy, SMN performs much better than the previous three baseline models on both datasets. With the enhancement of the powerful attention mechanism and deep stacked layers, DAM unsurprisingly yields substantial improvements over SMN, which confirms that the attention mechanism is powerful for learning dependency representations of conversations. By fusing multiple types of sentence representations, MRFN yields a substantial improvement over DAM on both the P-Ubuntu corpus and the P-Weibo dataset. Furthermore, IOI and MSN perform slightly better than MRFN; these models are the strongest baselines to date without BERT and its variations. BERT improves the scores of the different metrics over the other baselines by a large margin, but at the cost of model complexity and time efficiency, whose details are shown in Table 6. Our proposed HMN achieves results comparable to DAM by taking advantage of attention-based representations and interaction-based matching. Considering that HMN contains only three convolution layers while DAM stacks multiple attention layers, the hybrid representations are thus time-efficient and effective. Moreover, we notice that the simplified versions of PHMN, i.e., HMN enhanced with wording behavior and HMN enhanced with personalized attention, outperform MRFN and IOI on both corpora by a large margin.
The Effect of Wording Behavior
As mentioned previously, wording behavior is introduced to model long-term personal information beyond the current dialogue context so as to enhance the performance of response candidate selection. We conduct the following two groups of experiments as ablation studies, i.e., HMN with wording behavior versus HMN, and PHMN versus HMN with personalized attention, to investigate how the wording behavior extracted from the user dialogue history affects the response selection results. HMN is the simplified version of PHMN that contains neither wording behavior modeling nor the personalized attention module. HMN with wording behavior boosts HMN by using wording behavior modeling as extra hints for selecting response candidates, whereas HMN with personalized attention enhances HMN with the personalized attention module to extract important information from context utterances. PMN only takes dialogue history utterances and the response as input to extract wording behavior matching patterns and degrees.
Using wording behavior information alone yields relatively inferior matching performance. As demonstrated in Table 3, PMN achieves only a basic level of matching accuracy. PMN yields a better result than TF-IDF and only slightly worse performance than LSTM and Multi-View on the P-Ubuntu dialogue corpus. We also observe that PMN is marginally better than LSTM and Multi-View on the P-Weibo dataset. However, there is a significant gap between PMN and the state-of-the-art models. These results support the intuition that context utterances contain most of the patterns and information relevant to selecting a proper response, while wording behavior models general and long-term matching information.
Wording behavior significantly enhances the context-response matching network by introducing supplementary matching information. Note again that wording behavior in the dialogue history serves as long-term information and can be utilized to supplement the short-term information in context utterances. Not surprisingly, HMN with wording behavior achieves a significant improvement over the HMN model, and even achieves a significant improvement over the MRFN, IOI, and MSN models, which are very strong among the models that do not incorporate BERT and its variations. With the enhancement of wording behavior information, our proposed PHMN yields an observable improvement over HMN with personalized attention and obtains the new state-of-the-art on the two large datasets. These results confirm that wording behavior matching between the user-specific dialogue history and the response candidate is effective for multi-turn response selection.
The Influence of Personalized Attention
As previously stated, introducing the personalized attention module is expected to have a positive effect on extracting important information in context-response matching. We investigate the influence of personalized attention with two groups of comparisons, i.e., HMN with personalized attention versus HMN, and PHMN versus HMN with wording behavior. The following observations are made in this investigation, which confirm that personalized attention is an effective add-on to existing context-response matching models.
The personalized attention module effectively improves the accuracy of context-response matching by extracting important information from the interaction matrices. When personalized attention is introduced, HMN with personalized attention and the PHMN model are capable of extracting meaningful matching information from the interaction matrices of context utterances and response while allocating less weight to unrelated matching signals. As illustrated by the evaluation results in Table 3, personalized attention can substantially improve the performance of HMN and HMN with wording behavior.
The performance improvement achieved by using personalized attention is smaller than that achieved by modeling wording behavior in the dialogue history. Recall that we propose to employ user-specific dialogue history content from two different perspectives, i.e., wording behavior and personalized attention. It is natural to compare the effectiveness of personalized attention and wording behavior. As illustrated in Table 3, personalized attention results in a substantial improvement over the base models on the two corpora, while wording behavior achieves a significant improvement on the two corpora, which indicates that wording behavior modeling is more important than personalized attention.

The Effect of Fusion Gate and Auxiliary Loss
Table 4 summarizes the evaluation results of eight model variations so as to investigate the effect of the auxiliary losses and the gate mechanism. We observe that, for both PHMN and HMN with wording behavior, the auxiliary losses are helpful for training on the two corpora. For PHMN and HMN with wording behavior, when adding the gate mechanism, it is not surprising that an observable performance improvement is achieved. We believe the improvement is partly because the wording behavior information in the dialogue history is not at the same level as the hybrid representations, while the gate mechanism can effectively balance the distinctions between different levels of representations.
The Effect of Dialogue History Size on Model Performance
In our proposed PHMN model, the dialogue histories are used to calculate the personalized attention masks and to perform wording behavior matching with the candidate response. On the one hand, in some scenarios we often do not have enough history utterances from the same user. On the other hand, there is a trade-off between speed and model performance. Therefore, we study how the size of the dialogue history influences model performance in this subsection and leave the comparison of inference speed together with the baselines to the next subsection. As illustrated in Table 5, we set the number of utterances in the dialogue history of the P-Weibo dataset to {10, 20, 30, 40, 50} and the number of utterances in the dialogue history of the P-Ubuntu dataset to {10, 30, 50, 70, 100} to study the influence of dialogue history size on model performance. It can be observed that even when the available number of dialogue history utterances is small (i.e., 10 and 30 utterances), all three models can still yield a considerable improvement over the HMN baseline. With the increase of the dialogue history size, all the models' performance continues to improve and is not saturated under the current limit. We can reasonably expect that with more dialogue histories available, PHMN will bring us more surprises.
Comparison of Model Complexity
Moreover, we also study the time and memory cost of our models by comparing them with the baselines in terms of the number of model parameters and inference latency, which is measured as the average time cost of evaluating a single case on the same RTX 2080Ti GPU. For our proposed models, we study HMN, HMN with personalized attention, HMN with wording behavior, and PHMN. Recall that the model architectures are the same for some of our models (i.e., HMN and HMN with personalized attention, HMN with wording behavior and PHMN), and the inference latency depends on the number of user dialogue history utterances for some of our models (i.e., HMN with wording behavior and PHMN). To be more specific, HMN with personalized attention shares the same model architecture with HMN, and the computation cost of the introduced personalized attention mechanism is agnostic to the number of user dialogue history utterances. HMN with wording behavior has the same model architecture as PHMN, and both models' inference latency increases with the number of user dialogue history utterances. Thus, we also report the inference latency of PHMN with different user dialogue history sizes (denoted by the subscript number).
The comparison results are illustrated in Table 6. Comparing our proposed models with the baselines, we can easily conclude that PHMN is both time- and memory-efficient while performing remarkably well. In terms of parameter size, it can be observed that our proposed PHMN model has a similar number of parameters to the state-of-the-art non-BERT baseline MSN and is smaller than MRFN and IOI, not to mention BERT (which is 14.5 times larger than PHMN). This indicates that the significant performance improvement of our proposed model comes from the strategies introduced in this paper rather than from a larger model size. When it comes to inference latency, we can find that PHMN is similar to MRFN and is 2.4 times faster than IOI. BERT again falls far behind the group in efficiency (it is 9.4 times slower than PHMN).
We then compare our proposed models with each other. Comparing HMN with personalized attention against HMN, or comparing HMN with wording behavior (100 history utterances) against PHMN with 100 history utterances, we can find that the personalized attention mechanism is quite time-efficient, as it adds almost no additional time cost. As for the influence of dialogue histories on inference speed, it can be seen that the latency increases linearly with the number of used dialogue histories, which poses a trade-off between speed and performance that can be tuned to tailor the application scenario.
Case Study
In addition to evaluating PHMN with quantitative results, we also conduct case studies to illustrate its superiority over baselines. Table 7 and Table 8 illustrate two examples of context-response matching with the enhancement of user-specific dialogue history modeling. Further, the tables also give the predictions of our proposed PHMN and two strong baselines (i.e., MSN and BERT ), with which we can better understand the superiority of PHMN.
For the example in Table 7, it is clearly shown that wording behavior is helpful for retrieving the correct response, i.e., "I've told you..." in the dialogue history can serve as supplementary information beyond context-response matching. From the models' prediction scores we can observe that all the models assign a high matching score to the first negative case, which not only has a large word overlap with the context (i.e., "man apt-get") but also seems to have a plausible tone for responding to the last context utterance "What IS THE COMMAND", though it ignores the remaining context information. Both MSN and BERT rank this negative response as the most appropriate response among all 10 candidate responses, including the ground-truth response. Our proposed PHMN successfully ranks the ground-truth response on top thanks to the wording behavior modeling mechanism that effectively captures supplementary information.

The example in Table 8 reveals the effectiveness of the personalized attention mechanism in extracting accurate information from the interaction matrices of context utterances and response. By allocating a large weight to the key clue word "xfce4" in the response, the matching accuracy is enhanced. Again, it can be seen from the models' prediction scores that although all three models rank the ground-truth response on top, the prediction scores of the first negative candidate response given by MSN and BERT are not low. Meanwhile, PHMN assigns a high matching score to the ground-truth response and a relatively low matching score to the first negative candidate response. The gap between the top-ranked score and the second-ranked score of PHMN is much larger than that of BERT (0.80 vs. 0.36) and MSN (0.80 vs. 0.18), which indicates that our proposed PHMN is much more confident in selecting the ground-truth response. This superiority is owed to the personalized attention mechanism that highlights the key clue word "xfce4".
It is observed that there are also inferior context-response matching cases in the experiments. A notable pattern is that the extracted literal wording behavior information might overwhelm other informative words and structured knowledge in the dialogue history. One potential solution for addressing this issue is to enhance PHMN with fine-grained personalized information modeling and structured knowledge extraction. We also notice that there exist a few extremely bad cases where both wording behavior and personalized attention introduce noisy signals for context-response matching. We believe this is due to the limited size of the dialogue history. These phenomena and analyses point out directions for potential future work.
Study of Speaker's Persona Information
We also consider incorporating the speaker's persona information into our proposed personalized response selection model to find out whether it can help the model learn better and make the conversation more engaging. Specifically, we assign persona embeddings that capture high-level persona features (i.e., topics, talking preferences, and so on) to speakers whose occurrences in the processed datasets are larger than a lower threshold (named the User Occurrence Threshold).
We fuse the speaker's persona embedding into the context-response matching process to provide a speaker-aware matching vector for better capturing the speaker's preference. We borrow the iterative mutual gating mechanism from the Mogrifier LSTM [35], whose effectiveness has been verified, to allow the context-response matching vector and the speaker's persona embedding vector to mutually refine the useful information they carry. We refer to PHMN enhanced with the speaker's embedding as the speaker-enhanced PHMN. Under this motivation, there could be many influential factors that might be crucial to the performance; here we mainly study four factors: (1) the gate position, (2) the number of mutual gating iterations, (3) the dimension of the persona embedding, and (4) the number of users who have a persona embedding (which is closely related to the User Occurrence Threshold).
For (1), we can perform mutual gating between the context-response matching vector and the speaker's persona embedding vector before or after the turn-level aggregation GRU. If the gate is placed before the turn-level GRU, the speaker's persona embedding can provide utterance-level guidance for the original matching vectors; we abbreviate this gate position as Before. If we inject the speaker's persona embedding after the turn-level GRU, it can guide the aggregated matching vector from a global perspective; we abbreviate this gate position as After. For (2), the gating iterations are set to {1, 2, 3} to study whether deeper mutual interactions can boost the performance. For (3), the dimension of the persona embedding is set to {50, 100, 200}. And we use the User Occurrence Threshold mentioned above as the indicator for (4). Concretely, we set the User Occurrence Threshold to {3, 4, 5}, which means we only provide speakers whose occurrences in the processed datasets are larger than {3, 4, 5} with a specific user embedding, while leaving other speakers with a shared UNK embedding. Under these settings, the numbers of remaining users for Ubuntu and Weibo are {33303, 26228, 21059} and {54683, 24950, 12162}, respectively.
We conduct extensive experiments on the two corpora to determine whether incorporating the speaker's persona information is helpful. The experiment results are shown in Table 9. Unfortunately, we do not observe an improvement when taking the speaker's persona information into account, given the additional computation and memory cost. Specifically, on the Ubuntu corpus all attempts fail to obtain better performance, while on Weibo some settings of the speaker-enhanced PHMN achieve performance comparable with PHMN. Another interesting observation is that the fewer the interaction iterations, the smaller the persona embedding dimension, and the larger the lower threshold of user occurrence, the better the performance (and the smaller the performance drop on the Ubuntu dataset) that the speaker-enhanced PHMN can achieve. The above observations indicate that incorporating the speaker's persona information brings little benefit but more computation and memory cost. Thus we do not involve the speaker's persona information in our model. Nevertheless, the speaker's preference may still be beneficial to response selection in some scenarios. We leave the study of the incorporation of the speaker's persona information as one of our future works.
RELATED WORK
In the past decades, human-machine conversation systems have been widely investigated and developed. Early studies mainly focus on building rules and templates for computers to yield human-like responses. Such a strategy has evolved and been successfully used in various domains, such as museum guiding [8] and restaurant booking [21]. Later on, with the explosive growth of data, the application of open-domain conversation models became promising. However, conventional methods for domain-specific settings are difficult to scale to the open domain. Given this, various data-driven approaches have been proposed for modeling open-domain conversation, falling into two main groups: generation-based approaches [4,28,29,38,42] and retrieval-based methods [66]. Early work of the first group builds systems upon statistical machine translation models [40]. Recently, on top of the sequence-to-sequence architecture [45,54], various extensions have been proposed to address the "common response" issue [23]; to leverage external knowledge [36,41,62]; to model the hierarchical structure of conversation contexts [43,44,63]; to generate personalized responses [24,72]; and to pursue effective optimization strategies [26,27].
Early work on retrieval-based dialogue systems studies single-turn response selection [13,17,34,55]. Later on, various multi-turn response selection methods have been proposed, including the dual LSTM model [32], the multi-view matching method [73], the sequential matching network [61], and the deep attention matching network [74]. Recently, various effective methods have been proposed for investigating the fusion of multiple types of sentence representations [50], deep interaction in matching feature extraction [51], model ensembling [48,67,70], external knowledge combination [68], the influence of stickers in multi-modal response selection [9], and emotion control in context-response matching [39]. With the rapid development of pre-trained language models, researchers have also made considerable efforts in combining pre-trained language models with response selection. One typical method is to combine a pre-trained language model (BERT) with a post-training method for the task of response selection [58]. Gu et al. [10] further investigate the problem of employing pre-trained language models for speaker-aware multi-turn response selection. Lu et al. [33] propose two strategies to improve pre-trained contextual language models for response retrieval in multi-turn conversation, namely speaker segmentation and dialogue augmentation. A deep context modeling architecture (DCM) with BERT as the context encoder has also been proposed for multi-turn response selection [30]. To address the issue of ignoring the sequential nature of multi-turn dialogue when utilizing pre-trained language models, the utterance manipulation strategy (UMS) has been proposed [59]. Wang et al. [57] propose an essential pre-training step to embed topic information into BERT with self-supervised learning for multi-party multi-turn response selection. More details of the progress and challenges in building intelligent open-domain dialogue systems can be found in recent surveys [3,14].
In this work, we proposed a personalized hybrid matching network (PHMN) for multi-turn response selection. We combine deep attention-based representations and interaction information as hybrid representations to achieve comprehensive modeling of multi-turn context utterances. Besides, we introduce personalized dialogue history as additional information to enhance the accuracy of context-response matching. Through extracting wording behavior and personalized attention weights from the dialogue history, our proposed PHMN achieves state-of-the-art performance on two datasets.
CONCLUSION
In this study, we propose a novel personalized hybrid matching network (PHMN) for multi-turn response selection that leverages user-specific dialogue history as extra information. Building upon an advanced multi-dimensional hybrid representation learning strategy, we incorporate the information in the dialogue history at various granularities, i.e., wording behavior matching and user-level attention for extracting vital matching information in context-response matching. Experimental results on two large datasets in different languages, the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo), confirm that our proposed method significantly outperforms state-of-the-art models (without using BERT). We also conduct a thorough ablation study to investigate the effect of wording behavior modeling and the influence of personalized attention, which confirms that both wording behavior and personalized attention are effective for enhancing context-response matching. Besides, we further explored the influence of the speaker's persona in conversation, insomuch as individuals behave distinctively when they chat with different people. In the near future, we plan to learn a structured knowledge representation of users and encapsulate this structured information into response selection.
Fig. 1. The detailed architecture of our PHMN model, which includes three parts, i.e., the hybrid representation learning module, personalized dialogue content modeling, and aggregation and fusion.
Fig. 2. Details of the personalized attention over the hybrid representations between context and response.
Table 1. An example from the raw Ubuntu dataset that illustrates how dialogue history can benefit response matching.

Dialogue History (B):
B: i've read somewhere that xfce4 is as fast as fluxbox
B: i use xfce4 , old laptop gnome runs terribly slow on it
B: haven't tried kde on this laptop, but when i tried xfce4 its like a new lease of life
B: xfce4 is light, yet quite functional

Context:
A: do anyone know how to add shortcuts to the menu ?
B: depends on your desktop environment
A: sorry i new in ubuntu, what do you mean with desktop enviroment?
B: KDE / GNOME / xfce4 / fluxbox ??
A: its GNOME
B: old laptop GNOME runs terribly slow on it
A: umm yup.. what do you suggest then?

Target:
B: Try xfce4 it's wonderfull, as light as icewm, and more confortable to use
Table 2. The statistical results of the two large open datasets used in the experiments, i.e., P-Ubuntu and P-Weibo; C, R, Sess refer to context, response, and dialogue session, respectively; # C-R pairs and Avg # utts per user represent the total number of context-response matching pairs and the average number of utterances in the dialogue history of a user.

                          P-Ubuntu                P-Weibo
Subsets                   Train   Valid   Test    Train   Valid   Test
# C-R Pairs               1000k   500k    500k    1000k   500k    500k
Avg # turns per Sess      8.6     8.6     8.6     4.4     4.4     4.4
Avg # words per C         99.8    100.1   99.4    37.3    37.2    37.3
Avg # words per R         12.1    12.0    12.0    7.8     7.9     7.8
Avg # utts per user       93.9    93.9    93.9    23.0    23.0    22.9
Avg # words per utt       12.0    11.9    11.9    7.9     7.9     7.9
Table 4. Ablation study results for the fusion gate and auxiliary losses on the P-Ubuntu and P-Weibo corpora. The models whose subscript includes "gate" use the fusion gate; otherwise, they simply concatenate the two matching vectors and make a binary classification via a fully connected layer. The models with subscript L+L1+L2 are additionally equipped with the auxiliary losses. Our full model PHMN mentioned above adopts both the fusion gate and the auxiliary losses; for clarity, we denote it as PHMN_{L+L1+L2+gate} here.

                          P-Ubuntu Corpus                          P-Weibo Corpus
Model                     R_2@1  R_10@1  R_10@2  R_10@5  MRR       R_2@1  R_10@1  R_10@2  R_10@5  MRR
PHMN_{L}                  94.8   76.7    88.9    98.3    85.7      93.4   73.3    86.0    97.1    83.3
PHMN_{L+gate}             94.9   77.2    89.1    98.4    86.0      93.6   73.8    86.3    97.2    83.6
PHMN_{L+L1+L2}            95.1   77.8    89.5    98.4    86.4      93.6   74.1    86.5    97.3    83.8
PHMN_{L+L1+L2+gate}       95.2   78.2    89.7    98.5    86.7      94.0   74.5    87.0    97.4    84.2
HMN (wording behavior):
  {L}                     94.1   75.0    87.8    97.9    84.6      93.0   72.2    85.3    96.8    82.5
  {L+gate}                94.3   75.4    88.0    98.0    84.9      93.1   72.6    85.4    96.9    82.7
  {L+L1+L2}               94.5   75.8    88.3    98.1    85.1      93.2   72.8    85.7    97.0    82.9
  {L+L1+L2+gate}          94.6   76.1    88.3    98.1    85.3      93.3   73.2    85.8    97.0    83.2
Table 5. Model performance with different numbers of utterances (M) in the dialogue history of users.

                                      P-Ubuntu Corpus                         P-Weibo Corpus
Model                        M        R_2@1 R_10@1 R_10@2 R_10@5 MRR          R_2@1 R_10@1 R_10@2 R_10@5 MRR
HMN w/ personalized attn.    0        91.7  68.7   82.5   96.0   80.0         91.0  67.7   81.4   95.3   79.1
                             10       92.6  70.1   83.6   96.7   81.0         91.6  68.8   82.6   96.0   80.0
                             30       93.1  71.8   85.1   97.0   82.3         92.0  69.7   83.4   96.3   80.7
                             50       93.4  72.7   85.8   97.2   82.9         92.3  70.1   83.7   96.4   81.0
                             70       93.7  73.4   86.2   97.4   83.4         -     -      -      -      -
                             100      93.9  74.0   86.6   97.6   83.8         -     -      -      -      -
HMN w/ wording behavior      0        91.7  68.7   82.5   96.0   80.0         91.0  67.7   81.4   95.3   79.1
                             10       93.0  71.7   85.3   97.1   82.2         92.6  71.5   84.6   96.5   81.8
                             30       93.6  73.4   86.2   97.3   83.3         93.1  72.7   85.7   96.9   82.9
                             50       94.0  74.6   87.3   97.6   84.3         93.3  73.2   85.8   97.0   83.2
                             70       94.5  75.6   88.1   98.0   85.0         -     -      -      -      -
                             100      94.6  76.1   88.3   98.1   85.3         -     -      -      -      -
PHMN                         0        91.7  68.7   82.5   96.0   80.0         91.0  67.7   81.4   95.3   79.1
                             10       93.8  73.5   86.5   97.6   83.5         93.1  72.6   85.4   97.0   82.7
                             30       94.4  75.5   87.9   98.1   84.9         93.6  74.0   86.6   97.3   83.7
                             50       94.8  76.8   88.8   98.2   85.8         94.0  74.5   87.0   97.4   84.2
                             70       95.1  77.6   89.3   98.4   86.3         -     -      -      -      -
                             100      95.2  78.2   89.7   98.5   86.7         -     -      -      -      -
Table 6. Comparison of model size and inference speed.

Model                              # Params (M)   Latency (ms)
LSTM                               6.3            0.882
Multi-View                         6.7            0.894
SMN                                6.4            0.364
DAM                                8.1            2.526
MRFN                               9.6            1.818
IOI                                15.3           4.416
MSN                                7.6            0.822
BERT (fine-tuned)                  110            17.2
HMN                                6.7            0.642
HMN w/ personalized attention      6.7            0.648
HMN w/ wording behavior (M=100)    7.6            1.824
PHMN_10                            7.6            0.796
PHMN_30                            7.6            1.030
PHMN_50                            7.6            1.230
PHMN_70                            7.6            1.452
PHMN_100                           7.6            1.834
Table 7. A sampled case from the P-Ubuntu corpus that shows the effectiveness of wording behavior modeling.

Dialogue History (B):
B: as I've told you 3 times now
B: I've told you 3 times, install "mysql-server"
B: I've told you 2 times there are guides on https://help.ubuntu.com
B: I've told you what to do, and I've told you this is not an ubuntu support issue

Context:
A: What's the command to order packages by size? I'd like the command line one and the GUI one
B: look in the gui for sort functions. man dpkg and man apt-get
A: please tell me what command to use
B: man apt-get and man dpk will show you the commands
A: What IS THE COMMAND

Candidate responses and model predictions (MSN / BERT / PHMN):
✔  I've told you the command to find out the commands you want            0.32 / 0.48 / 0.79
✘  try 'man apt-get' and 'man aptitude'                                    0.58 / 0.64 / 0.43
   I wanna eventually be a Unix Admin                                      0.02 / <0.01 / 0.01
   that should be good enough                                              0.07 / 0.02 / 0.05
   so you'll have anonymous access and such.                               0.02 / 0.01 / <0.01
   It's kind of like a menu... in expanded mode.                           0.08 / 0.04 / 0.03
   and it works good for the time being :)                                 <0.01 / <0.01 / <0.01
   sounds like dissection's suggestion might do the trick.                 0.03 / <0.01 / 0.01
   it may already be supported, have a websearch round                     0.05 / 0.02 / 0.03
   no, it is installed now.                                                0.06 / 0.12 / 0.17
Table 8. A sampled case from the P-Ubuntu corpus that shows the advantage of utilizing personalized attention.

Dialogue History (B):
B: i've read somewhere that xfce4 is as fast as fluxbox
B: i use xfce4 , old laptop gnome runs terribly slow on it
B: haven't tried kde on this laptop, but when i tried xfce4 its like a new lease of life. xfce4 is light, yet quite functional

Context:
A: do anyone know how to add shortcuts to the menu?
B: depends on your desktop environment
A: sorry i new in ubuntu, what do you mean with desktop enviroment?
B: KDE / GNOME / xfce4 / fluxbox??
A: its GNOME
B: old laptop GNOME runs terribly slow on it
A: umm yup.. what do you suggest then?

Candidate responses and model predictions (MSN / BERT / PHMN):
✔  Try xfce4 it's wonderfull, as light as icewm, and more comfortable to use                           0.67 / 0.81 / 0.97
✘  have you tried GNOME's "network browser"?                                                           0.49 / 0.45 / 0.17
   humm, well I dunno. you check lsmod to see what loaded and dmesg?                                   0.01 / <0.01 / <0.01
   no ,but I was hoping it is not necessary if you ll check that out after I try to get DRI working.   0.04 / 0.02 / 0.05
   use lagacy drivers
   i quite love my intel graphics                                                                      <0.01 / <0.01 / <0.01
   but I can not ping anything                                                                         <0.01 / <0.01 / <0.01
   i made the steps and i have an error                                                                0.02 / <0.01 / 0.01
   its not exactly lubuntu but it works just the same                                                  0.01 / 0.02 / 0.01
   im new to linux, i need more in depth infomation                                                    0.04 / 0.13 / 0.09
   I'                                                                                                  0.11 / 0.07 / 0.03
Table 9. Study of incorporating the speaker's persona information.

                      Gate       Gating       Embedding   User Occurrence   Ubuntu          Weibo
Model                 Position   Iterations   Dimension   Threshold         R@1    MRR      R@1    MRR
PHMN                  -          -            -           -                 78.2   86.7     74.5   84.2
PHMN (+speaker)       Before     1            50          5                 77.8   86.5     74.6   84.1
                      After      1            50          5                 77.7   86.4     74.4   83.9
                      Before     1            50          5                 77.8   86.5     74.6   84.1
                      Before     2            50          5                 77.7   86.4     74.4   84.0
                      Before     3            50          5                 77.3   84.1     74.3   83.9
                      Before     1            50          5                 77.8   86.5     74.6   84.1
                      Before     1            100         5                 77.7   86.4     74.4   84.0
                      Before     1            200         5                 77.3   86.2     74.3   84.0
                      Before     1            50          5                 77.8   86.5     74.6   84.1
                      Before     1            50          4                 77.6   86.3     74.5   84.1
                      Before     1            50          3                 77.4   86.3     74.4   84.0
1 Sequence-to-sequence neural networks along with the log-likelihood objective function tend to create short, high-frequency, and commonplace responses (e.g., "I don't know", "I'm OK"), which are referred to as common responses in previous studies [43,49,54].
https://ubuntuforums.org/
3 https://www.weibo.com
https://code.google.com/archive/p/word2vec/
5 https://huggingface.co/bert-base-uncased
6 https://huggingface.co/bert-base-chinese
ACKNOWLEDGMENTS
We would like to thank the anonymous reviewers for their efforts in improving this paper. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106600).
Asking Clarifying Questions in Open-Domain Information-Seeking Conversations. Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, W Bruce Croft, Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 42nd International ACM SIGIR Conference on Research and Development in Information RetrievalMohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W Bruce Croft. 2019. Asking Clarifying Questions in Open-Domain Information-Seeking Conversations. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 475-484.
Neural Machine Translation by Jointly Learning to Align and Translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, 3rd International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd International Conference on Learning Representations, ICLR 2015.
arXiv:1907.12878Deep Retrieval-Based Dialogue Systems: A Short Review. Basma El Amel Boussaha, Nicolas Hernandez, Christine Jacquin, and Emmanuel MorinarXiv preprintBasma El Amel Boussaha, Nicolas Hernandez, Christine Jacquin, and Emmanuel Morin. 2019. Deep Retrieval-Based Dialogue Systems: A Short Review. arXiv preprint arXiv:1907.12878 (2019).
Modeling personalization in continuous space for response generation via augmented wasserstein autoencoders. Zhangming Chan, Juntao Li, Xiaopeng Yang, Xiuying Chen, Wenpeng Hu, Dongyan Zhao, Rui Yan, Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing. the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processingZhangming Chan, Juntao Li, Xiaopeng Yang, Xiuying Chen, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Modeling personalization in continuous space for response generation via augmented wasserstein autoencoders. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (emnlp-ijcnlp). 1931-1940.
MIX: Multi-Channel Information Crossing for Text Matching. Haolan Chen, Fred X Han, Di Niu, Dong Liu, Kunfeng Lai, Chenglin Wu, Yu Xu, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningACMHaolan Chen, Fred X Han, Di Niu, Dong Liu, Kunfeng Lai, Chenglin Wu, and Yu Xu. 2018. MIX: Multi-Channel Information Crossing for Text Matching. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 110-119.
Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. Zhuyun Dai, Chenyan Xiong, Jamie Callan, Zhiyuan Liu, Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. the Eleventh ACM International Conference on Web Search and Data MiningACMZhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM, 126-134.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arXiv:1810.04805BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805 (2018).
TRAINS-95: Towards a Mixed-Initiative Planning Assistant. George Ferguson, F James, Bradford W Allen, Miller, Proceedings of the Third International Conference on Artificial Intelligence Planning Systems. the Third International Conference on Artificial Intelligence Planning SystemsGeorge Ferguson, James F Allen, Bradford W Miller, et al. 1996. TRAINS-95: Towards a Mixed-Initiative Planning Assistant. In Proceedings of the Third International Conference on Artificial Intelligence Planning Systems. 70-77.
Learning to Respond with Stickers: A Framework of Unifying Multi-Modality in Multi-Turn Dialog. Shen Gao, Xiuying Chen, Chang Liu, Li Liu, Dongyan Zhao, Rui Yan, Proceedings of The Web Conference 2020. The Web Conference 2020Shen Gao, Xiuying Chen, Chang Liu, Li Liu, Dongyan Zhao, and Rui Yan. 2020. Learning to Respond with Stickers: A Framework of Unifying Multi-Modality in Multi-Turn Dialog. In Proceedings of The Web Conference 2020. 1138-1148.
Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots. Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu, Proceedings of the 29th ACM International Conference on Information & Knowledge Management. the 29th ACM International Conference on Information & Knowledge ManagementJia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2041-2044.
A Deep Relevance Matching Model for Ad-hoc Retrieval. Jiafeng Guo, Yixing Fan, Ai Qingyao, W Bruce Croft, Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. the 25th ACM International on Conference on Information and Knowledge ManagementACMJiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A Deep Relevance Matching Model for Ad-hoc Retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. ACM, 55-64.
Guided Transformer: Leveraging Multiple External Sources for Representation Learning in Conversational Search. Helia Hashemi, Hamed Zamani, W Croft, Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalHelia Hashemi, Hamed Zamani, and W. Croft. 2020. Guided Transformer: Leveraging Multiple External Sources for Representation Learning in Conversational Search. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (2020).
Convolutional Neural Network Architectures for Matching Natural Language Sentences. Baotian Hu, Zhengdong Lu, Hang Li, Qingcai Chen, Advances in Neural Information Processing Systems. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional Neural Network Architectures for Matching Natural Language Sentences. In Advances in Neural Information Processing Systems. 2042-2050.
Challenges in Building Intelligent Open-Domain Dialog Systems. Minlie Huang, Xiaoyan Zhu, Jianfeng Gao, ACM Transactions on Information Systems (TOIS). 38Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in Building Intelligent Open-Domain Dialog Systems. ACM Transactions on Information Systems (TOIS) 38, 3 (2020), 1-32.
Learning Deep Structured Semantic Models for Web Search Using Clickthrough Data. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, Larry Heck, Proceedings of the 22nd ACM International Conference on Conference on Information & Knowledge Management. the 22nd ACM International Conference on Conference on Information & Knowledge ManagementACMPo-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning Deep Structured Semantic Models for Web Search Using Clickthrough Data. In Proceedings of the 22nd ACM International Conference on Conference on Information & Knowledge Management. ACM, 2333-2338.
Deep Recursive Neural Networks for Compositionality in Language. Ozan Irsoy, Claire Cardie, Advances in Neural Information Processing Systems. Ozan Irsoy and Claire Cardie. 2014. Deep Recursive Neural Networks for Compositionality in Language. In Advances in Neural Information Processing Systems. 2096-2104.
An Information Retrieval Approach to Short Text Conversation. Zongcheng Ji, Zhengdong Lu, Hang Li, arXiv:1408.6988arXiv preprintZongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An Information Retrieval Approach to Short Text Conversation. arXiv preprint arXiv:1408.6988 (2014).
Nal Kalchbrenner, Edward Grefenstette, Phil Blunsom, arXiv:1404.2188Convolutional Neural Network for Modelling Sentences. arXiv preprintNal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A Convolutional Neural Network for Modelling Sentences. arXiv preprint arXiv:1404.2188 (2014).
Yoon Kim, arXiv:1408.5882Convolutional Neural Networks for Sentence Classification. arXiv preprintYoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. arXiv preprint arXiv:1408.5882 (2014).
Adam: A Method for Stochastic Optimization. P Diederik, Jimmy Kingma, Ba, ICLR. Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR.
Sequicity: Simplifying Task-Oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, Dawei Yin, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational Linguistics1Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying Task-Oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1. 1437-1447.
AliMe Assist: An Intelligent Assistant for Creating an Innovative E-commerce Experience. Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. the 2017 ACM on Conference on Information and Knowledge ManagementFeng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. 2017. AliMe Assist: An Intelligent Assistant for Creating an Innovative E-commerce Experience. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. 2495-2498.
A Diversity-Promoting Objective Function for Neural Conversation Models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversation Models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 110-119.
A Persona-Based Neural Conversation Model. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, Bill Dolan, 10.18653/v1/P16-1094Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsLong Papers1Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A Persona-Based Neural Conversation Model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 994-1003. https://doi.org/10.18653/v1/P16-1094
Jiwei Li, Minh-Thang Luong, Dan Jurafsky, Eudard Hovy, arXiv:1503.00185When are Tree Structures Necessary for Deep Learning of Representations. arXiv preprintJiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eudard Hovy. 2015. When are Tree Structures Necessary for Deep Learning of Representations? arXiv preprint arXiv:1503.00185 (2015).
Deep Reinforcement Learning for Dialogue Generation. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, Jianfeng Gao, 10.18653/v1/D16-1127Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingJiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep Reinforcement Learning for Dialogue Generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 1192-1202. https://doi.org/10.18653/v1/D16-1127
Adversarial Learning for Neural Dialogue Generation. Jiwei Li, Will Monroe, Tianlin Shi, Sėbastien Jean, Alan Ritter, Dan Jurafsky, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingJiwei Li, Will Monroe, Tianlin Shi, Sėbastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial Learning for Neural Dialogue Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2157-2169. http://aclweb.org/anthology/D17-1230
Insufficient data can also rock! learning to converse using smaller data with augmentation. Juntao Li, Lisong Qiu, Bo Tang, Dongmin Chen, Dongyan Zhao, Rui Yan, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Juntao Li, Lisong Qiu, Bo Tang, Dongmin Chen, Dongyan Zhao, and Rui Yan. 2019. Insufficient data can also rock! learning to converse using smaller data with augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 6698-6705.
Overview of the NLPCC 2018 shared task: Multi-turn human-computer conversations. Juntao Li, Rui Yan, CCF International Conference on Natural Language Processing and Chinese Computing. SpringerJuntao Li and Rui Yan. 2018. Overview of the NLPCC 2018 shared task: Multi-turn human-computer conversations. In CCF International Conference on Natural Language Processing and Chinese Computing. Springer, 446-451.
Deep Context Modeling for Multi-Turn Response Selection in Dialogue Systems. Lu Li, Chenliang Li, Donghong Ji. Lu Li, Chenliang Li, and Donghong Ji. [n. d.]. Deep Context Modeling for Multi-Turn Response Selection in Dialogue Systems. Information Processing & Management 58, 1 ([n. d.]), 102415.
Pengfei Liu, Xipeng Qiu, Xuanjing Huang, arXiv:1605.05101Recurrent Neural Network for Text Classification with Multi-Task Learning. arXiv preprintPengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent Neural Network for Text Classification with Multi-Task Learning. arXiv preprint arXiv:1605.05101 (2016).
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau, 10.18653/v1/W15-4640Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. the 16th Annual Meeting of the Special Interest Group on Discourse and DialogueRyan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. 285-294. https://doi.org/10.18653/v1/W15-4640
Improving Contextual Language Models for Response Retrieval in Multi-Turn Conversation. Junyu Lu, Xiancong Ren, Yazhou Ren, Ao Liu, Zenglin Xu, Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalJunyu Lu, Xiancong Ren, Yazhou Ren, Ao Liu, and Zenglin Xu. 2020. Improving Contextual Language Models for Response Retrieval in Multi-Turn Conversation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 1805-1808.
A Deep Architecture for Matching Short Texts. Zhengdong Lu, Hang Li, Advances in Neural Information Processing Systems. Zhengdong Lu and Hang Li. 2013. A Deep Architecture for Matching Short Texts. In Advances in Neural Information Processing Systems. 1367-1375.
. Gábor Melis, Tomáš Kočiskỳ, Phil Blunsom, arXiv:1909.01792Mogrifier LSTM. arXiv preprintGábor Melis, Tomáš Kočiskỳ, and Phil Blunsom. 2019. Mogrifier LSTM. arXiv preprint arXiv:1909.01792 (2019).
Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersLili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 3349-3358. http://www.aclweb.org/anthology/ C16-1316
Text Matching as Image Recognition. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xueqi Cheng, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceLiang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text Matching as Image Recognition. In Proceedings of the AAAI Conference on Artificial Intelligence. 2793-2799.
Are training samples correlated? learning to generate dialogue responses with multiple references. Lisong Qiu, Juntao Li, Wei Bi, Dongyan Zhao, Rui Yan, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsLisong Qiu, Juntao Li, Wei Bi, Dongyan Zhao, and Rui Yan. 2019. Are training samples correlated? learning to generate dialogue responses with multiple references. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 3826-3835.
What If Bots Feel Moods. Lisong Qiu, Yingwai Shiu, Pingping Lin, Ruihua Song, Yue Liu, Dongyan Zhao, Rui Yan, Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalLisong Qiu, Yingwai Shiu, Pingping Lin, Ruihua Song, Yue Liu, Dongyan Zhao, and Rui Yan. 2020. What If Bots Feel Moods?. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 1161-1170.
Data-driven Response Generation in Social Media. Alan Ritter, Colin Cherry, William B Dolan, Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. the 2011 Conference on Empirical Methods in Natural Language ProcessingAlan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven Response Generation in Social Media. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. 583-593.
Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation. Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, Aaron Courville, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceIulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron Courville. 2017. Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation. In Proceedings of the AAAI Conference on Artificial Intelligence. 3288-3294.
End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, Joelle Pineau, proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceIulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In proceedings of the AAAI Conference on Artificial Intelligence. 3776-3784.
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, C Aaron, Joelle Courville, Pineau, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence16Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building End-To- End Dialogue Systems Using Generative Hierarchical Neural Network Models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 16. 3776-3784.
A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, C Aaron, Yoshua Courville, Bengio, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceIulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence. 3295-3301.
Neural Responding Machine for Short-Text Conversation. Lifeng Shang, Zhengdong Lu, Hang Li, 10.3115/v1/P15-1152Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingLifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural Responding Machine for Short-Text Conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. 1577-1586. https://doi.org/10.3115/v1/P15-1152
From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots. Heung-Yeung Shum, Xiaodong He, Di Li, Frontiers of IT & EE. 19Heung-Yeung Shum, Xiaodong He, and Di Li. 2018. From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots. Frontiers of IT & EE 19, 1 (2018), 10-26.
Parsing Natural Scenes and Natural Language with Recursive Neural Networks. Richard Socher, C Cliff, Chris Lin, Andrew Y Manning, Ng, Proceedings of the 28th International Conference on Machine Learning. the 28th International Conference on Machine LearningRichard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). 129-136.
An Ensemble of Retrieval-Based and Generation-Based Human-Computer Conversation Systems. Yiping Song, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, Dongyan Zhao, Rui Yan, Proceedings of the 27th International Joint Conference on Artificial Intelligence. the 27th International Joint Conference on Artificial IntelligenceYiping Song, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, Dongyan Zhao, and Rui Yan. 2018. An Ensemble of Retrieval- Based and Generation-Based Human-Computer Conversation Systems. In Proceedings of the 27th International Joint Conference on Artificial Intelligence. 4382-4388.
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan, 10.3115/v1/N15-1020Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 196-205. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 196-205Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 196-205. https://doi.org/10.3115/v1/N15-1020
Multi-Representation Fusion Network for Multi-Turn Response Selection in Retrieval-Based Chatbots. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan, Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. the Twelfth ACM International Conference on Web Search and Data MiningChongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multi-Representation Fusion Network for Multi-Turn Response Selection in Retrieval-Based Chatbots. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. 267-275.
One Time of Interaction May Not Be Enough: Go Deep with an Interaction-Over-Interaction Network for Response Selection in Dialogues. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsChongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. One Time of Interaction May Not Be Enough: Go Deep with an Interaction-Over-Interaction Network for Response Selection in Dialogues. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1-11.
Learning to Remember Translation History with a Continuous Cache. Zhaopeng Tu, Yang Liu, Shuming Shi, Tong Zhang, Transactions of the Association of Computational Linguistics. 6Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to Remember Translation History with a Continuous Cache. Transactions of the Association of Computational Linguistics 6 (2018), 407-420.
Attention is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Advances in Neural Information Processing Systems. 5998-6008.
Oriol Vinyals, Quoc Le, arXiv:1506.05869A Neural Conversational Model. arXiv preprintOriol Vinyals and Quoc Le. 2015. A Neural Conversational Model. arXiv preprint arXiv:1506.05869 (2015).
A Dataset for Research on Short-Text Conversations. Hao Wang, Zhengdong Lu, Hang Li, Enhong Chen, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingHao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A Dataset for Research on Short-Text Conversations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 935-945. http://aclweb.org/ anthology/D13-1096
Syntax-Based Deep Matching of Short Texts. Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu, Proceedings of the 24th International Conference on Artificial Intelligence. the 24th International Conference on Artificial IntelligenceMingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-Based Deep Matching of Short Texts. In Proceedings of the 24th International Conference on Artificial Intelligence. 1354-1361.
Response Selection for Multi-Party Conversations with Dynamic Topic Tracking. Weishi Wang, C H Steven, Shafiq Hoi, Joty, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP. the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLPWeishi Wang, Steven CH Hoi, and Shafiq Joty. 2020. Response Selection for Multi-Party Conversations with Dynamic Topic Tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 6581-6591.
An Effective Domain Adaptive Post-Training Method for BERT in Response Selection. Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, H Lim, Proc. Interspeech. Interspeech2020Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and H Lim. 2020. An Effective Domain Adaptive Post-Training Method for BERT in Response Selection. In Proc. Interspeech, Vol. 2020.
Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection. Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-Hun Lee, Saebyeok Lee, arXiv:2009.04703arXiv preprintTaesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2020. Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection. arXiv preprint arXiv:2009.04703 (2020).
Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots. Yu Wu, Wei Wu, Zhoujun Li, Ming Zhou, arXiv:1805.02333arXiv preprintYu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2018. Learning Matching Models with Weak Supervision for Response Selection in Retrieval-based Chatbots. arXiv preprint arXiv:1805.02333 (2018).
Sequential Matching Network: A New Architecture for Multi-Turn Response Selection in Retrieval-Based Chatbots. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, Zhoujun Li, 10.18653/v1/P17-1046Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential Matching Network: A New Architecture for Multi-Turn Response Selection in Retrieval-Based Chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1. 496-505. https://doi.org/10.18653/v1/P17-1046
Topic Aware Neural Response Generation. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, Wei-Ying Ma, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceChen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic Aware Neural Response Generation. In Proceedings of the AAAI Conference on Artificial Intelligence. 3351-3357.
Hierarchical Recurrent Attention Network for Response Generation. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, Ming Zhou, Thirty-Second AAAI Conference on Artificial Intelligence. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical Recurrent Attention Network for Response Generation. In Thirty-Second AAAI Conference on Artificial Intelligence.
End-to-End Neural Ad-hoc Ranking with Kernel Pooling. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, Russell Power, Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. the 40th International ACM SIGIR Conference on Research and Development in Information RetrievalACMChenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-End Neural Ad-hoc Ranking with Kernel Pooling. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 55-64.
Incorporating Loose-Structured Knowledge into LSTM with Recall Gate for Conversation Modeling. Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, Xiaolong Wang, arXiv:1605.05110arXiv preprintZhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating Loose-Structured Knowledge into LSTM with Recall Gate for Conversation Modeling. arXiv preprint arXiv:1605.05110 (2016).
Learning to Respond with Deep Neural Networks for Retrieval-Based Human-Computer Conversation System. Rui Yan, Yiping Song, Hua Wu, Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. the 39th International ACM SIGIR conference on Research and Development in Information RetrievalRui Yan, Yiping Song, and Hua Wu. 2016. Learning to Respond with Deep Neural Networks for Retrieval-Based Human-Computer Conversation System. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. 55-64.
A Hybrid Retrieval-Generation Neural Conversation Model. Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, Bruce Croft, Xiaodong Liu, Yelong Shen, Jingjing Liu, Proceedings of the 28th ACM International Conference on Information and Knowledge Management. the 28th ACM International Conference on Information and Knowledge ManagementLiu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, W Bruce Croft, Xiaodong Liu, Yelong Shen, and Jingjing Liu. 2019. A Hybrid Retrieval-Generation Neural Conversation Model. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1341-1350.
Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems. Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, Bruce Croft, Jun Huang, Haiqing Chen, arXiv:1805.00188arXiv preprintLiu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W Bruce Croft, Jun Huang, and Haiqing Chen. 2018. Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems. arXiv preprint arXiv:1805.00188 (2018).
Multi-Hop Selector Network for Multi-Turn Response Selection in Retrieval-Based Chatbots. Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, Songlin Hu, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingChunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, and Songlin Hu. 2019. Multi-Hop Selector Network for Multi-Turn Response Selection in Retrieval-Based Chatbots. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 111-120.
EnsembleGAN: Adversarial Learning for Retrieval-Generation Ensemble Model on Short-Text conversation. Jiayi Zhang, Chongyang Tao, Zhenjing Xu, Qiaojing Xie, Wei Chen, Rui Yan, Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 42nd International ACM SIGIR Conference on Research and Development in Information RetrievalJiayi Zhang, Chongyang Tao, Zhenjing Xu, Qiaojing Xie, Wei Chen, and Rui Yan. 2019. EnsembleGAN: Adversarial Learning for Retrieval-Generation Ensemble Model on Short-Text conversation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 435-444.
Modeling Multi-turn Conversation with Deep Utterance Aggregation. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsZhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018. Modeling Multi-turn Conversation with Deep Utterance Aggregation. In Proceedings of the 27th International Conference on Computational Linguistics (Santa Fe, New Mexico, USA). Association for Computational Linguistics, 3740-3752. http://aclweb.org/anthology/C18-1317
Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, Bing Liu, Thirty-Second AAAI Conference on Artificial Intelligence. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional Chatting Machine: Emo- tional Conversation Generation with Internal and External Memory. In Thirty-Second AAAI Conference on Artificial Intelligence.
Multi-View Response Selection for Human-Computer Conversation. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, Rui Yan, 10.18653/v1/D16-1036Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingXiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-View Response Selection for Human-Computer Conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 372-381. https://doi.org/10.18653/v1/D16-1036
Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, Hua Wu, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational Linguistics1Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1. 1118-1127. http://aclweb.org/anthology/P18- 1103
| [] |
[
"From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project *",
"From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project *"
] | [
"Peter Clark \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Oren Etzioni \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Daniel Khashabi \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Tushar Khot \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Dalvi Bhavana \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Kyle Mishra \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Ashish Richardson \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Carissa Sabharwal \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Oyvind Schoenick \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Niket Tafjord \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Sumithra Tandon \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Dirk Bhakthavatsalam \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Michal Groeneveld \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Michael Guerquin \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n",
"Schmitz \nAllen Institute for Artificial Intelligence\nSeattleWAU.S.A\n"
] | [
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A",
"Allen Institute for Artificial Intelligence\nSeattleWAU.S.A"
] | [] | AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy!, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge(Schoenick et al., 2016). This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern NLP methods can result in mastery on this task. While not a full solution to general question-answering (the questions are multiple choice, and the domain is restricted to 8th Grade science), it represents a significant milestone for the field. * We gratefully acknowledge the late Paul Allen's inspiration, passion, and support for research on this grand challenge.1 See Section 4.1 for the experimental methodology. | 10.1609/aimag.v41i4.5304 | [
"https://arxiv.org/pdf/1909.01958v2.pdf"
] | 202,539,605 | 1909.01958 | a885f4312e6209054657154cbd4be908866ba445 |
From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project *
Peter Clark, Oren Etzioni, Daniel Khashabi, Tushar Khot, Bhavana Dalvi Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, Sumithra Bhakthavatsalam, Dirk Groeneveld, Michal Guerquin, Michael Schmitz
Allen Institute for Artificial Intelligence, Seattle, WA, U.S.A.
From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project *
AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy!, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge(Schoenick et al., 2016). This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern NLP methods can result in mastery on this task. While not a full solution to general question-answering (the questions are multiple choice, and the domain is restricted to 8th Grade science), it represents a significant milestone for the field. * We gratefully acknowledge the late Paul Allen's inspiration, passion, and support for research on this grand challenge.1 See Section 4.1 for the experimental methodology.
Introduction
This paper reports on the history, progress, and lessons from the Aristo project, a six-year quest to answer grade-school and high-school science exams. Aristo has recently surpassed 90% on multiple choice questions from the 8th Grade New York Regents Science Exam (see Figure 2). 1 We begin by offering several perspectives on why this achievement is significant for NLP and for AI more broadly.
The Turing Test versus Standardized Tests
In 1950, Alan Turing proposed the now well-known Turing Test as a possible test of machine intelligence: If a system can exhibit conversational behavior that is indistinguishable from that of a human during a conversation, that system could be considered intelligent (Turing, 1950). As the field of AI has grown, the test has become less meaningful as a challenge task for several reasons. First, its setup is not well defined (e.g., who is the person giving the test?). A computer scientist would likely know good distinguishing questions to ask, while a random member of the general public may not.
What constraints are there on the interaction? What guidelines are provided to the judges? Second, recent Turing Test competitions have shown that, in certain formulations, the test itself is gameable; that is, people can be fooled by systems that simply retrieve sentences and make no claim of being intelligent (Aron, 2011;BBC, 2014). John Markoff of The New York Times wrote that the Turing Test is more a test of human gullibility than machine intelligence. Finally, the test, as originally conceived, is pass/fail rather than scored, thus providing no measure of progress toward a goal, something essential for any challenge problem.
Instead of a binary pass/fail, machine intelligence is more appropriately viewed as a diverse collection of capabilities associated with intelligent behavior. Finding appropriate benchmarks to test such capabilities is challenging; ideally, a benchmark should test a variety of capabilities in a natural and unconstrained way, while additionally being clearly measurable, understandable, accessible, and motivating.
Standardized tests, in particular science exams, are a rare example of a challenge that meets these requirements.
While not a full test of machine intelligence, they do explore several capabilities strongly associated with intelligence, including language understanding, reasoning, and use of common-sense knowledge. One of the most interesting and appealing aspects of science exams is their graduated and multifaceted nature; different questions explore different types of knowledge, varying substantially in difficulty. For this reason, they have been used as a compelling and challenging task for the field for many years (Brachman et al., 2005).
Natural Language Processing
With the advent of contextualized word-embedding methods such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2018), and most recently RoBERTa (Liu et al., 2019b), the NLP community's benchmarks are being felled at a remarkable rate. These are, however, internally-generated yardsticks, such as SQuAD (Rajpurkar et al., 2016), Glue (Wang et al., 2019), SWAG (Zellers et al., 2018), TriviaQA , and many others.
Figure 1: Example questions from the NY Regents Exam (8th Grade), illustrating the need for both scientific and commonsense knowledge.
1. Which equipment will best separate a mixture of iron filings and black pepper? (1) magnet (2) filter paper (3) triple-beam balance (4) voltmeter
2. Which form of energy is produced when a rubber band vibrates? (1) chemical (2) light (3) electrical (4) sound
3. Because copper is a metal, it is (1) liquid at room temperature (2) nonreactive with other substances (3) a poor conductor of electricity (4) a good conductor of heat
4. Which process in an apple tree primarily results from cell division? (1) growth (2) photosynthesis (3) gas exchange (4) waste removal

In contrast, the 8th Grade science benchmark is an external, independently-generated benchmark where we can compare machine performance with human performance. Moreover, the breadth of the vocabulary and the depth of
the questions is unprecedented. For example, in the ARC question corpus of science questions, the average question length is 22 words using a vocabulary of over 6300 distinct (stemmed) words. Finally, the questions often test scientific knowledge by applying it to everyday situations and thus require aspects of common sense. For example, consider the question: Which equipment will best separate a mixture of iron filings and black pepper? To answer this kind of question robustly, it is not sufficient to understand magnetism. Aristo also needs to have some model of "black pepper" and "mixture" because the answer would be different if the iron filings were submerged in a bottle of water. Aristo thus serves as a unique "poster child" for the remarkable and rapid advances achieved by leveraging contextual word-embedding models in NLP.
Machine Understanding of Textbooks
Within NLP, machine understanding of textbooks is a grand AI challenge that dates back to the '70s, and was reinvigorated in Raj Reddy's 1988 AAAI Presidential Address and subsequent writing (Reddy, 1988, 2003). However, progress on this challenge has a checkered history. Early attempts side-stepped the natural language understanding (NLU) task, in the belief that the main challenge lay in problem-solving. For example, Larkin et al. (1980) manually encoded a physics textbook chapter as a set of rules that could then be used for question answering. Subsequent attempts to automate the reading task were unsuccessful, and the language task itself has emerged as a major challenge for AI.
In recent years there has been substantial progress in systems that can find factual answers in text, starting with IBM's Watson system (Ferrucci et al., 2010), and now with high-performing neural systems that can answer short questions provided they are given a text that contains the answer (e.g., Seo et al., 2016;Wang et al., 2018). The work presented here continues along this trajectory, but aims to also answer questions where the answer may not be written down explicitly. While not a full solution to the textbook grand challenge, this work is thus a further step along this path.
A Brief History of Aristo
Project Aristo emerged from the late Paul Allen's longstanding dream of a Digital Aristotle, an "easy-to-use, all-encompassing knowledge storehouse...to advance the field of AI." (Allen, 2012). Initially, a small pilot program in 2003 aimed to encode 70 pages of a chemistry textbook and answer the questions at the end of the chapter. The pilot was considered successful (Friedland et al., 2004), with the significant caveat that both text and questions were manually encoded, side-stepping the natural language task, similar to earlier efforts. A subsequent larger program, called Project Halo, developed tools allowing domain experts to rapidly enter knowledge into the system. However, despite substantial progress (Gunning et al., 2010; Chaudhri et al., 2013), the project was ultimately unable to scale to reliably acquire textbook knowledge, and was unable to handle questions expressed in full natural language.
In 2013, with the creation of the Allen Institute for Artificial Intelligence (AI2), the project was rethought and relaunched as Project Aristo (connoting Aristotle as a child), designed to avoid earlier mistakes. In particular: handling natural language became a central focus; most knowledge was to be acquired automatically (not manually); machine learning was to play a central role; questions were to be answered exactly as written; and the project restarted at elementary-level science (rather than college-level).
The metric progress of the Aristo system on the Regents 8th Grade exams (non-diagram, multiple choice part, for a hidden, held-out test set) is shown in Figure 2. The figure shows the variety of techniques attempted, and mirrors the rapidly changing trajectory of the Natural Language Processing (NLP) field in general. Early work was dominated by information retrieval, statistical, and automated rule extraction and reasoning methods (Clark et al., 2014; Khot et al., 2017; Khashabi et al., 2018). Later work has harnessed state-of-the-art tools for large-scale language modeling and deep learning (Trivedi et al., 2019; Tandon et al., 2018), which have come to dominate the performance of the overall system and reflect the stunning progress of the field of NLP as a whole.
The Aristo System
We now describe the architecture of Aristo, and provide a brief summary of the solvers it uses.
Overview
The current configuration of Aristo comprises eight solvers, described shortly, each of which attempts to answer a multiple choice question. To study particular phenomena and develop solvers, the project has created larger datasets to amplify and study different problems, resulting in 10 new datasets 2 and 5 large knowledge resources 3 for the community. The solvers can be loosely grouped into:
1. Statistical and information retrieval methods
2. Reasoning methods
3. Large-scale language model methods
Over the life of the project, the relative importance of the methods has shifted towards large-scale language methods.
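As a rough illustration of how a question might be dispatched to several such solvers, the sketch below averages each option's confidence across solvers and picks the best option. The SolverResult structure and the simple averaging rule are assumptions made only for illustration; Aristo's actual combination of solver outputs is not described in this excerpt.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SolverResult:
    option_scores: List[float]   # one confidence value per answer option

def ensemble_answer(question: str, options: List[str],
                    solvers: List[Callable[[str, List[str]], SolverResult]]) -> int:
    """Naive combination: average each option's score across solvers and pick the best.
    This is only a sketch of dispatching one question to several solvers; it is not
    Aristo's actual combination step."""
    totals = [0.0] * len(options)
    for solve in solvers:
        result = solve(question, options)
        for i, s in enumerate(result.option_scores):
            totals[i] += s
    return max(range(len(options)), key=lambda i: totals[i])
```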
Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus (5 × 10^10 tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts.
Information Retrieval and Statistics
Three solvers use information retrieval (IR) and statistical measures to select answers. These methods are particularly effective for "lookup" questions where an answer is explicitly stated in the Aristo corpus.
The IR solver searches to see if the question along with an answer option is explicitly stated in the corpus, and returns the confidence that such a statement was found. To do this, for each answer option a_i, it sends q + a_i as a query to a search engine (we use ElasticSearch), and returns the search engine's score for the top retrieved sentence s, where s also has at least one non-stopword overlap with q, and at least one with a_i. This ensures s has some relevance to both q and a_i. This is repeated for all options a_i to score them all, and the option with the highest score is selected. Further details are available in .
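A rough sketch of this style of option scoring is shown below. The search(query, k) helper (returning (sentence, score) pairs from a corpus index, e.g. one backed by ElasticSearch), the stopword list, and the overlap check are assumptions for illustration only; the actual Aristo implementation may differ.

```python
# Minimal sketch of an IR-style option scorer (illustrative; not Aristo's actual code).

STOPWORDS = {"the", "a", "an", "of", "is", "which", "what", "to", "in", "and"}

def content_words(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

def ir_score(question, option, search, k=20):
    """Return the retrieval score of the best sentence relevant to both q and a_i."""
    q_words, a_words = content_words(question), content_words(option)
    best = 0.0
    for sentence, score in search(question + " " + option, k):
        s_words = content_words(sentence)
        # require at least one non-stopword overlap with the question and with the option
        if s_words & q_words and s_words & a_words:
            best = max(best, score)
    return best

def ir_answer(question, options, search):
    scores = [ir_score(question, opt, search) for opt in options]
    return max(range(len(options)), key=lambda i: scores[i])
```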
The PMI solver uses pointwise mutual information (Church and Hanks, 1989) to measure the strength of the associations between parts of q and parts of a_i. Given a large corpus C, PMI for two n-grams x and y is defined as PMI(x, y) = log( p(x, y) / (p(x) p(y)) ). Here p(x, y) is the joint probability that x and y occur together in C, within a certain window of text (we use a 10 word window). The term p(x)p(y), on the other hand, represents the probability with which x and y would occur together if they were statistically independent. The ratio of p(x, y) to p(x)p(y) is thus the ratio of the observed co-occurrence to the expected co-occurrence. The larger this ratio, the stronger the association between x and y. The solver extracts unigrams, bigrams, trigrams, and skip-bigrams from the question q and each answer option a_i. It outputs the answer with the largest average PMI, calculated over all pairs of question n-grams and answer option n-grams. Further details are available in .
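The PMI computation itself is straightforward once co-occurrence counts are available. The sketch below assumes count dictionaries precomputed over a fixed-size text window; the function names and the count representation are illustrative, not the solver's actual code.

```python
import math

def pmi_from_counts(count_x, count_y, count_xy, total):
    """PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ), with probabilities estimated from counts."""
    p_x, p_y, p_xy = count_x / total, count_y / total, count_xy / total
    if p_xy == 0:
        return float("-inf")
    return math.log(p_xy / (p_x * p_y))

def pmi_option_score(question_ngrams, option_ngrams, unigram_counts, pair_counts, total):
    """Average PMI over all (question n-gram, option n-gram) pairs.
    The count dictionaries are assumed to be precomputed from a 10-word co-occurrence window."""
    scores = []
    for qx in question_ngrams:
        for ay in option_ngrams:
            scores.append(pmi_from_counts(unigram_counts.get(qx, 0),
                                          unigram_counts.get(ay, 0),
                                          pair_counts.get((qx, ay), 0),
                                          total))
    return sum(scores) / len(scores) if scores else float("-inf")
```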
Finally, ACME (Abstract-Concrete Mapping Engine) searches for a cohesive link between a question q and candidate answer a_i using a large knowledge base of vector spaces that relate words in language to a set of 5000 scientific terms enumerated in a term bank. ACME uses three types of vector spaces: terminology space, word space, and sentence space. Terminology space is designed for finding a term in the term bank that links a question to a candidate answer with strong lexical cohesion. Word space is designed to characterize a word by the context in which the word appears. Sentence space is designed to characterize a sentence by the words that it contains. The key insight in ACME is that we can better assess lexical cohesion of a question and answer by pivoting through scientific terminology, rather than by simple co-occurrence frequencies of question and answer words. Further details are provided in (Turney, 2017).
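The pivoting idea can be sketched with cosine similarities: score a (question, option) pair by how strongly both attach to some science term, rather than by direct word overlap. The vector representations and the min/max scoring rule below are simplified stand-ins for ACME's actual resources and scoring.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def acme_style_score(q_vec, a_vec, term_vecs):
    """Illustrative pivot through a term bank: the best term is one that is cohesive
    with both the question and the answer option (a simplification of ACME)."""
    best = -1.0
    for _term, t_vec in term_vecs.items():
        link = min(cosine(q_vec, t_vec), cosine(a_vec, t_vec))  # both sides must attach
        best = max(best, link)
    return best
```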
These solvers together are particularly good at "lookup" questions where an answer is explicitly written down in the Aristo Corpus. For example, they correctly answer:
Infections may be caused by (1) mutations (2) microorganisms [correct] (3) toxic substances (4) climate changes
as the corpus contains the sentence "Products contaminated with microorganisms may cause infection." (for the IR solver), as well as many other sentences mentioning both "infection" and "microorganisms" together (hence they are highly correlated, for the PMI solver), and both words are strongly correlated with the term "microorganism" (ACME).
Reasoning Methods
The TupleInference solver uses semi-structured knowledge in the form of tuples, extracted via Open Information Extraction (Open IE) (Banko et al., 2007). Two sources of tuples are used:
• A knowledge base of 263k tuples (T), extracted from the Aristo Corpus plus several domain-targeted sources, using training questions to retrieve science-relevant information.
• On-the-fly tuples, extracted at question-answering time from the same corpus, to handle questions from new domains not covered by the training set.
TupleInference treats the reasoning task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 3 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) (Dagan et al., 2010), however, we must score alignments between the tuples retrieved from the two sources above and a (potentially multi-sentence) multiple choice question qa.
The qterms, answer choices, and tuple fields (i.e., subject, predicate, objects) form the set of possible vertices, V, of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, E. The support graph, G_S(V_S, E_S), is a subgraph of G(V, E), where V_S and E_S denote "active" nodes and edges, respectively. We define an ILP optimization model to search for the best support graph (i.e., the active nodes and edges), where a set of constraints defines the structure of a valid support graph (e.g., an edge must connect an answer choice to a tuple) and the objective defines the preferred properties (e.g., active edges should have high word-overlap). Details of the constraints are given in (Khot et al., 2017). We then use the SCIP ILP optimization engine (Achterberg, 2009) to solve the ILP model. To obtain the score for each answer choice a_i, we force the node for that choice, x_{a_i}, to be active and use the objective function value of the ILP model as the score. The answer choice with the highest score is selected. Further details are available in (Khot et al., 2017).
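The per-option scoring loop can be sketched as follows; `build_support_graph_ilp`, `force_active`, and `solve` are hypothetical stand-ins for the ILP construction and the SCIP call of Khot et al. (2017), and only the "force the answer node active, read off the objective" pattern from the text is shown.

```python
# Illustrative scoring loop for TupleInference (not the actual implementation).

def score_answer_options(question_terms, options, tuples,
                         build_support_graph_ilp, solve):
    scores = []
    for option in options:
        ilp = build_support_graph_ilp(question_terms, option, tuples)
        ilp.force_active(option)           # constrain the answer-choice node to be active
        solution = solve(ilp)              # e.g. via an ILP engine such as SCIP
        scores.append(solution.objective)  # objective value becomes the option's score
    best = max(range(len(options)), key=lambda i: scores[i])
    return best, scores
```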
Multee (Trivedi et al., 2019) is a solver that repurposes existing textual entailment tools for question answering. Textual entailment (TE) is the task of assessing if one text implies another, and there are several high-performing TE systems now available. However, question answering often requires reasoning over multiple texts, and so Multee learns to reason with multiple individual entailment decisions. Specifically, Multee contains two components: (i) a sentence relevance model, which learns to focus on the relevant sentences, and (ii) a multi-layer aggregator, which uses an entailment model to obtain multiple layers of question-relevant representations for the premises and then composes them using the sentence-level scores from the relevance model. Finding relevant sentences is a form of local entailment between each premise and the answer hypothesis, whereas aggregating question-relevant representations is a form of global entailment between all premises and the answer hypothesis. This means we can effectively repurpose the same pre-trained entailment function f_e for both components. Details of how this is done are given in (Trivedi et al., 2019). An example of a typical question and scored, retrieved evidence is shown in Figure 4. Further details are available in (Trivedi et al., 2019).

Figure 4: Multee retrieves potentially relevant sentences, then for each answer option in turn, assesses the degree to which each sentence entails that answer. A multi-layered aggregator then combines this (weighted) evidence from each sentence. In this case, the strongest overall support is found for option "(C) table salt", so it is selected.
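To make the aggregation step concrete, the following sketch (an editorial illustration, not Multee's actual architecture) combines per-sentence entailment probabilities with softmax-normalized relevance weights; the single weighted sum stands in for the multi-layer aggregator described above.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_entailment(relevance_logits, entailment_probs):
    """Combine per-sentence entailment scores with sentence-relevance weights.

    relevance_logits: one logit per retrieved sentence (relevance model output).
    entailment_probs: P(sentence entails the answer hypothesis), per sentence.
    Returns a single support score for one answer option.
    """
    weights = softmax(np.asarray(relevance_logits, dtype=float))
    return float(np.dot(weights, np.asarray(entailment_probs, dtype=float)))

# Example with three retrieved sentences; the option with the largest score wins.
print(aggregate_entailment([2.0, 0.1, -1.0], [0.9, 0.4, 0.2]))
```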
The QR (qualitative reasoning) solver is designed to answer questions about qualitative influence, i.e., how more/less of one quantity affects another (see Figure 5). Unlike the other solvers in Aristo, it is a specialist solver that only fires for a small subset of questions that ask about qualitative change, identified using (regex) language patterns.
The solver uses a knowledge base K of 50,000 (textual) statements about qualitative influence, e.g., "A sunscreen with a higher SPF protects the skin longer.", extracted automatically from a large corpus. It has then been trained to apply such statements to qualitative questions, e.g.,
John was looking at sunscreen at the retail store. He noticed that sunscreens that had lower SPF would offer protection that is (A) Longer (B) Shorter [correct]
In particular, the system learns through training to track the polarity of influences: for example, if we were to change "lower" to "higher" in the above example, the system will change its answer choice. Another example is shown in Figure 5. Again, if "melted" were changed to "cooled", the system would change its choice to "(B) less energy".

Figure 5: Given a question about a qualitative relationship (How does one increase/decrease affect another?), the qualitative reasoning solver retrieves a relevant qualitative rule from a large database. It then assesses which answer option is best implied by that rule. In this case, as the rule states more heat implies faster movement, option "(C)... move more rapidly" is scored highest and selected, including recognizing that "heat" and "melted", and "faster" and "more rapidly" align.
The QR solver learns to reason using the BERT language model (Devlin et al., 2018), using the approach described in Section 3.4 below. It is fine-tuned on 3800 crowdsourced qualitative questions illustrating the kinds of manipulation required, along with the associated qualitative knowledge sentence. The resulting system is able to answer questions that include significant linguistic and knowledge gaps between the question and retrieved knowledge (Table 1).
Because the number of qualitative questions is small in our dataset, the solver does not significantly change Aristo's performance, although it does provide an explanation for its answers. For this reason we omit it in the results later. Further details and a detailed separate evaluation are available in (Tafjord et al., 2019).
Large-Scale Language Models
The field of NLP has advanced substantially with the advent of large-scale language models such as ELMo (Peters et al., 2018), ULMFit (Howard and Ruder, 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018), and RoBERTa (Liu et al., 2019b). These models are trained to perform various language prediction tasks such as predicting a missing word or the next sentence, using large amounts of text (e.g., BERT was trained on Wikipedia + the Google Book Corpus of 10,000 books). They can also be fine-tuned to new language prediction tasks, such as question-answering, and have been remarkably successful in the few months that they have been available.

Table 1: Examples of linguistic and semantic gaps between knowledge K_i (left) and question Q_i (right) that need to be bridged for answering qualitative questions.

Comparatives:
"warmer" ↔ "increase temperature"
"more difficult" ↔ "slower"
"need more time" ↔ "have lesser amount"
"decreased distance" ↔ "hugged"
"cost increases" ↔ "more costly"
"increase mass" ↔ "add extra"
"more tightly packed" ↔ "add more"

Commonsense Knowledge:
"more land development" ↔ "city grow larger"
"not moving" ↔ "sits on the sidelines"
"caught early" ↔ "sooner treated"
"lets more light in" ↔ "get a better picture"
"stronger electrostatic force" ↔ "hairs stand up more"
"less air pressure" ↔ "more difficult to breathe"
"more photosynthesis" ↔ "increase sunlight"

Discrete Values:
"stronger acid" ↔ "vinegar" vs. "tap water"
"more energy" ↔ "ripple" vs. "tidal wave"
"closer to Earth" ↔ "ball on Earth" vs. "ball in space"
"mass" ↔ "baseball" vs. "basketball"
"rougher" ↔ "notebook paper" vs. "sandpaper"
"heavier" ↔ "small wagon" vs. "eighteen wheeler"
We apply BERT to multiple choice questions by treating the task as classification: Given a question q with answer options a_i and optional background knowledge K_i, we provide it to BERT as:

[CLS] K_i [SEP] q [SEP] a_i [SEP]
for each option (only the answer option is assigned as the second BERT "segment"). The [CLS] output token for each answer option is projected to a single logit and fed through a softmax layer, trained using cross-entropy loss against the correct answer.
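The classification setup just described can be sketched as follows; the encoder call signature, the hidden size, and the helper names are illustrative assumptions rather than the exact AristoBERT code.

```python
import torch
import torch.nn as nn

class MultipleChoiceHead(nn.Module):
    """Project the [CLS] vector of each (knowledge, question, option) input to a
    single logit, giving one logit per option per question. The encoder is assumed
    to return a [CLS]-style vector per input; hidden_size=1024 is a placeholder."""

    def __init__(self, encoder, hidden_size=1024):
        super().__init__()
        self.encoder = encoder                 # e.g. a BERT-large wrapper (assumed interface)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, token_type_ids, attention_mask):
        # input_ids: [batch, num_options, seq_len]; fold the options into the batch
        b, k, n = input_ids.shape
        cls = self.encoder(input_ids.view(b * k, n),
                           token_type_ids.view(b * k, n),
                           attention_mask.view(b * k, n))   # [b*k, hidden]
        logits = self.classifier(cls).view(b, k)             # one logit per option
        return logits   # trained with cross-entropy against the correct option index

def format_option(knowledge_ids, question_ids, option_ids, cls_id, sep_id):
    """Build [CLS] K [SEP] q [SEP] a [SEP]; only the option tokens get segment id 1."""
    ids = [cls_id] + knowledge_ids + [sep_id] + question_ids + [sep_id] + option_ids + [sep_id]
    segs = [0] * (len(ids) - len(option_ids) - 1) + [1] * (len(option_ids) + 1)
    return ids, segs
```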
The AristoBERT solver uses three methods to apply BERT more effectively. First, we retrieve and supply background knowledge along with the question when using BERT. This provides the potential for BERT to "read" that background knowledge and apply it to the question, although the exact nature of how it uses background knowledge is more complex and less interpretable. Second, we fine-tune BERT using a curriculum of several datasets, including some that are not science related. Finally, we ensemble different variants of BERT together.
Background Knowledge
For background knowledge K_i we use up to 10 of the top sentences found by the IR solver, truncated to fit into the BERT max tokens setting (we use 256). We optimize the final fine-tuning using scores on the development set, performing a small hyperparameter search as suggested in the original BERT paper (Devlin et al., 2018).
Curriculum Fine-Tuning
Following earlier work on multi-step fine-tuning (Sun et al., 2019), we first fine-tune on the large (87866 qs) RACE training set (Lai et al., 2017), a challenging set of English comprehension multiple choice exams given in Chinese middle and high schools. We then further fine-tune on a collection of science multiple choice question sets:
• OpenBookQA train (4957 qs) (Mihaylov et al., 2018)
• ARC-Easy train (2251 qs)
• ARC-Challenge train (1119 qs)
• 22 Regents Living Environment exams (665 qs). 4
Ensembling
We repeat the above using three variants of BERT, the original BERT-large-cased and BERT-large-uncased, as well as the later released BERT-large-cased-whole-word-masking. 5 We also add a model trained without background knowledge and ensemble them using the combination solver described below.
The AristoRoBERTa solver takes advantage of the recent release of RoBERTa (Liu et al., 2019b), a high-performing and optimized derivative of BERT trained on significantly more text. In AristoRoBERTa, we simply replace the BERT model in AristoBERT with RoBERTa, repeating similar fine-tuning steps. We ensemble two versions together, namely with and without the first fine-tuning step using RACE.
Ensembling
Each solver outputs a non-negative confidence score for each of the answer options along with other optional features. The Combiner then produces a combined confidence score (between 0 and 1) using the following two-step approach.
In the first step, each solver is "calibrated" on the training set by learning a logistic regression classifier from each answer option to a correct/incorrect label. The features for an answer option i include the raw confidence score s_i as well as the score normalized across the answer options for a given question. We include two types of normalizations:
normal_i = s_i / Σ_j s_j        softmax_i = exp(s_i) / Σ_j exp(s_j)
Each solver can also provide other features capturing aspects of the question or the reasoning path. The output of this first-step classifier is then a calibrated confidence for each solver s and answer option i: calib_i^s = 1 / (1 + exp(-β_s · f_s)), where f_s is the solver-specific feature vector and β_s the associated feature weights.
The second step uses these calibrated confidences as (the only) features to a second logistic regression classifier from answer option to correct/incorrect, resulting in a final confidence in [0, 1], which is used to rank the answers:
confidence_i = 1 / (1 + exp(-β_0 - Σ_{s∈Solvers} β_s · calib_i^s))
Here, feature weights β s indicate the contribution of each solver to the final confidence. Empirically, this two-step approach yields more robust predictions given limited training data compared to a one-step approach where all solver features are fed directly into a single classification step.
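The two-step combination can be sketched with off-the-shelf logistic regression; the use of scikit-learn and the exact per-solver feature set are illustrative assumptions, not the production combiner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def normalized_features(raw_scores):
    """Per-question normalizations of one solver's option scores s_i."""
    s = np.asarray(raw_scores, dtype=float)
    normal = s / s.sum() if s.sum() > 0 else np.full_like(s, 1.0 / len(s))
    softmax = np.exp(s - s.max()); softmax /= softmax.sum()
    return np.stack([s, normal, softmax], axis=1)   # one feature row per option

# Step 1: calibrate each solver separately (option features -> correct/incorrect).
def calibrate_solver(option_features, labels):
    clf = LogisticRegression()
    clf.fit(option_features, labels)                # labels: 1 if the option is correct
    return clf

# Step 2: combine the calibrated confidences of all solvers into a final score.
def fit_combiner(calibrated_confidences, labels):
    # calibrated_confidences: [num_options_total, num_solvers]
    combiner = LogisticRegression()
    combiner.fit(calibrated_confidences, labels)
    return combiner

def final_confidences(combiner, calibrated_confidences):
    return combiner.predict_proba(calibrated_confidences)[:, 1]  # P(correct) in [0, 1]
```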
Experiments and Results
This section describes our precise experimental methodology followed by our results.
Experimental Methodology
Omitted Question Classes
In the experimental results reported below, we omitted questions that utilized diagrams. While these questions are frequent in the tests, they are outside of our focus on language and reasoning. Moreover, the diagrams are highly varied (see Figure 6) and, despite work that tackled narrow diagram types, e.g., food chains (Krishnamurthy et al., 2016), overall progress has been quite limited. We also omitted questions that require a direct answer (rather than selecting from multiple choices), for two reasons. First, after removing questions with diagrams, they are rare in the remainder. Of the 482 direct answer questions over 13 years of Regents 8th Grade Science exams, only 38 (<8%) do not involve a diagram. Second, they are complex, often requiring explanation and synthesis. Both diagram and direct-answer questions are natural topics for future work.
Dataset Formulation
We evaluate Aristo using several datasets of independently-authored science questions taken from standardized tests. Each dataset is divided into train, development, and test partitions, the test partitions being "blind", i.e., hidden to both the researchers and the Aristo system during training. All questions are taken verbatim from the original sources, with no rewording or modification. As mentioned earlier, we use only the non-diagram, multiple choice (NDMC) questions. We exclude questions with an associated diagram that is required to interpret the question. In the occasional case where two questions share the same preamble, the preamble is repeated for each question so they are independent. The Aristo solvers are trained using questions in the training partition (each solver is trained independently, as described earlier), and then the combination is fine-tuned using the development set.
The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. 6 The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in Clark et al. (2018). The datasets are publicly available 7. Dataset sizes are shown in Table 3. All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 (<0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.
For each question, the answer option with the highest overall confidence from Aristo's combination module is selected, scoring 1 point if the answer is correct, 0 otherwise. In the (very rare) case of N options having the same confidence (an N-way tie) that includes the correct option, the system receives 1/N points (equivalent to the asymptote of random guessing between the N).
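The scoring rule just described, including the 1/N credit for ties, is captured by the following minimal sketch.

```python
def score_question(confidences, correct_index):
    """1 point for a uniquely highest correct choice; 1/N for an N-way tie that
    includes the correct option; 0 otherwise."""
    top = max(confidences)
    tied = [i for i, c in enumerate(confidences) if c == top]
    if correct_index in tied:
        return 1.0 / len(tied)
    return 0.0

# e.g. score_question([0.2, 0.7, 0.7, 0.1], correct_index=1) == 0.5
```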
Main Results
The results are summarized in Table 2, showing the performance of the solvers individually, and their combination in the full Aristo system. Note that Aristo is a single system run on the five datasets (not retuned for each dataset in turn).
Most notably, Aristo's scores on the Regents Exams far exceed earlier performances (e.g., Schoenick et al., 2016), and represent a new high-point on science questions.
In addition, the results show the dramatic impact of new language modeling technology, embodied in AristoBERT and AristoRoBERTa, the scores for these two solvers dominating the performance of the overall system. Even on the ARC-Challenge questions, containing a wide variety of difficult questions, the language modeling based solvers dominate. The general increasing trend of solver scores from left to right in the table loosely reflects the progression of the NLP field over the six years of the project.

Table 4: Aristo's score on the three most recent years of Regents Science (2017-19), not part of the hidden benchmark.
To check that we have not overfit to our data, we also ran Aristo on the most recent years of the Regents Grade Exams (4th and 8th Grade), years 2017-19, that were unavailable at the start of the project and were not part of our datasets. The results are shown in Table 4, showing a score similar to those on our larger datasets, suggesting the system is not overfit.
On the entire exam, the NY State Education Department considers a score of 65% as "Meeting the Standards", and over 85% as "Meeting the Standards with Distinction" 8 . If this rubric applies equally to the NDMC subset we have studied, this would mean Aristo has met the standard with distinction in 8th Grade Science.
Answer Only Performance
Several authors have observed that for some multiple choice datasets, systems can still perform well even when ignoring the question body and looking only at the answer options (Gururangan et al., 2018; Poliak et al., 2018). This surprising result is particularly true for crowdsourced datasets, where workers may use stock words or phrases (e.g., "not") in incorrect answer options that give them away. A dataset with this characteristic is clearly problematic, as systems can spot such cues and do well without even reading the question.
To measure this phenomenon on our datasets, we trained and tested a new AristoRoBERTa model giving it only the answer options (no question body nor retrieved knowledge). The results on the test partition are shown in Table 5. We find scores significantly above random (25%), in particular for the 12th Grade set which has longer answers. But the scores are sufficiently low to indicate the datasets are relatively free of annotation artifacts that would allow the system to often guess the answer independent of the question. This desirable feature is likely due to the fact these are natural science questions, carefully crafted by experts for inclusion in exams, rather than mass-produced through crowdsourcing.
Adversarial Answer Options
One way of testing robustness in multiple choice is to change or add incorrect answer options, and see if the system's performance degrades. If a system has mastery of the material, we would expect its score to be relatively unaffected by such modifications. To explore this, we investigated adversarially adding extra incorrect options, i.e., searching for answer options that might confuse the system, using AristoRoBERTa 9, and adding them as extra choices to the existing questions.
To do this, for each question, we collect a large (≈ 100) number of candidate additional answer choices using the correct answers to other questions in the same dataset (and train/test split), where the top 100 are chosen by a superficial alignment score (features such as answer length and punctuation usage). We then re-rank these additional choices using AristoRoBERTa, take the top N, and add them to the original K (typically 4) choices for the question.
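A minimal sketch of this construction is shown below; the `superficial_similarity` features and the `model_score` interface are placeholders for the alignment features and the AristoRoBERTa re-ranking described above.

```python
# Illustrative pipeline for building adversarial 8-way questions (not the actual code).

def superficial_similarity(candidate, original_options):
    """Toy stand-in for the answer-length / punctuation alignment features."""
    ref_len = sum(len(o) for o in original_options) / len(original_options)
    length_score = -abs(len(candidate) - ref_len)
    punct_score = -(candidate.count(",") + candidate.count("."))
    return length_score + punct_score

def adversarial_options(question, original_options, answer_pool, model_score,
                        pool_size=100, num_extra=4):
    # 1. Shortlist candidates (correct answers of *other* questions) by superficial match.
    shortlist = sorted(answer_pool,
                       key=lambda c: superficial_similarity(c, original_options),
                       reverse=True)[:pool_size]
    # 2. Re-rank the shortlist by how strongly the model is drawn to each candidate.
    ranked = sorted(shortlist, key=lambda c: model_score(question, c), reverse=True)
    # 3. Append the top-N distractors to the original choices.
    return original_options + ranked[:num_extra]
```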
If we add N=4 extra choices to the normal 4-way questions, they become 8-way multiple choice, and performance drops dramatically (over 40 percentage points), albeit unfairly as we have by definition added choices that confuse the system. We then train the model further on this 8-way adversarial dataset, a process known as inoculation (Liu et al., 2019a). After further training, we still find a drop, but significantly less (around 10 percentage points absolute, 13.8% relative, Table 6), even though many of the new distractor choices would be easy for a human to rule out.
9 For computational tractability, we slightly modify the way background knowledge is retrieved for this experiment (only), namely using a search query of just the question body q (rather than question + answer option q + a_i).

For example, while the solver gets the right answer to the following question:
The condition of the air outdoors at a certain time of day is known as (A) friction (B) light (C) force (D) weather [selected, correct]
it fails for the 8-way variant:
The condition of the air outdoors at a certain time of day is known as (A) friction (B) light (C) force (D) weather [correct] (Q) joule (R) gradient [selected] (S) trench (T) add heat
These results show that while Aristo performs well, it still has some blind spots that can be artificially uncovered through adversarial methods such as this.
Related Work
This section describes related work on answering standardized-test questions, and on math word problems in particular. It provides an overview rather than exhaustive citations.
Standardized Tests
Standardized tests have long been proposed as challenge problems for AI (e.g., Bringsjord and Schimanski, 2003;Brachman et al., 2005;Piatetsky-Shapiro et al., 2006), as they appear to require significant advances in AI technology while also being accessible, measurable, understandable, and motivating.
Earlier work on standardized tests focused on specialized tasks, for example, SAT word analogies (Turney, 2006), GRE word antonyms (Mohammad et al., 2013), and TOEFL synonyms (Landauer and Dumais, 1997). More recently, there have been attempts at building systems to pass university entrance exams. Under NII's Todai project, several systems were developed for parts of the University of Tokyo Entrance Exam, including maths, physics, English, and history (Strickland, 2013;NII, 2013;Fujita et al., 2014), although in some cases questions were modified or annotated before being given to the systems (e.g., Matsuzaki et al., 2014). Similarly, a smaller project worked on passing the Gaokao (China's college entrance exam) (e.g., Cheng et al., 2016;Guo et al., 2017). The Todai project was reported as ended in 2016, in part because of the challenges of building a machine that could "grasp meaning in a broad spectrum" (Mott, 2016).
Math Word Problems
Substantial progress has been achieved on math word problems. On plane geometry questions, Seo et al. (2015) demonstrated an approach that achieves 61% accuracy on SAT practice questions. The Euclid system (Hopkins et al., 2017) achieved a 43% recall and 91% precision on SAT "closed-vocabulary" algebra questions, a limited subset of questions that nonetheless constitutes approximately 45% of a typical math SAT exam. Closed-vocabulary questions are those that do not reference real-world situations (e.g., "what is the largest prime smaller than 100?" or "Twice the product of x and y is 8. What is the square of x times y?"). Work on open-world math questions has continued, but results on standardized tests have not been reported and thus it is difficult to benchmark the progress relative to human performance. See Amini et al. (2019) for a recent snapshot of the state of the art, and references to the literature on this problem.
Summary and Conclusion
Answering science questions is a long-standing AI grand challenge (Reddy, 1988;Friedland et al., 2004). This paper reports on Aristo-the first system to achieve a score of over 90% on the non-diagram, multiple choice part of the New York Regents 8th Grade Science Exam, demonstrating that modern NLP methods can result in mastery of this task. Although Aristo only answers multiple choice questions without diagrams, and operates only in the domain of science, it nevertheless represents an important milestone towards systems that can read and understand. The momentum on this task has been remarkable, with accuracy moving from roughly 60% to over 90% in just three years. Finally, the use of independently authored questions from a standardized test allows us to benchmark AI performance relative to human students.
Beyond the use of a broad vocabulary and scientific concepts, many of the benchmark questions intuitively appear to require reasoning to answer (e.g., Figure 5). To what extent is Aristo reasoning to answer questions? For many years in AI, reasoning was thought of as the discrete, symbolic manipulation of sentences expressed in a formally designed language (Brachman and Levesque, 1985;Genesereth and Nilsson, 2012). With the advent of deep learning, this notion of reasoning has shifted, with machines performing challenging tasks using neural architectures rather than explicit representation languages. Today, we do not have a sufficiently fine-grained notion of reasoning to answer this question precisely, but we can observe surprising performance on answering science questions. This suggests that the machine has indeed learned something about language and the world, and how to manipulate that knowledge, albeit neither symbolically nor discretely.
Although an important milestone, this work is only a step on the long road toward a machine that has a deep understanding of science and achieves Paul Allen's original dream of a Digital Aristotle. A machine that has fully understood a textbook should not only be able to answer the multiple choice questions at the end of the chapter-it should also be able to generate both short and long answers to direct questions; it should be able to perform constructive tasks, e.g., designing an experiment for a particular hypothesis; it should be able to explain its answers in natural language and discuss them with a user; and it should be able to learn directly from an expert who can identify and correct the machine's misunderstandings. These are all ambitious tasks still largely beyond the current technology, but with the rapid progress happening in NLP and AI, solutions may arrive sooner than we expect.
Figure 2: Aristo's scores on Regents 8th Grade Science (non-diagram, multiple choice) over time (held-out test set).
Figure 3: The TupleInference solver retrieves tuples relevant to the question, and constructs a support graph for each answer option. Here, the support graph for the choice "(A) Moon" is shown. The tuple facts "...Moon reflect light...", "...Moon is a ...satellite", and "Moon orbits planets" all support this answer, addressing different parts of the question. This support graph is scored highest, hence option "(A) Moon" is chosen.
Figure 6: A sample of the wide variety of diagrams used in the Regents exams, including food chains, pictures, tables, graphs, circuits, maps, temporal processes, cross-sections, pie charts, and flow diagrams.
Table 2: This table shows the results of each of the Aristo solvers, as well as the overall Aristo system, on each of the test sets. Most notably, Aristo achieves 91.6% accuracy in 8th Grade, and exceeds 83% in 12th Grade. ("Num Q" refers to the number of questions in each test set.) Note that Aristo is a single system, run unchanged on each dataset (not retuned for each dataset).

Test Set        Num Q   IR      PMI     ACME    TupInf  Multee  AristoBERT  AristoRoBERTa  ARISTO
Regents 4th     109     64.45   66.28   67.89   63.53   69.72   86.24       88.07          89.91
Regents 8th     119     66.60   69.12   67.65   61.41   68.91   86.55       88.24          91.60
Regents 12th    632     41.22   46.95   41.57   35.35   56.01   75.47       82.28          83.54
ARC-Easy        2376    74.48   77.76   66.60   57.73   64.69   81.78       82.88          86.99
ARC-Challenge   1172    n/a†    n/a†    20.44   23.73   37.36   57.59       64.59          64.33

† ARC-Challenge is defined using IR and PMI results, i.e., are questions that by definition both IR and PMI get wrong.

Table 3: Dataset partition sizes (number of questions).

Dataset         Train   Dev     Test    Total
Regents 4th     127     20      109     256
Regents 8th     125     25      119     269
Regents 12th    665     282     632     1579
ARC-Easy        2251    570     2376    5197
ARC-Challenge   1119    299     1172    2590
Totals†         4035    1151    4180    9366

† ARC (Easy + Challenge) includes Regents 4th and 8th as a subset.
8 https://www.nysedregents.org/grade8/science/618/home.html

Table 5: Scores when looking at the answer options only for (retrained) AristoRoBERTa (no ensembling), compared with using the full questions. The (desirably) low scores/large drops indicate it is hard to guess the answer without reading the question.

Test dataset    "Answer only" score    % Drop (relative)
Regents 4th     38.53                  56.7
Regents 8th     37.82                  56.3
Regents 12th    47.94                  41.2
ARC-Easy        36.17                  55.9
ARC-Challenge   35.92                  44.7
All             37.11                  48.5
Table 6: Scores on the original 4-way multiple choice questions, and (after retraining) on adversarially generated 8-way multiple choice versions, for AristoRoBERTa (no ensembling).

Test dataset    4-way MC    Adversarial 8-way MC    % drop (relative)
Regents 4th     87.1        76.1                    12.6
Regents 8th     78.9        76.4                    3.1
Regents 12th    75.3        58.0                    22.9
ARC-Easy        74.1        65.7                    11.3
ARC-Challenge   55.5        47.7                    14.0
ALL             69.1        59.5                    13.8
6 See https://www.nysedregents.org/ for the original exams.
7 http://data.allenai.org/arc/, and the 12th Grade Regents data is available on request.
Acknowledgements
We gratefully acknowledge the many other contributors to this work, including Niranjan Balasubramanian, Matt Gardner, Peter Jansen, Jayant Krishnamurthy, Souvik Kundu, Todor Mihaylov, Harsh Trivedi, Peter Turney, and the Beaker team at AI2.
References
T. Achterberg. SCIP: Solving constraint integer programs. Mathematical Programming Computation, 1(1):1-41, 2009.
P. Allen. Idea Man: A memoir by the cofounder of Microsoft. Penguin, 2012.
A. Amini, S. Gabriel, P. Lin, R. Koncel-Kedziorski, Y. Choi, and H. Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In NAACL-HLT, 2019.
J. Aron. Software tricks people into thinking it is human. New Scientist, (Issue 2829), Sept 2011.
M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. Open information extraction from the web. In IJCAI, 2007.
BBC. Computer AI passes Turing Test in 'world first'. BBC News, 2014. http://www.bbc.com/news/technology-27762088.
R. Brachman, D. Gunning, S. Bringsjord, M. Genesereth, L. Hirschman, and L. Ferro. Selected grand challenges in cognitive science. Technical report, MITRE Technical Report 05-1218, Bedford, MA: The MITRE Corporation, 2005.
R. J. Brachman and H. J. Levesque. Readings in knowledge representation. Morgan Kaufmann Publishers Inc., 1985.
S. Bringsjord and B. Schimanski. What is artificial intelligence? Psychometric AI as an answer. In IJCAI, pp. 887-893. Citeseer, 2003.
V. K. Chaudhri, B. Cheng, A. Overtholtzer, J. Roschelle, A. Spaulding, P. Clark, M. Greaves, and D. Gunning. Inquire Biology: A textbook that answers questions. AI Magazine, 34(3):55-72, 2013.
G. Cheng, W. Zhu, Z. Wang, J. Chen, and Y. Qu. Taking up the gaokao challenge: An information retrieval approach. In IJCAI, pp. 2479-2485, 2016.
J. Choi, J. Krishnamurthy, A. Kembhavi, and A. Farhadi. Structured set matching networks for one-shot part labeling. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3627-3636.
K. W. Church and P. Hanks. Word association norms, mutual information and lexicography. In 27th ACL, pp. 76-83, 1989.
P. Clark, N. Balasubramanian, S. Bhakthavatsalam, K. Humphreys, J. Kinkead, A. Sabharwal, and O. Tafjord. Automatic construction of inference-supporting knowledge bases. In 4th Workshop on Automated Knowledge Base Construction (AKBC), Montreal, Canada, Dec. 2014.
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. ArXiv, abs/1803.05457, 2018.
P. Clark and O. Etzioni. My computer is an honor student - But how intelligent is it? Standardized tests as a measure of AI. AI Magazine, 37(1):5-12, 2016.
P. Clark, O. Etzioni, T. Khot, A. Sabharwal, O. Tafjord, P. D. Turney, and D. Khashabi. Combining retrieval, statistics, and inference to answer elementary science questions. In AAAI, pp. 2580-2586, 2016.
P. Clark, P. Harrison, and N. Balasubramanian. A study of the knowledge base requirements for passing an elementary science test. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, pp. 37-42. ACM, 2013.
I. Dagan, B. Dolan, B. Magnini, and D. Roth. Recognizing textual entailment: Rational, evaluation and approaches - erratum. Natural Language Engineering, 16(01):105-105, 2010.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2018.
D. Ferrucci, E. Brown, J. Chu-Carroll, J. Fan, D. Gondek, A. A. Kalyanpur, A. Lally, J. W. Murdock, E. Nyberg, J. Prager, et al. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59-79, 2010.
N. S. Friedland, P. G. Allen, G. Matthews, M. Witbrock, D. Baxter, J. Curtis, B. Shepard, P. Miraglia, J. Angele, S. Staab, et al. Project Halo: Towards a digital Aristotle. AI Magazine, 25(4):29-29, 2004.
A. Fujita, A. Kameda, A. Kawazoe, and Y. Miyao. Overview of Todai robot project and evaluation framework of its NLP-based problem solving. In LREC, 2014.
M. R. Genesereth and N. J. Nilsson. Logical foundations of artificial intelligence. Morgan Kaufmann, 2012.
D. Gunning, V. K. Chaudhri, P. E. Clark, K. Barker, S.-Y. Chaw, M. Greaves, B. Grosof, A. Leung, D. D. McDonald, S. Mishra, et al. Project Halo update - Progress toward digital Aristotle. AI Magazine, 31(3):33-58, 2010.
S. Guo, X. Zeng, S. He, K. Liu, and J. Zhao. Which is the effective way for gaokao: Information retrieval or neural networks? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp. 111-120, 2017.
S. Gururangan, S. Swayamdipta, O. Levy, R. Schwartz, S. R. Bowman, and N. A. Smith. Annotation artifacts in natural language inference data. In NAACL, 2018.
M. Hopkins, C. Petrescu-Prahova, R. Levin, R. L. Bras, A. Herrasti, and V. Joshi. Beyond sentential semantic parsing: Tackling the math SAT with a cascade of tree transducers. In EMNLP, 2017.
J. Howard and S. Ruder. Universal language model fine-tuning for text classification. In ACL, 2018.
M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, July 2017. Association for Computational Linguistics.
D. Khashabi, T. Khot, A. Sabharwal, P. Clark, O. Etzioni, and D. Roth. Question answering via integer programming over semi-structured knowledge. In IJCAI, 2016.
D. Khashabi, T. Khot, A. Sabharwal, and D. Roth. Question answering as global reasoning over semantic abstractions. In AAAI, 2018.
T. Khot, A. Sabharwal, and P. F. Clark. Answering complex questions using open information extraction. In ACL, 2017.
J. Krishnamurthy, O. Tafjord, and A. Kembhavi. Semantic parsing to probabilistic programs for situated question answering. In EMNLP, 2016.
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. RACE: Large-scale reading comprehension dataset from examinations. In EMNLP, 2017.
T. K. Landauer and S. T. Dumais. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211, 1997.
J. H. Larkin, J. McDermott, D. P. Simon, and H. A. Simon. Models of competence in solving physics problems. Cognitive Science, 4:317-345, 1980.
N. F. Liu, R. Schwartz, and N. A. Smith. Inoculation by fine-tuning: A method for analyzing challenge datasets. ArXiv, abs/1904.02668, 2019a.
Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019b.
T. Matsuzaki, H. Iwane, H. Anai, and N. H. Arai. The most uncreative examinee: A first step toward wide coverage natural language math problem solving. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
T. Mihaylov, P. F. Clark, T. Khot, and A. Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. In EMNLP, 2018.
S. M. Mohammad, B. J. Dorr, G. Hirst, and P. D. Turney. Computing lexical contrast. Computational Linguistics, 39(3):555-590, 2013.
N. Mott. Todai robot gives up on getting into the University of Tokyo. Inverse, 2016. (https://www.inverse.com/article/23761-todai-robot-gives-up-university-tokyo).
NII. The Todai robot project. NII Today, 46, July 2013. (http://www.nii.ac.jp/userdata/results/pr data/NII Today/60 en/all.pdf).
M. E. Peters, M. Neumann, M. Iyyer, M. P. Gardner, C. Clark, K. Lee, and L. S. Zettlemoyer. Deep contextualized word representations. In NAACL, 2018.
G. Piatetsky-Shapiro, C. Djeraba, L. Getoor, R. Grossman, R. Feldman, and M. Zaki. What are the grand challenges for data mining? KDD-2006 panel report. ACM SIGKDD Explorations Newsletter, 8(2):70-77, 2006.
A. Poliak, J. Naradowsky, A. Haldar, R. Rudinger, and B. Van Durme. Hypothesis only baselines in natural language inference. In StarSem, 2018.
A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.
R. Reddy. Foundations and grand challenges of artificial intelligence: AAAI presidential address. AI Magazine, 9(4), 1988.
R. Reddy. Three open problems in AI. J. ACM, 50:83-86, 2003.
C. Schoenick, P. F. Clark, O. Tafjord, P. D. Turney, and O. Etzioni. Moving beyond the Turing Test with the Allen AI Science Challenge. CACM, 2016.
M. J. Seo, H. Hajishirzi, A. Farhadi, O. Etzioni, and C. Malcolm. Solving geometry problems: Combining text and diagram interpretation. In EMNLP, 2015.
M. J. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. Bidirectional attention flow for machine comprehension. ArXiv, abs/1611.01603, 2016.
E. Strickland. Can an AI get into the University of Tokyo? IEEE Spectrum, 50(9):13-14, 2013.
K. Sun, D. Yu, D. Yu, and C. Cardie. Improving machine reading comprehension with general reading strategies. In NAACL-HLT, 2019.
O. Tafjord, M. Gardner, K. Lin, and P. Clark. QuaRTz: An open-domain dataset of qualitative relationship questions. In EMNLP, 2019. (to appear).
N. Tandon, B. D. Mishra, J. Grus, W.-t. Yih, A. Bosselut, and P. Clark. Reasoning about actions and state changes by injecting commonsense knowledge. arXiv preprint arXiv:1808.10012, 2018.
H. Trivedi, H. Kwon, T. Khot, A. Sabharwal, and N. Balasubramanian. Repurposing entailment for multi-hop question answering tasks. In NAACL, 2019.
A. M. Turing. Computing machinery and intelligence. Mind, LIX(236), 1950.
P. D. Turney. Similarity of semantic relations. Computational Linguistics, 32(3):379-416, 2006.
P. D. Turney. Leveraging term banks for answering complex questions: A case for sparse vectors. arXiv preprint arXiv:1704.03543, 2017.
A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019.
W. Wang, M. Yan, and C. Wu. Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. In ACL, 2018.
R. Zellers, Y. Bisk, R. Schwartz, and Y. Choi. SWAG: A large-scale adversarial dataset for grounded commonsense inference. ArXiv, abs/1808.05326, 2018.
| [] |
[
"CASCADE RNN-TRANSDUCER: SYLLABLE BASED STREAMING ON-DEVICE MANDARIN SPEECH RECOGNITION WITH A SYLLABLE-TO-CHARACTER CONVERTER",
"CASCADE RNN-TRANSDUCER: SYLLABLE BASED STREAMING ON-DEVICE MANDARIN SPEECH RECOGNITION WITH A SYLLABLE-TO-CHARACTER CONVERTER"
] | [
"Xiong Wang \nSchool of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n",
"Zhuoyuan Yao \nSchool of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n",
"Xian Shi \nSchool of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n",
"Lei Xie \nSchool of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n"
] | [
"School of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina",
"School of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina",
"School of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina",
"School of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina"
] | [] | End-to-end models are favored in automatic speech recognition (ASR) because of its simplified system structure and superior performance. Among these models, recurrent neural network transducer (RNN-T) has achieved significant progress in streaming on-device speech recognition because of its high-accuracy and low-latency. RNN-T adopts a prediction network to enhance language information, but its language modeling ability is limited because it still needs paired speech-text data to train. Further strengthening the language modeling ability through extra text data, such as shallow fusion with an external language model, only brings a small performance gain. In view of the fact that Mandarin Chinese is a character-based language and each character is pronounced as a tonal syllable, this paper proposes a novel cascade RNN-T approach to improve the language modeling ability of RNN-T. Our approach firstly uses an RNN-T to transform acoustic feature into syllable sequence, and then converts the syllable sequence into character sequence through an RNN-T-based syllable-to-character converter. Thus a rich text repository can be easily used to strengthen the language model ability. By introducing several important tricks, the cascade RNN-T approach surpasses the character-based RNN-T by a large margin on several Mandarin test sets, with much higher recognition quality and similar latency. | 10.1109/slt48900.2021.9383506 | [
"https://arxiv.org/pdf/2011.08469v1.pdf"
] | 226,975,826 | 2011.08469 | 8dccd0518965a473b98511e37adf303de53f088b |
CASCADE RNN-TRANSDUCER: SYLLABLE BASED STREAMING ON-DEVICE MANDARIN SPEECH RECOGNITION WITH A SYLLABLE-TO-CHARACTER CONVERTER
Xiong Wang
School of Computer Science
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'anChina
Zhuoyuan Yao
School of Computer Science
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'anChina
Xian Shi
School of Computer Science
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'anChina
Lei Xie
School of Computer Science
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'anChina
CASCADE RNN-TRANSDUCER: SYLLABLE BASED STREAMING ON-DEVICE MANDARIN SPEECH RECOGNITION WITH A SYLLABLE-TO-CHARACTER CONVERTER
Index Terms: end-to-end ASR, recurrent neural network transducer, syllable, language modeling ability
End-to-end models are favored in automatic speech recognition (ASR) because of its simplified system structure and superior performance. Among these models, recurrent neural network transducer (RNN-T) has achieved significant progress in streaming on-device speech recognition because of its high-accuracy and low-latency. RNN-T adopts a prediction network to enhance language information, but its language modeling ability is limited because it still needs paired speech-text data to train. Further strengthening the language modeling ability through extra text data, such as shallow fusion with an external language model, only brings a small performance gain. In view of the fact that Mandarin Chinese is a character-based language and each character is pronounced as a tonal syllable, this paper proposes a novel cascade RNN-T approach to improve the language modeling ability of RNN-T. Our approach firstly uses an RNN-T to transform acoustic feature into syllable sequence, and then converts the syllable sequence into character sequence through an RNN-T-based syllable-to-character converter. Thus a rich text repository can be easily used to strengthen the language model ability. By introducing several important tricks, the cascade RNN-T approach surpasses the character-based RNN-T by a large margin on several Mandarin test sets, with much higher recognition quality and similar latency.
INTRODUCTION
Conventional automatic speech recognition (ASR) usually adopts a hybrid deep neural network - hidden Markov model (DNN-HMM) system [1], which is complex and requires a considerable amount of computing resources, so it is difficult to deploy on edge devices. Recently, end-to-end (E2E) speech recognition has achieved significant progress with a simplified system architecture and superior performance. E2E models usually adopt a sequence-to-sequence (S2S) framework to directly transform acoustic feature sequences into text sequences through specifically-designed neural networks. These models are particularly favored on edge devices for their more concise architecture and reduced computing resource consumption compared with hybrid ASR systems. However, E2E speech recognition models, which model acoustic and language information jointly in a unified framework, usually require a large amount of paired speech-text data for training. It is therefore difficult for the models themselves to acquire strong language modeling ability from text-only data, which is available in quantities that are orders of magnitude larger than the paired speech-text data, especially when the training set does not match the language domain of specific applications. This paper addresses this problem by introducing a novel approach to improve the language modeling ability of E2E models.
As an S2S model, the recurrent neural network transducer (RNN-T) [2] and its variants have achieved high accuracy and low latency in streaming on-device speech recognition [3, 4]. Neural transducers have streaming decoding ability by nature, while other E2E competitors, particularly those based on the attention mechanism, such as Transformer [5, 6, 7] and listen, attend and spell (LAS) [8], have to be modified to gain this streaming ability. RNN-T extends the connectionist temporal classification (CTC) [9] criterion with a prediction network to enhance language information. But its language modeling ability is not satisfactory because it still needs paired speech-text data to train. A recent study has unveiled that the language modeling ability of the prediction network is still quite weak [10].
Plenty of effort has been made to improve the performance of E2E models by introducing additional language information. A common solution is to use a language model (LM) fusion strategy: an LM is first trained externally on text data and then incorporated into the E2E model [11, 12]. Shallow fusion simply interpolates the label probabilities with the ones from an external LM during the decoding stage. Other fusion variants, such as deep fusion, cold fusion and component fusion, have also been proposed. Data augmentation through speech synthesis is another solution. Work in [13, 14] has shown that data augmentation with text-to-speech utterances yields improvement to E2E models; however, there still remains a substantial gap in performance between models trained on human speech and those trained on synthesized speech. Similar to the tricks used in conventional hybrid approaches, two-pass decoding can also be introduced to E2E models with improved recognition performance. Recently, a two-pass RNN-T+LAS model, where LAS rescores hypotheses from RNN-T, has been proposed [15] and improved further with more tricks [16]. To surpass the server-side conventional model, the trade-off between quality and latency has been particularly considered.
Most studies on neural transducers have been conducted on English corpora, and different modeling units, such as phonemes, graphemes and word-pieces, have been explored [17]. In this paper, we are particularly interested in streaming on-device Mandarin ASR using RNN-T. Mandarin Chinese differs significantly from English in both its written and spoken forms: Chinese is a character-based language and each character is pronounced as a tonal syllable. There are several studies on LAS- and Transformer-based Mandarin ASR, but we have found only one paper on the use of RNN-T in Mandarin, which shows its feasibility for modeling Chinese characters [18]. The character has also been widely chosen as a natural modeling unit in Transformer-based Mandarin ASR. However, since Chinese has a huge set of characters, all these works model only a partial set of frequently-used characters and simply abandon the rest, which means the abandoned characters can never be output, leading to an out-of-vocabulary (OOV) problem.
In this paper, we propose a novel cascade RNN-T approach for streaming on-device Mandarin speech recognition. Specifically, we cascade two RNN transducers to strengthen the language modeling ability: the first transforms the acoustic input into a syllable sequence, while the second converts the syllable sequence into the final character sequence. The proposed approach has the following advantages: 1) a rich text repository can be easily used to strengthen the language modeling ability; 2) the OOV issue is eliminated by the introduction of the RNN-T based syllable-to-character (S2C) converter; 3) streaming ability is maintained through the use of the transducer framework. By introducing several important tricks on top of the proposed syllable-based cascade RNN-T, including an additional convolution layer, self shallow fusion, text augmentation and syllable correction, we manage to surpass the character-based RNN-T by a large margin. Compared with the character RNN-T with shallow fusion, the cascade RNN-T achieves a clear improvement on several Mandarin test sets, with higher recognition quality and similar latency.
RNN-TRANSDUCER
Our proposed approach is based on a modified version of the original RNN-T [2]. Here we first introduce the RNN-T structure as well as the typical shallow fusion strategy for strengthening its language modeling ability.
Model architecture
RNN-T models the alignment between speech features x = [x_1, x_2, ..., x_T] and output label sequences y = [y_1, y_2, ..., y_U], where T is the number of feature frames and U is the length of the output label sequence. Fig. 1 shows the RNN-T structure used in this paper, which mainly includes an encoder, a prediction network and a joint network. Specifically, the encoder converts the speech features into a high-dimensional representation h^{enc}. Different from the original RNN-T [2], we add M_1 layers of 1-D convolution with a stride greater than 1 for down-sampling. This effectively reduces resource consumption during both training and decoding. In addition, the following M_2 layers of dilated 1-D convolution capture local context information more effectively at the small cost of a short extra latency. The whole process can be expressed as
h^{enc} = Encoder(x)    (1)
The main purpose of the prediction network is to generate a high-dimensional representation h^{pred}_u of the last label y_{u-1}, as shown in Eq. (2). To avoid the sparsity caused by directly using a one-hot label as input, an embedding layer is placed before the N LSTM layers.
h^{pred}_u = Prediction(y_{u-1})    (2)
The joint network consists of several fully connected layers, and finally a softmax layer is used to predict the probability P(k|t, u) of the next label, as shown in Eqs. (3) and (4):
h_{t,u} = W^{joint} tanh(U h^{enc}_t + V h^{pred}_u + b) + b^{joint}    (3)
P(k|t, u) = softmax(h_{t,u})    (4)
where U and V are the projection matrices that combine h^{enc}_t and h^{pred}_u, and W^{joint} projects the network output to the number of labels. During training, RNN-T uses the forward-backward algorithm [4] to maximize the posterior of y given x.
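To make the structure concrete, the following minimal PyTorch-style sketch shows how a joint network could combine the encoder and prediction-network outputs as in Eqs. (3)-(4). The class and parameter names and the dimensions are illustrative assumptions, not the exact configuration used in this paper.

    import torch
    import torch.nn as nn

    class JointNetwork(nn.Module):
        """Combine h_enc[t] and h_pred[u] into label log-probabilities (Eqs. 3-4)."""
        def __init__(self, enc_dim, pred_dim, joint_dim, num_labels):
            super().__init__()
            self.U = nn.Linear(enc_dim, joint_dim, bias=False)   # projects h_enc_t
            self.V = nn.Linear(pred_dim, joint_dim, bias=True)   # projects h_pred_u (+ b)
            self.W = nn.Linear(joint_dim, num_labels)            # W_joint and b_joint

        def forward(self, h_enc_t, h_pred_u):
            h = torch.tanh(self.U(h_enc_t) + self.V(h_pred_u))   # Eq. (3)
            return torch.log_softmax(self.W(h), dim=-1)          # Eq. (4): log P(k|t, u)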
Shallow fusion
Shallow fusion [12] adopts an RNNLM that can be trained on additional text data to improve the language modeling ability. It works by adding the non-blank posterior probabilities of the RNN-T and the RNNLM in the logarithmic domain during decoding. As shown in Eq. (5), \hat{P}(y|x) is the score actually used during decoding, log P(y|x) is the log posterior probability of y given x produced by the RNN-T, and log P(y) is the log probability of y given by the RNNLM.
\hat{P}(y|x) = log P(y|x) + λ log P(y)    (5)
When using shallow fusion, beam search can be adopted to ensure that as many decoding paths as possible are considered.
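As an illustration of Eq. (5), the sketch below interpolates the two scores for the non-blank candidates of one decoding step inside beam search. The dictionary-based interface and the default λ = 0.35 (the value used later in the experiments) are assumptions made purely for illustration.

    def fuse_scores(rnnt_log_probs, lm_log_probs, lam=0.35, blank_id=0):
        """Interpolate RNN-T and RNNLM log-probabilities as in Eq. (5).

        Both arguments map label ids to log-probabilities; only non-blank
        labels receive the language-model term.
        """
        fused = {}
        for k, log_p in rnnt_log_probs.items():
            if k == blank_id:
                fused[k] = log_p                        # blank is not rescored
            else:
                fused[k] = log_p + lam * lm_log_probs.get(k, 0.0)
        return fused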
CASCADE RNN-TRANSDUCER
Cascade model architecture
We propose the cascade RNN-T structure shown in Fig. 2. Formally, in Eq. (6), the speech features x = [x_1, x_2, ..., x_T] first go through a syllable-level RNN-T to obtain the syllable sequence y^s = [y^s_1, y^s_2, ..., y^s_{U^s}], where U^s is the length of the output syllable sequence.
y^s = RNNT_{Syl}(x)    (6)
Then we use an S2C converter to convert the syllable sequence y^s into the character sequence y:

y = S2C(y^s)    (7)
During training, only the syllable-level RNN-T in the first step needs paired speech-text data, while the S2C converter in the second step can be trained with text data alone. The converter is realized by another RNN-T.
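At inference time the cascade simply chains the two transducers (Eqs. (6)-(7)). A minimal sketch is shown below; the greedy_decode and beam_decode methods are hypothetical interfaces standing in for the decoding procedures described in the experiments.

    def cascade_decode(features, syllable_rnnt, s2c_rnnt, beam_size=5):
        """Two-stage decoding: acoustic features -> syllables -> characters."""
        # Stage 1: syllable-level RNN-T (greedy search is used in this paper).
        syllables = syllable_rnnt.greedy_decode(features)        # Eq. (6)
        # Stage 2: syllable-to-character RNN-T with beam search.
        characters = s2c_rnnt.beam_decode(syllables, beam_size)  # Eq. (7)
        return characters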
RNN-T based S2C Converter
As shown in Fig. 3, the encoder input of the RNN-T based S2C converter is the syllable sequence y^s, while the input of the prediction network is the character sequence y. The output of the joint network is the posterior probability distribution of the next character. We use an embedding layer to map the one-hot inputs to high-dimensional representations in both the encoder and the prediction network, and the RNN-T loss is used for training as well. Fig. 3. RNN-T based syllable-to-character converter.
Convolution layer
With a single LSTM layer, the network only directly sees the current token at each time step. To capture more context information, we add a convolution layer before the encoder, as described in Eqs. (8) and (9):
h^{conv} = Convolution(y^s)    (8)
h^{enc} = Encoder(h^{conv})    (9)
During training, in order to keep the encoder input and output of equal length, and assuming the kernel size of the convolution layer is M^s, M^s - 1 zeros are padded at the start of the syllable sequence.
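This left-only padding can be implemented as a causal 1-D convolution, so each output position only sees the current and the previous M^s - 1 syllables. A minimal PyTorch sketch with illustrative dimensions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Module):
        """1-D convolution that pads (kernel_size - 1) zeros on the left only."""
        def __init__(self, dim, kernel_size=3):
            super().__init__()
            self.kernel_size = kernel_size
            self.conv = nn.Conv1d(dim, dim, kernel_size)

        def forward(self, x):                          # x: (batch, time, dim)
            x = x.transpose(1, 2)                      # -> (batch, dim, time)
            x = F.pad(x, (self.kernel_size - 1, 0))    # pad past positions only
            return self.conv(x).transpose(1, 2)        # output length == input length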
Self shallow fusion
When using an RNN-T as the S2C converter, a large amount of text data can be used to train the model. Hence, we add a fully-connected layer after the prediction network, giving the prediction network an additional task of functioning as an RNNLM, and perform shallow fusion:

\hat{P}(y|y^s) = log P(y|y^s) + λ log FC(h^{pred})    (10)
We call this trick self shallow fusion. During training, we define the loss function as

\hat{L} = L_{RNN-T} + L_{ce}    (11)
where L_{RNN-T} is the RNN-T loss and L_{ce} is the cross-entropy loss of the additional RNNLM task. Language modeling ability is implicitly embedded in the S2C RNN-T, but it is still beneficial to add a task that performs language modeling explicitly. This multi-task training also reduces model parameters and computation, since no separate RNNLM is needed for shallow fusion; this is why we call the trick self shallow fusion.
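A rough sketch of the multi-task objective in Eq. (11): the prediction-network output feeds both the joint network (producing the transducer loss) and an extra fully-connected softmax trained as an internal LM with cross-entropy. How the transducer loss itself is computed is left abstract; the padding label and function names are assumptions.

    import torch.nn.functional as F

    def self_shallow_fusion_loss(rnnt_loss, lm_logits, target_chars, pad_id=-100):
        """L_hat = L_RNNT + L_ce (Eq. 11).

        rnnt_loss:    scalar transducer loss from the usual forward-backward algorithm.
        lm_logits:    (batch, U, vocab) outputs of the extra FC layer on h_pred.
        target_chars: (batch, U) next-character targets for the internal LM task.
        """
        lm_ce = F.cross_entropy(
            lm_logits.reshape(-1, lm_logits.size(-1)),
            target_chars.reshape(-1),
            ignore_index=pad_id,
        )
        return rnnt_loss + lm_ce

At decoding time, Eq. (10) then adds λ times the log-probability from this internal LM head to the RNN-T score, reusing the prediction network instead of a separate RNNLM.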
Text augmentation
To prevent over-fitting, we adopt a text augmentation strategy similar in spirit to SpecAugment [19], which is applied to audio spectra. Specifically, during training of the S2C RNN-T, we randomly change several syllables in the input syllable sequence to other syllables before feeding it to the encoder. Besides alleviating over-fitting, this method can also be regarded as simulating the syllable errors made by the first, syllable-level RNN-T.
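A minimal sketch of this syllable-level substitution is given below; the replacement probability and the uniform sampling are illustrative assumptions, since the paper does not specify the exact values.

    import random

    def augment_syllables(syllables, vocab, replace_prob=0.1, seed=None):
        """Randomly replace syllables with other syllables from the vocabulary."""
        rng = random.Random(seed)
        out = []
        for s in syllables:
            if rng.random() < replace_prob:
                out.append(rng.choice(vocab))   # substitute a random syllable
            else:
                out.append(s)
        return out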
Syllable correction
The output of the syllable-level RNN-T contains insertion, deletion and substitution errors, which means that the input to the S2C RNN-T inevitably contains errors. The S2C RNN-T therefore needs the ability to convert an erroneous syllable sequence into a character sequence that is grammatically and semantically correct. Inspired by recent studies [20,21], we use a syllable correction strategy to map 'noisy' syllable sequences to correct character sequences. Because RNN-T can naturally map sequences of unequal length, we first decode the training set with the syllable-level RNN-T to obtain syllable sequences. We then pair these syllable sequences with their corresponding correct character sequences to build additional correction text and mix it with the normal text data. Finally, we use the mixed data to fine-tune the existing S2C RNN-T to improve its performance.
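The correction data can be assembled as sketched below: first-pass syllable hypotheses are paired with the reference characters and mixed with ordinary text pairs. The decoding interface, the text_to_syllables helper and the 1:7 mixing ratio (taken from the experimental description later in the paper) are assumptions for illustration.

    import itertools

    def build_correction_pairs(train_utts, syllable_rnnt, text_to_syllables):
        """Pair 'noisy' first-pass syllable hypotheses with reference characters."""
        correction, normal = [], []
        for features, ref_chars in train_utts:
            hyp_syllables = syllable_rnnt.greedy_decode(features)
            correction.append((hyp_syllables, ref_chars))
            normal.append((text_to_syllables(ref_chars), ref_chars))
        return correction, normal

    def mixed_stream(correction, normal, ratio=7):
        """Interleave examples so that normal text is `ratio` times more frequent."""
        corr, norm = itertools.cycle(correction), itertools.cycle(normal)
        while True:
            yield next(corr)
            for _ in range(ratio):
                yield next(norm)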
EXPERIMENTS
Dataset
In this paper, we evaluate the proposed cascade RNN-T approach on two Mandarin speech recognition tasks: the public AISHELL-2 corpus [22] and an internal 7,500-hour corpus. The AISHELL-2 corpus contains 1,000 hours of clean reading speech collected from 1,991 speakers (see footnote 1). The 7,500-hour corpus contains reading speech in fields such as entertainment, journalism, literature and technology, as well as free conversation. For both datasets, we reserve 5,000 sentences as the development set. In addition, we use 2 GB of Chinese text data crawled from the internet to train the RNNLM and the S2C converter. Character error rate (CER) is reported on the AISHELL-1 test set (TA1) [23], the AISHELL-2 test set (TA2) [22], an internal voice input (VI) test set and a voice assistant (VA) test set. The VI test set consists of about 3.4 hours of data with 3,063 sentences covering many proper nouns and named entities, and is used to verify the language generalization ability of the speech recognition model. The VA test set consists of about 3.9 hours of data with 5,000 speech commands to a voice assistant, which is challenging not only linguistically but also acoustically, because the speech is collected under various conditions, some with low SNR.
Experimental Setup
For all experiments, the speech features are 71-dimensional log Mel-filterbank (FBank) features computed on a 25 ms window with a 10 ms shift. We also apply SpecAugment [19] for acoustic data augmentation. For modeling units, we choose 5,139 characters for character-based models and 1,733 tonal syllables for syllable-based models. All experiments are conducted using TensorFlow [24] and Horovod [25]. During training, we use random state passing (RSP) to avoid the long-form problem [26]. We adopt the Adam optimizer [27] with a learning rate of 0.0003 and gradient clipping at 5.0 for all models. Moreover, we employ layer normalization and variational recurrent dropout to prevent over-fitting. We use a breadth-first beam search algorithm, which is effective in exploring and combining alternative alignments [28].
For the character RNN-T, the encoder network consists of 6 convolution layers and 5 LSTM layers; all convolution layers have kernel size 3, with strides {2, 2, 1, 1, 1}, numbers of filters {256, 256, 512, 512, 512} and dilation rates {1, 1, 1, 2, 4}. The prediction network has 2 LSTM layers and the joint network has 640 hidden units. For the cascade RNN-T, the first, syllable-level RNN-T has the same architecture as the character RNN-T just described. For the second RNN-T used for S2C conversion, the encoder network consists of 2 LSTM layers, the prediction network has a single LSTM layer and the joint network has 640 hidden units. The additional convolution layer has 1,024 filters and kernel size 3. All the LSTM layers mentioned above have 1,280 hidden units followed by a 640-dimensional projection layer. The RNNLM for shallow fusion in the character RNN-T approach consists of 2 LSTM layers, each with 2,048 hidden units and a 640-dimensional projection layer.
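For reference, the sketch below assembles a convolutional down-sampling front-end from the listed strides, filter counts and dilation rates (only the five layer configurations listed above are instantiated); the framework choice, activation and input dimension are assumptions, so this is not the exact encoder used in the paper.

    import torch.nn as nn

    STRIDES   = [2, 2, 1, 1, 1]
    FILTERS   = [256, 256, 512, 512, 512]
    DILATIONS = [1, 1, 1, 2, 4]

    def build_conv_frontend(in_dim=71, kernel_size=3):
        """Stack of strided/dilated 1-D convolutions preceding the LSTM encoder."""
        layers, prev = [], in_dim
        for stride, filters, dilation in zip(STRIDES, FILTERS, DILATIONS):
            layers.append(nn.Conv1d(prev, filters, kernel_size,
                                    stride=stride, dilation=dilation))
            layers.append(nn.ReLU())
            prev = filters
        return nn.Sequential(*layers)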
AISHELL-2 Task
To verify the proposed cascade RNN-T, we first evaluate it on the open AISHELL-2 corpus. As shown in Tab. 1, with the shallow-fusion hyper-parameter λ set to 0.35 and a beam size of 5, we obtain a small performance improvement over B0 by using beam search (B1) and shallow fusion (B2) for the character-level RNN-T. For the cascade RNN-T, during decoding we use greedy search for the syllable-level RNN-T and beam search with beam size 5 for the S2C RNN-T. The performance of the cascade RNN-T (E0) is worse than that of the character-level RNN-T. However, we notice that the syllable error rate on TA2 for the syllable-level RNN-T in the cascade approach (E0) is 12.8%, which is significantly lower than the CER of the character RNN-T (B0) on the same test set. This indicates that the syllable accuracy of the syllable-level RNN-T is higher than that of the character-level RNN-T, so we believe that the second S2C RNN-T in the proposed cascade architecture holds great potential to be further strengthened, bringing a much better character error rate. We further improve the S2C RNN-T following the tricks introduced in Section 3.2, and the results are shown in Tab. 2. The comparison between E1 and E0 shows that when the S2C RNN-T captures more context information through the convolution layer, the performance of the cascade RNN-T can be significantly improved; for example, the CER on TA1 drops dramatically from 17.4% to 6.66%. From E2 and E3, suppressing over-fitting with text augmentation and explicitly introducing language information with self shallow fusion also play a positive role. We use the decoding results of the syllable-level RNN-T on the training set to generate correction text, and then fine-tune the model from E3 with a learning rate of 1e-4, keeping the normal text data 7 times larger than the correction text data in each batch. Finally, syllable correction (E4) brings a further performance gain, and we manage to surpass the character-based RNN-T (B2) on all test sets. Specifically, on the challenging VI test set, we achieve a 12.65% relative CER reduction. Tab. 3 gives a detailed illustration of the changes in deletion, insertion and substitution errors brought by the proposed tricks on TA2. From E0 to E4, we can see that all the tricks we use effectively reduce substitution errors. This also confirms that our proposed cascade RNN-T greatly improves the language modeling ability of the model, because substitution errors often correspond to grammatical errors.
7500-hour Task
We further evaluate the proposed approach on our internal large-scale 7,500-hour corpus, using the same experimental setup and hyper-parameters as in the AISHELL-2 task. This time we add another, more difficult test set, VA, which is collected under challenging acoustic conditions. As shown in Tab. 4, we first notice that with the increase of training data there is a large CER reduction, especially on the TA2 and VI test sets, compared with the AISHELL-2 1,000-hour results in Tab. 2. Comparing E9 with B5, our proposed cascade RNN-T enhances the language modeling ability of RNN-T better than the character RNN-T with shallow fusion; note that both use the same 2 GB of text data for strengthening language modeling. In addition, on the challenging VA test set, the cascade RNN-T shows a large improvement over the character-level RNN-T, with a 14.18% relative CER reduction. These results demonstrate that our proposed method can significantly improve the language modeling ability. To determine the advantage of using RNN-T as the S2C converter, we also try LSTM and BLSTM S2C converters, each containing 2 LSTM or BLSTM layers with 2,048 hidden units followed by a 640-dimensional projection layer. From A0 and A1, we observe that the RNN-T based S2C converter has obvious advantages over the LSTM, which is also a streaming structure. The RNN-T based S2C converter also surpasses the BLSTM-based one, which is non-streaming and uses both past and future context.
Parameters, Latency and Quality
Tab. 5 summarizes several typical models in terms of the number of parameters, recognition latency and quality. Comparing E4 with B2, we can see that the proposed cascade RNN-T achieves superior performance over the character-based RNN-T with a similar number of model parameters and similar recognition latency.
CONCLUSIONS
In this paper, we propose a novel cascade RNN-T approach to improve the language modeling ability of RNN-T. Cascade RNN-T trains the language model separately from the acoustic model, so that a large amount of additional text can be introduced to strengthen the language modeling ability. Specifically, we first use an RNN-T to transform acoustic features into a syllable sequence, and then convert the syllable sequence into a character sequence with another RNN-T. By introducing several important tricks, including spanning context through a convolution layer, self shallow fusion, text augmentation and syllable correction, our approach manages to surpass the character-based RNN-T by a large margin on several Mandarin test sets. As our second RNN-T performs an unequal-length mapping from pronunciation units to graphemes, we plan to try smaller units, e.g., phoneme-to-grapheme conversion, particularly for English ASR, in the future.
Fig. 1. The RNN-T architecture used in this paper.

Fig. 2. The architecture of cascade RNN-T: a syllable-based RNN-T followed by the syllable-to-character converter.
Table 1. Comparison of character-level RNN-T and cascade RNN-T on test sets in CER (%).

Exp ID  Model               TA1    TA2    VI
B0      RNN-T Character     6.71   16.27  21.02
B1      + Beam search       6.55   15.79  20.44
B2      + Shallow fusion    6.11   15.50  20.15
E0      Cascade RNN-T       17.4   22.08  24.52
Table 2. Comparison of the results of various tricks to improve the performance of cascade RNN-T on test sets in CER (%).

Exp ID  Model                                             TA1    TA2    VI
B2      RNN-T Character + Beam search + Shallow fusion    6.11   15.50  20.15
E0      Cascade RNN-T                                     17.4   22.08  24.52
E1      + Convolution layer                               6.66   15.59  18.27
E2      + Text augmentation                               6.4    15.24  17.96
E3      + Self shallow fusion                             5.89   14.93  17.78
E4      + Syllable correction                             5.72   14.85  17.60
Table 3. Comparison of deletion, insertion and substitution errors of each model on the TA2 test set, reported as CER (%) (D/I/S).

Exp ID  Model                                             TA2
B2      RNN-T Character + Beam search + Shallow fusion    15.50 (0.71/0.26/14.53)
E0      Cascade RNN-T                                     22.08 (0.84/0.21/21.03)
E1      + Convolution layer                               15.59 (0.76/0.17/14.66)
E2      + Text augmentation                               15.24 (0.75/0.17/14.32)
E3      + Self shallow fusion                             14.93 (0.75/0.17/14.01)
E4      + Syllable correction                             14.85 (0.75/0.16/13.94)
Table 4. Comparison of character-level RNN-T and cascade RNN-T on test sets in CER (%), training with the 7,500-hour corpus.

Exp ID  Model                           TA1    TA2    VI     VA
B3      RNN-T Character                 5.15   10.57  10.21  33.63
B4      + Beam search                   4.99   10.08  9.69   32.88
B5      + Shallow fusion                4.85   9.96   9.5    32.71
E5      Cascade RNN-T                   14.56  18.51  18.16  38.09
E6      + Convolution layer             6.32   11.33  10.62  31.76
E7      + Text augmentation             5.53   10.42  9.59   29.53
E8      + Self shallow fusion           4.62   9.33   8.82   28.21
E9      + Syllable correction           4.57   9.16   8.65   28.07
A0      RNN-T Syllable + LSTM S2C       11.48  15.02  14.71  36.33
A1      RNN-T Syllable + BLSTM S2C      4.98   9.59   9.05   32.1
Table 5. Comparison of parameters, latency and performance for different models.

Exp ID  Model                                             Param.(M)/Latency  TA1    TA2    VI     VA
B5      RNN-T Character + Beam search + Shallow fusion    93M/280ms          4.85   9.96   9.5    32.71
E5      Cascade RNN-T                                     88.5M/280ms        14.56  18.51  18.16  38.09
E6      + Convolution layer                               92.5M/300ms        6.32   11.33  10.62  31.76
E7      + Text augmentation                               92.5M/300ms        5.53   10.42  9.59   29.53
E8      + Self shallow fusion                             95.5M/300ms        4.62   9.33   8.82   28.21
E9      + Syllable correction                             95.5M/300ms        4.57   9.16   8.65   28.07
Footnote 1: Can be acquired from www.aishelltech.com/aishell_2
REFERENCES

[1] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[2] Alex Graves, "Sequence transduction with recurrent neural networks," arXiv preprint arXiv:1211.3711, 2012.
[3] Yanzhang He, Tara N Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Alvarez, Ding Zhao, David Rybach, Anjuli Kannan, Yonghui Wu, Ruoming Pang, et al., "Streaming end-to-end speech recognition for mobile devices," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, pp. 6381-6385.
[4] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton, "Speech recognition with deep recurrent neural networks," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 6645-6649.
[5] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[6] Daniel Povey, Hossein Hadian, Pegah Ghahremani, Ke Li, and Sanjeev Khudanpur, "A time-restricted self-attention layer for ASR," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018, pp. 5874-5878.
[7] Linhao Dong, Shuang Xu, and Bo Xu, "Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018, pp. 5884-5888.
[8] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2016, pp. 4960-4964.
[9] Alex Graves and Navdeep Jaitly, "Towards end-to-end speech recognition with recurrent neural networks," in International Conference on Machine Learning, 2014, pp. 1764-1772.
[10] Mohammadreza Ghodsi, Xiaofeng Liu, James Apfel, Rodrigo Cabrera, and Eugene Weinstein, "RNN-transducer with stateless prediction network," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2020, pp. 7049-7053.
[11] Erik McDermott, Hasim Sak, and Ehsan Variani, "A density ratio approach to language model fusion in end-to-end automatic speech recognition," in IEEE Automatic Speech Recognition and Understanding Workshop, 2019, pp. 434-441.
[12] Ding Zhao, Tara N Sainath, David Rybach, Pat Rondon, Deepti Bhatia, Bo Li, and Ruoming Pang, "Shallow-fusion end-to-end contextual biasing," in Interspeech, 2019, pp. 1418-1422.
[13] Aleksandr Laptev, Roman Korostik, Aleksey Svischev, Andrei Andrusenko, Ivan Medennikov, and Sergey Rybin, "You do not need more data: Improving end-to-end speech recognition by text-to-speech data augmentation," arXiv preprint arXiv:2005.07157, 2020.
[14] Andrew Rosenberg, Yu Zhang, Bhuvana Ramabhadran, Ye Jia, Pedro Moreno, Yonghui Wu, and Zelin Wu, "Speech recognition with augmented synthesized speech," in IEEE Automatic Speech Recognition and Understanding Workshop, 2019, pp. 996-1002.
[15] Bo Li, Shuo-yiin Chang, Tara N Sainath, Ruoming Pang, Yanzhang He, Trevor Strohman, and Yonghui Wu, "Towards fast and accurate streaming end-to-end ASR," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2020, pp. 6069-6073.
[16] Tara N Sainath, Yanzhang He, Bo Li, Arun Narayanan, Ruoming Pang, Antoine Bruguier, Shuo-yiin Chang, Wei Li, Raziel Alvarez, Zhifeng Chen, et al., "A streaming on-device end-to-end model surpassing server-side conventional model quality and latency," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2020, pp. 6059-6063.
[17] Kanishka Rao, Haşim Sak, and Rohit Prabhavalkar, "Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer," in IEEE Automatic Speech Recognition and Understanding Workshop, 2017, pp. 193-199.
[18] Senmao Wang, Pan Zhou, Wei Chen, Jia Jia, and Lei Xie, "Exploring RNN-transducer for Chinese speech recognition," in Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2019, pp. 1364-1369.
[19] Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le, "SpecAugment: A simple data augmentation method for automatic speech recognition," in Interspeech, 2019, pp. 2613-2617.
[20] Jinxi Guo, Tara N Sainath, and Ron J Weiss, "A spelling correction model for end-to-end speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing, 2019, pp. 5651-5655.
[21] Shiliang Zhang, Ming Lei, and Zhijie Yan, "Automatic spelling correction with transformer for CTC-based end-to-end speech recognition," arXiv preprint arXiv:1904.10045, 2019.
[22] Jiayu Du, Xingyu Na, Xuechen Liu, and Hui Bu, "AISHELL-2: transforming Mandarin ASR research into industrial scale," arXiv preprint arXiv:1808.10583, 2018.
[23] Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng, "AISHELL-1: An open-source Mandarin speech corpus and a speech recognition baseline," in Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment, 2017, pp. 1-5.
[24] Martín Abadi et al., "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015, software available from tensorflow.org.
[25] Alexander Sergeev and Mike Del Balso, "Horovod: fast and easy distributed deep learning in TensorFlow," arXiv preprint arXiv:1802.05799, 2018.
[26] Arun Narayanan, Rohit Prabhavalkar, Chung-Cheng Chiu, David Rybach, Tara N Sainath, and Trevor Strohman, "Recognizing long-form speech using streaming end-to-end models," in IEEE Automatic Speech Recognition and Understanding Workshop, 2019, pp. 920-927.
[27] Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[28] Anshuman Tripathi, Han Lu, Hasim Sak, and Hagen Soltau, "Monotonic recurrent neural network transducer and decoding strategies," in IEEE Automatic Speech Recognition and Understanding Workshop, 2019, pp. 944-948.
| [] |
[
"Improving Graph-Based Text Representations with Character and Word Level N-grams",
"Improving Graph-Based Text Representations with Character and Word Level N-grams"
] | [
"Wenzhe Li \nComputer Science Department\nUniversity of Sheffield\nUK\n",
"Nikolaos Aletras n.aletras@sheffield.ac.uk \nComputer Science Department\nUniversity of Sheffield\nUK\n"
] | [
"Computer Science Department\nUniversity of Sheffield\nUK",
"Computer Science Department\nUniversity of Sheffield\nUK"
] | [] | Graph-based text representation focuses on how text documents are represented as graphs for exploiting dependency information between tokens and documents within a corpus. Despite the increasing interest in graph representation learning, there is limited research in exploring new ways for graph-based text representation, which is important in downstream natural language processing tasks. In this paper, we first propose a new heterogeneous word-character text graph that combines word and character n-gram nodes together with document nodes, allowing us to better learn dependencies among these entities. Additionally, we propose two new graph-based neural models, WCTextGCN and WCTextGAT, for modeling our proposed text graph. Extensive experiments in text classification and automatic text summarization benchmarks demonstrate that our proposed models consistently outperform competitive baselines and state-of-the-art graph-based models. 1 | 10.48550/arxiv.2210.05999 | [
"https://export.arxiv.org/pdf/2210.05999v1.pdf"
] | 252,846,544 | 2210.05999 | 2da720db3ef6a5abe611b45d5176658f542570ac |
Improving Graph-Based Text Representations with Character and Word Level N-grams
Wenzhe Li
Computer Science Department
University of Sheffield
UK
Nikolaos Aletras n.aletras@sheffield.ac.uk
Computer Science Department
University of Sheffield
UK
Improving Graph-Based Text Representations with Character and Word Level N-grams
Graph-based text representation focuses on how text documents are represented as graphs for exploiting dependency information between tokens and documents within a corpus. Despite the increasing interest in graph representation learning, there is limited research in exploring new ways for graph-based text representation, which is important in downstream natural language processing tasks. In this paper, we first propose a new heterogeneous word-character text graph that combines word and character n-gram nodes together with document nodes, allowing us to better learn dependencies among these entities. Additionally, we propose two new graph-based neural models, WCTextGCN and WCTextGAT, for modeling our proposed text graph. Extensive experiments in text classification and automatic text summarization benchmarks demonstrate that our proposed models consistently outperform competitive baselines and state-of-the-art graph-based models. 1
Introduction
State-of-the-art graph neural network (GNN) architectures (Scarselli et al., 2008) such as graph convolutional networks (GCNs) (Kipf and Welling, 2016) and graph attention networks (GATs) (Veličković et al., 2017) have been successfully applied to various natural language processing (NLP) tasks such as text classification (Yao et al., 2019; Liang et al., 2022; Ragesh et al., 2021; Yao et al., 2021) and automatic summarization (Wang et al., 2020; An et al., 2021).
The success of GNNs in NLP tasks highly depends on how effectively the text is represented as a graph. A simple and widely adopted way to construct a graph from text is to represent documents and words as graph nodes and encode their dependencies as edges (i.e., word-document graph). A given text is converted into a heterogeneous graph where nodes representing documents are connected to nodes representing words if the document contains that particular word (Minaee et al., 2021;Wang et al., 2020). Edges among words are typically weighted using word co-occurrence statistics that quantify the association between two words, as shown in Figure 1 (left).
However, word-document graphs have several drawbacks. Simply connecting individual word nodes to document nodes ignores the ordering of words in the document, which is important for understanding the semantic meaning of text. Moreover, such graphs cannot deal effectively with word sparsity: most words in a corpus appear only a few times, which results in inaccurate representations of word nodes learned by GNNs. This limitation is especially severe for languages with large vocabularies and many rare words, as noted by Bojanowski et al. (2017). Current word-document graphs also ignore explicit document relations, i.e., connections created from pair-wise document similarity, which may play an important role in learning better document representations (Li et al., 2020).
Contributions: In this paper, we propose a new, simple yet effective way of constructing graphs from text for GNNs. First, we assume that word ordering plays an important role in semantic understanding, which can be captured by higher-order n-gram nodes. Second, we introduce character n-gram nodes as an effective way of mitigating sparsity (Bojanowski et al., 2017). Third, we take into account document similarity, allowing the model to learn better associations between documents. Figure 1 (right) shows our proposed Word-Character Heterogeneous text graph compared to a standard word-document graph (left). Finally, we propose two variants of GNNs, WCTextGCN and WCTextGAT, which extend GCN and GAT respectively, for modeling our proposed text graph. The edge types in the right part of Figure 1 are defined as follows: (1) word-document connection if a document contains a word (weighted by tf-idf); (2) word-word connection based on co-occurrence statistics (PMI); (3) document-document connection with a similarity score (cosine similarity); (4) word n-gram and word connection if the word is part of the n-gram (0/1); (5) word n-gram and document connection if the document contains the n-gram (0/1); and (6) character n-gram and word connection if the character n-gram is part of the word (0/1).
Methodology
Given a corpus as a list of text documents C = {D_1, ..., D_n}, our goal is to learn an embedding h_i for each document D_i using GNNs. This representation can subsequently be used in different downstream tasks such as text classification and summarization.
Word-Character Heterogeneous Graph
The Word-Character Heterogeneous graph G = (V, E) consists of the node set V = V_d ∪ V_w ∪ V_g ∪ V_c, where V_d = {d_1, ..., d_n} corresponds to the set of documents, V_w = {w_1, ..., w_m} denotes the set of unique words, V_g = {g_1, ..., g_l} denotes the set of unique word n-gram tokens, and V_c = {c_1, ..., c_p} denotes the set of unique character n-grams. The edge types among different nodes vary depending on the types of the connected nodes. In addition, we also add edges between two documents if their cosine similarity is larger than a pre-defined threshold.
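A minimal sketch of how the four node sets could be collected from a small corpus is given below; the whitespace tokenisation and the default n-gram sizes are illustrative assumptions, and the typed edges of Figure 1 (tf-idf, PMI, cosine similarity, 0/1 membership) would then be added between these sets.

    def build_node_sets(docs, n_word=2, n_char=3):
        """Collect document, word, word n-gram and character n-gram nodes."""
        words, word_ngrams, char_ngrams = set(), set(), set()
        for doc in docs:
            tokens = doc.lower().split()
            words.update(tokens)
            word_ngrams.update(
                " ".join(tokens[i:i + n_word]) for i in range(len(tokens) - n_word + 1))
            for w in tokens:
                char_ngrams.update(w[i:i + n_char] for i in range(len(w) - n_char + 1))
        return sorted(words), sorted(word_ngrams), sorted(char_ngrams)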
Word and Character N-grams Enhanced Text GNNs
The goal of the GNN models is to learn a representation for each node. We use H_d ∈ R^{n_d×k}, H_w ∈ R^{n_w×k}, H_g ∈ R^{n_g×k} and H_c ∈ R^{n_c×k} to denote the representations of document nodes, word nodes, word n-gram nodes and character n-gram nodes, where k is the hidden dimension size and n_d, n_w, n_g, n_c are the numbers of documents, words, word n-grams and character n-grams in the graph, respectively. We use e^{dw}_{ij} to denote the edge weight between the i-th document and the j-th word. Similarly, e^{cw}_{kj} denotes the edge weight between the k-th character n-gram and the j-th word.
The original GCN and GAT models only consider simple graphs where the graph contains a single type of nodes and edges. Since we now are dealing with our Word-Character Heterogeneous graph, we introduce appropriate modifications.
Word and Character N-grams Enhanced Text GCN (WCTextGCN). To support our new graph type in GCNs, we need to modify the adjacency matrix A. The updating equation of the original GCN is:
H^{(L+1)} = f(\hat{A} H^{(L)} W^{(L)})
where W^{(L)} is the trainable parameter of layer L. We take H to be the concatenation of H_d, H_w, H_g and H_c. For WCTextGCN, the adjacency matrix A is re-defined as:
A = [ A^{dd}_{sim}     A^{dw}_{tfidf}   A^{dg}_{tfidf}   -
      A^{wd}_{tfidf}   A^{ww}_{pmi}     A^{wg}_{0/1}     A^{wc}_{0/1}
      A^{gd}_{tfidf}   A^{gw}_{0/1}     -                -
      -                A^{cw}_{0/1}     -                -            ]
where A^{dd}_{sim} denotes the pair-wise similarity between documents (see footnote 2), the sub-matrix A^{dw}_{tfidf} contains the tf-idf scores for all edges linking documents to words, A^{wg}_{0/1} is the boolean sub-matrix indicating whether a word n-gram contains a specific word, and so on. The sub-matrix A^{dw}_{tfidf} is the transpose of the sub-matrix A^{wd}_{tfidf}.
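To illustrate, the block adjacency matrix above can be assembled from the individual sub-matrices and plugged into the standard GCN propagation rule. The sketch below uses dense NumPy arrays; the symmetric normalisation and the assumption that every node has at least one neighbour are simplifications for illustration.

    import numpy as np

    def build_block_adjacency(A_dd, A_dw, A_dg, A_ww, A_wg, A_wc):
        """Stack typed sub-matrices into the full adjacency A (empty blocks are zeros)."""
        n_d, n_w = A_dw.shape
        n_g, n_c = A_wg.shape[1], A_wc.shape[1]
        row_d = np.hstack([A_dd, A_dw, A_dg, np.zeros((n_d, n_c))])
        row_w = np.hstack([A_dw.T, A_ww, A_wg, A_wc])
        row_g = np.hstack([A_dg.T, A_wg.T, np.zeros((n_g, n_g)), np.zeros((n_g, n_c))])
        row_c = np.hstack([np.zeros((n_c, n_d)), A_wc.T, np.zeros((n_c, n_g)), np.zeros((n_c, n_c))])
        return np.vstack([row_d, row_w, row_g, row_c])

    def gcn_layer(A, H, W):
        """One propagation step H' = ReLU(A_hat H W) with symmetric normalisation."""
        A_hat = A + np.eye(A.shape[0])                      # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(A_norm @ H @ W, 0.0)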
Word and Character N-grams Enhanced Text GAT (WCTextGAT). In GAT, the update of a node representation is computed by weighting the importance of its neighboring nodes. Since our text graph contains four types of nodes, each updating procedure consists of the following four phases (the dependency relations among the node types can be seen in Figure 1):

\hat{H}_d = GAT(H_d, H_w, H_g)
\hat{H}_w = GAT(H_d, H_w, H_g, H_c)
\hat{H}_g = GAT(H_d, H_w)
\hat{H}_c = GAT(H_w)
For example, to update the word representations \hat{H}_w, we need to aggregate information from document nodes, word nodes, word n-gram nodes and character n-gram nodes, respectively. Assume that we update the embedding of word node i by considering its neighboring document nodes only (a similar procedure applies to the other three types of nodes). The computation is as follows:
z_{ij} = LeakyReLU(a^T [W_v h^w_i ; W_d h^d_j ; W_e e^{wd}_{ij}])
α_{ij} = exp(z_{ij}) / Σ_{l ∈ N_i} exp(z_{il})
\hat{h}^1_i = σ( Σ_{j ∈ N_i} α_{ij} W_d h^d_j )
where W_v, W_d and W_e are trainable weight matrices applied to the different types of nodes and edge features, α_{ij} is the attention weight between word i and document j, N_i denotes the set of neighboring documents of word i, and σ(·) is the activation function. Multi-head attention (Vaswani et al., 2017) is also introduced to capture different aspects of the semantic representations of text:
\hat{h}^1_i = ∥_{k=1}^{K} σ( Σ_{j ∈ N_i} α^k_{ij} W^k_d h^d_j )
where ∥ denotes concatenation over the K attention heads. Similarly, we can compute \hat{h}^2_i, \hat{h}^3_i and \hat{h}^4_i by considering the other types of neighboring nodes. Finally, these representations are concatenated and passed through a linear transformation.
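A simplified single-head sketch of this attention phase (updating one word node from its neighbouring document nodes) is shown below; edge features enter the score as in the equations above, and the multi-head version would concatenate K such outputs. Shapes, parameter names and the choice of activation are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WordFromDocAttention(nn.Module):
        """One attention phase: aggregate document neighbours into a word node."""
        def __init__(self, dim, edge_dim):
            super().__init__()
            self.W_v = nn.Linear(dim, dim, bias=False)       # word transform
            self.W_d = nn.Linear(dim, dim, bias=False)       # document transform
            self.W_e = nn.Linear(edge_dim, dim, bias=False)  # edge-feature transform
            self.a = nn.Linear(3 * dim, 1, bias=False)       # attention vector a

        def forward(self, h_word, h_docs, e_feats):
            # h_word: (dim,)  h_docs: (num_neigh, dim)  e_feats: (num_neigh, edge_dim)
            hw = self.W_v(h_word).expand(h_docs.size(0), -1)
            z = self.a(torch.cat([hw, self.W_d(h_docs), self.W_e(e_feats)], dim=-1))
            alpha = torch.softmax(F.leaky_relu(z), dim=0)             # attention weights
            return torch.relu((alpha * self.W_d(h_docs)).sum(dim=0))  # weighted aggregation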
Experiments and Results
We conduct experiments on two NLP tasks: text classification and extractive summarization. The latter can also be viewed as a sentence-level classification problem (i.e., whether each sentence should be included in the summary or not).
Text Classification
Data We select five widely used benchmark datasets including 20-Newsgroups, Ohsumed, R52, R8 and MR. The statistics and the descriptions for these datasets can be found in (Yao et al., 2019).
Baselines We compare our models to multiple existing state-of-the-art text classification methods including TF-IDF+LR, fastText (Joulin et al., 2016), CNN (Le and Mikolov, 2014), LSTM (Liu et al., 2016), PTE (Tang et al., 2015), BERT (Devlin et al., 2018), TextGCN (Yao et al., 2019) and TextGAT.
Experimental Settings We randomly select 10% of the training set for the validation. For the WCTextGCN model, we set the hidden size to 200. For the TextGAT and WCTextGAT models, we use 8 attention heads with each containing 16 hidden units, and set edge feature dimension to 32. The learning rate is 0.002 and dropout rate 0.5. We train all models for 200 epochs using Adam optimizer (Kingma and Ba, 2014) and early stopping with patience 20. For all the GNNs models, we use two hidden layers and 1-of-K encoding for initialization.
Results. Table 1 shows the text classification results. We observe that the incorporation of word n-grams, character n-grams and document similarity is helpful and consistently improves predictive performance over the other models; e.g., the WCTextGCN model improves accuracy on 20NG by over 0.8% compared to the TextGCN model. The improvements on the MR and R8 datasets, 0.5% and 1.1% respectively, are more substantial than on the other datasets. This is because character n-grams help more when the text is short, which is consistent with our hypothesis that character n-grams are helpful for mitigating sparsity problems. Varying the size of n-grams. For character n-grams, we use n ranging from 3 to 6 characters and record the performance for different combinations, i.e., 3-grams to 4-grams, 3-grams to 5-grams and so on. The results are shown in Table 2 with the best scores in bold. We observe that the best results are often obtained when n ranges from 3 to 4; further increasing n has a limited effect on model performance. For word n-grams, we observe similar results.
Extractive Text Summarization
Extractive single-document summarization is formulated as a binary classification task for each sentence, with the aim of predicting whether the sentence should be included in the summary or not. We follow the same setting as the HeterogeneousSumGraph (HSG) proposed by Wang et al. (2020), except that we use our new Word-Character Heterogeneous graph representation, denoted as HSG-Ours.

Data. We select two widely used benchmark news article datasets, CNN/DailyMail (Hermann et al., 2015) and NYT50 (Durrett et al., 2016). The first contains 287,227/13,368/11,490 examples for training, validation and test. The second contains 110,540 articles with their summaries and is split into 100,834 and 9,706 articles for training and test. Following Durrett et al. (2016), we use the last 4,000 documents from the training set for validation and 3,452 test examples.
Baselines and Experimental Settings
We evaluate our models on single-document summarization by comparing against three baselines from Wang et al. (2020): Ext-BiLSTM, Ext-Transformer and HSG. For all experiments, we follow the same settings as Wang et al. (2020) and evaluate performance using ROUGE (Lin and Hovy, 2003).
Results. Tables 3 and 4 show the ROUGE scores on the two datasets. HSG-Ours, with our new text graph, performs consistently better than the competing models. In particular, on the NYT50 data, the R-1 and R-2 metrics improve by more than 0.5 points compared to the HSG model. We observe a similar performance difference for R-L on the CNN/DailyMail data. This highlights the efficacy of our new text graph in learning better word and sentence representations, especially for words that appear only a few times but play an important role in summarization.
Conclusion
In this paper, we proposed a new text graph representation by incorporating word and character level information. GNN models trained using our text graph provide superior performance in text classification and single-document summarization compared to previous work. In the future, we plan to extend our proposed method to other tasks such as opinion extraction (Mensah et al., 2021), misinformation detection (Chandra et al., 2020;Mu and Aletras, 2020;Mu et al., 2022), voting intention forecasting (Tsakalidis et al., 2018) and socioeconomic attribute analysis (Aletras and Chamberlain, 2018). We finally plan to extend our GNN models by weighting the contribution of neighboring nodes (Zhang et al., 2022).
Figure 1: A simple word-document graph (left) and our proposed Word-Character Heterogeneous graph (right); the edge types of the right graph are listed in the Introduction.

Table 2: The effect on performance of using character n-grams with n in {3, ..., 6}.
Table 3: Performance (ROUGE) of different models on CNN/DailyMail.

Model            R-1    R-2    R-L
Ext-BiLSTM       46.32  25.84  42.16
Ext-Transformer  45.07  24.72  40.85
HSG              46.89  26.26  42.58
HSG-Ours         46.96  26.20  43.43

Table 4: Performance (ROUGE) of different models on NYT50.

Model            R-1    R-2    R-L
Ext-BiLSTM       41.59  19.03  38.04
Ext-Transformer  41.33  18.83  37.65
HSG              42.31  19.51  38.74
HSG-Ours         42.85  20.03  38.90
Footnote 1: Code is available here: https://github.com/GraphForAI/TextGraph
Footnote 2: We remove edges with a similarity score less than a predefined threshold to avoid uninformative links.
References

Nikolaos Aletras and Benjamin Paul Chamberlain. 2018. Predicting twitter user socioeconomic attributes with network and language information. In Proceedings of the 29th on Hypertext and Social Media, pages 20-24.
Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2021. Enhancing scientific papers summarization with citation graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12498-12506.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Shantanu Chandra, Pushkar Mishra, Helen Yannakoudakis, Madhav Nimishakavi, Marzieh Saeidi, and Ekaterina Shutova. 2020. Graph-based modeling of online communities for fake news detection. arXiv preprint arXiv:2008.06274.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1998-2008.
Karl Moritz Hermann, Tomáš Kočiskỳ, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. arXiv preprint arXiv:1506.03340.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188-1196. PMLR.
Chen Li, Xutan Peng, Hao Peng, Jianxin Li, Lihong Wang, and S Yu Philip. 2020. TextSGCN: Document-level graph topology refinement for text classification.
Bin Liang, Hang Su, Lin Gui, Erik Cambria, and Ruifeng Xu. 2022. Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks. Knowledge-Based Systems, 235:107643.
Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150-157.
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101.
Samuel Mensah, Kai Sun, and Nikolaos Aletras. 2021. An empirical study on leveraging position embeddings for target-oriented opinion words extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9174-9179, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2021. Deep learning-based text classification: A comprehensive review. ACM Computing Surveys (CSUR), 54(3):1-40.
Yida Mu and Nikolaos Aletras. 2020. Identifying twitter users who repost unreliable news sources with linguistic information. PeerJ Computer Science, 6:e325.
Yida Mu, Pu Niu, and Nikolaos Aletras. 2022. Identifying and characterizing active citizens who refute misinformation in social media. In 14th ACM Web Science Conference 2022, WebSci '22, pages 401-410, New York, NY, USA. Association for Computing Machinery.
Rahul Ragesh, Sundararajan Sellamanickam, Arun Iyer, Ramakrishna Bairi, and Vijay Lingam. 2021. HeteGCN: Heterogeneous graph convolutional networks for text classification. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 860-868.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.
Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. PTE: Predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1165-1174.
Adam Tsakalidis, Nikolaos Aletras, Alexandra I Cristea, and Maria Liakata. 2018. Nowcasting the stance of social media users in a sudden vote: The case of the greek referendum. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 367-376.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. arXiv preprint arXiv:2004.12393.
Huaxiu Yao, Yingxin Wu, Maruan Al-Shedivat, and Eric P Xing. 2021. Knowledge-aware meta-learning for low-resource text classification. arXiv preprint arXiv:2109.04707.
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370-7377.
Li Zhang, Heda Song, Nikolaos Aletras, and Haiping Lu. 2022. Node-feature convolution for graph convolutional networks. Pattern Recognition, 128:108661.
| [] |
[
"PaperRobot: Incremental Draft Generation of Scientific Ideas",
"PaperRobot: Incremental Draft Generation of Scientific Ideas"
] | [
"Qingyun Wang \nRensselaer Polytechnic Institute\n\n",
"Lifu Huang \nRensselaer Polytechnic Institute\n\n",
"Zhiying Jiang \nRensselaer Polytechnic Institute\n\n",
"Kevin Knight kevinknight@didiglobal.com \nDiDi Labs\n\n",
"Heng Ji hengji@illinois.edu \nRensselaer Polytechnic Institute\n\n\nUniversity of Illinois at Urbana\nChampaign\n",
"Mohit Bansal \nUniversity of North Carolina at Chapel Hill\n\n",
"Yi Luan \nUniversity of Washington\n\n"
] | [
"Rensselaer Polytechnic Institute\n",
"Rensselaer Polytechnic Institute\n",
"Rensselaer Polytechnic Institute\n",
"DiDi Labs\n",
"Rensselaer Polytechnic Institute\n",
"University of Illinois at Urbana\nChampaign",
"University of North Carolina at Chapel Hill\n",
"University of Washington\n"
] | [
"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics"
] | We present a PaperRobot who performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, by combining graph attention and contextual text attention; (3) incrementally writing some key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate conclusion and future work, and finally from future work to generate a title for a follow-on paper. Turing Tests, where a biomedical domain expert is asked to compare a system output and a human-authored string, show PaperRobot generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively. 1 | 10.18653/v1/p19-1191 | [
"https://www.aclweb.org/anthology/P19-1191.pdf"
] | 159,040,684 | 1905.07870 | a6aed0c4e0f39a55edb407f492e41f178a62907f |
PaperRobot: Incremental Draft Generation of Scientific Ideas
Association for Computational Linguistics. Copyright Association for Computational Linguistics. July 28 - August 2, 2019.
Qingyun Wang
Rensselaer Polytechnic Institute
Lifu Huang
Rensselaer Polytechnic Institute
Zhiying Jiang
Rensselaer Polytechnic Institute
Kevin Knight kevinknight@didiglobal.com
DiDi Labs
Heng Ji hengji@illinois.edu
Rensselaer Polytechnic Institute
University of Illinois at Urbana
Champaign
Mohit Bansal
University of North Carolina at Chapel Hill
Yi Luan
University of Washington
PaperRobot: Incremental Draft Generation of Scientific Ideas
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
The 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 28 - August 2, 2019. Association for Computational Linguistics.
We present a PaperRobot who performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, by combining graph attention and contextual text attention; (3) incrementally writing some key elements of a new paper based on memory-attention networks: from the input title along with predicted related entities to generate a paper abstract, from the abstract to generate conclusion and future work, and finally from future work to generate a title for a follow-on paper. Turing Tests, where a biomedical domain expert is asked to compare a system output and a human-authored string, show PaperRobot generated abstracts, conclusion and future work sections, and new titles are chosen over human-written ones up to 30%, 24% and 12% of the time, respectively. 1
Introduction
Our ambitious goal is to speed up scientific discovery and production by building a PaperRobot, who addresses three main tasks as follows.
Read Existing Papers. Scientists now find it difficult to keep up with the overwhelming number of papers. For example, in the biomedical domain, on average more than 500K papers are published every year 2 , and more than 1.2 million new papers were published in 2016 alone, bringing the total number of papers to over 26 million (Van Noorden, 2014). However, human reading capacity has stayed almost the same across years. In 2012, US scientists estimated that they read, on average, only 264 papers per year (1 out of 5000 available papers), which is, statistically, not different from what they reported in an identical survey last conducted in 2005. PaperRobot automatically reads existing papers to build background knowledge graphs (KGs), in which nodes are entities/concepts and edges are the relations between these entities (Section 2.2).
Create New Ideas. Scientific discovery can be considered as creating new nodes or links in the knowledge graphs. Creating new nodes usually means discovering new entities (e.g., new proteins) through a series of real laboratory experiments, which is probably too difficult for PaperRobot. In contrast, creating new edges is easier to automate using the background knowledge graph as the starting point. Foster et al. (2015) show that more than 60% of 6.4 million papers in biomedicine and chemistry are about incremental work. This inspires us to automate the incremental creation of new ideas and hypotheses by predicting new links in background KGs. In fact, when more data is available, we can construct larger and richer background KGs for more reliable link prediction. Recent work (Ji et al., 2015b) successfully mines strong relevance between drugs and diseases from biomedical papers based on KGs constructed from weighted co-occurrence. We propose a new entity representation that combines KG structure and unstructured contextual text for link prediction (Section 2.3).
Figure 2: PaperRobot Architecture Overview
Write a New Paper about New Ideas. The final step is to communicate the new ideas to the reader clearly, which is a very difficult thing to do; many scientists are, in fact, bad writers (Pinker, 2014). Using a novel memory-attention network architecture, PaperRobot automatically writes a new paper abstract about an input title along with predicted related entities, then further writes conclusion and future work based on the abstract, and finally predicts a new title for a future follow-on paper, as shown in Figure 1 (Section 2.4).
We choose biomedical science as our target domain due to the sheer volume of available papers. Turing tests show that PaperRobot-generated output strings are sometimes chosen over human-written ones, and most paper abstracts only require minimal edits from domain experts to become highly informative and coherent.
Approach
Overview
The overall framework of PaperRobot is illustrated in Figure 2. A walk-through example produced from this whole process is shown in Table 1. In the following subsections, we will elaborate on the algorithms for each step.
Background Knowledge Extraction
From a massive collection of existing biomedical papers, we extract entities and their relations to construct background knowledge graphs (KGs). We apply an entity mention extraction and linking system (Wei et al., 2013) to extract mentions of three entity types (Disease, Chemical and Gene) which are the core data categories in the Comparative Toxicogenomics Database (CTD) (Davis et al., 2016), and obtain a Medical Subject Headings (MeSH) Unique ID for each mention. Based on the MeSH Unique IDs, we further link all entities to the CTD and extract 133 subtypes of relations such as Marker/Mechanism, Therapeutic, and Increase Expression. Figure 3 shows an example.
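To make the output of this step concrete, the following minimal sketch (ours, not the released PaperRobot code) shows one way the extracted tuples and their one-hop neighborhoods could be stored for later link prediction; the MeSH IDs, entity names, and relation label in the toy example are illustrative only.

```python
# A sketch (under our own assumptions) of a background KG built from extracted,
# CTD-linked entity mentions: typed entities keyed by MeSH Unique IDs, typed
# relation tuples, and one-hop neighbor lists N_e used later for link prediction.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    mesh_id: str      # MeSH Unique ID from entity linking
    name: str
    etype: str        # "Disease", "Chemical", or "Gene"

@dataclass(frozen=True)
class Triple:
    head: Entity
    relation: str     # one of the 133 CTD relation subtypes, e.g. "Increase Expression"
    tail: Entity

class BackgroundKG:
    def __init__(self):
        self.triples = []
        self.neighbors = defaultdict(list)   # entity -> one-hop neighbors

    def add(self, head, relation, tail):
        self.triples.append(Triple(head, relation, tail))
        self.neighbors[head].append(tail)
        self.neighbors[tail].append(head)

# Toy usage with illustrative identifiers:
kg = BackgroundKG()
calcium = Entity("D002118", "Calcium", "Chemical")
cd14 = Entity("D015982", "CD14 molecule", "Gene")
kg.add(calcium, "Increase Expression", cd14)
print(len(kg.neighbors[calcium]))  # -> 1
```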
Link Prediction
After constructing the initial KGs from existing papers, we perform link prediction to enrich them. Both contextual text information and graph structure are important to represent an entity, thus we combine them to generate a rich representation for each entity. Based on the entity representations, we determine whether any two entities are semantically similar, and if so, we propagate the neighbors of one entity to the other. For example, in Figure 3, because Calcium and Zinc are similar in terms of contextual text information and graph structure, we predict two new neighbors for Calcium: CD14 molecule and neuropilin 2 which are neighbors of Zinc in the initial KGs.
[Figure 3 example context sentence: "So, Ca2+ possibly promoted caspases activation upstream of cytochrome c release, but inactivated caspase activity by calpain and/or fast depletion of ATP; whereas Zn2+ blocked the activation of procaspase-3 with no visible change in the level of cytochrome c, and the block possibly resulted from its direct inhibition on caspase-3 enzyme."]
We formulate the initial KGs as a list of tuples numbered from 0 to $\kappa$. Each tuple $(e^h_i, r_i, e^t_i)$ is composed of a head entity $e^h_i$, a tail entity $e^t_i$, and their relation $r_i$. Each entity $e_i$ may be involved in multiple tuples and its one-hop connected neighbors are denoted as $N_{e_i} = [n_{i1}, n_{i2}, ...]$. $e_i$ is also associated with a context description $s_i$ which is randomly selected from the sentences where $e_i$ occurs. We randomly initialize vector representations $e_i$ and $r_i$ for $e_i$ and $r_i$ respectively.
Graph Structure Encoder. To capture the importance of each neighbor's feature to $e_i$, we perform self-attention (Veličković et al., 2018) and compute a weight distribution over $N_{e_i}$:
$$\hat{e}_i = W_e e_i, \qquad \hat{n}_{ij} = W_e n_{ij}$$
$$c_{ij} = \mathrm{LeakyReLU}\big(W_f(\hat{e}_i \oplus \hat{n}_{ij})\big), \qquad c_i = \mathrm{Softmax}(c_i)$$
where $W_e$ is a linear transformation matrix applied to each entity, $W_f$ is the parameter of a single-layer feedforward network, and $\oplus$ denotes the concatenation operation between two matrices. Then we use $c_i$ and $N_{e_i}$ to compute a structure-based context representation $\tilde{e}_i^m = \sigma\big(\sum_j c_{ij} n_{ij}\big)$, where $n_{ij} \in N_{e_i}$ and $\sigma$ is the Sigmoid function.
In order to capture various types of relations between $e_i$ and its neighbors, we further perform multi-head attention on each entity, based on multiple linear transformation matrices. Finally, we get a structure-based context representation $\tilde{e}_i = [\tilde{e}_i^0 \oplus ... \oplus \tilde{e}_i^M]$, where $\tilde{e}_i^m$ refers to the context representation obtained with the $m$-th head, and $\tilde{e}_i$ is the concatenated representation based on the attention of all $M$ heads.
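The following PyTorch sketch illustrates our reading of the graph structure encoder above (multi-head neighbor attention with a single-layer scoring network); dimensions, head count, and class names are our own assumptions rather than the released implementation.

```python
# A minimal sketch of the multi-head graph structure encoder: each head scores an
# entity's one-hop neighbors with a single-layer feedforward attention and
# aggregates them; head outputs are concatenated into the final representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphStructureEncoder(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.ModuleDict({
                "W_e": nn.Linear(dim, dim, bias=False),   # shared transform of entity/neighbors
                "W_f": nn.Linear(2 * dim, 1, bias=False), # single-layer scoring network
            }) for _ in range(num_heads)
        ])

    def forward(self, e_i, neighbors):
        # e_i: (dim,); neighbors: (num_neighbors, dim)
        outs = []
        for head in self.heads:
            h_e = head["W_e"](e_i)                                   # (dim,)
            h_n = head["W_e"](neighbors)                             # (N, dim)
            scores = head["W_f"](torch.cat(
                [h_e.expand_as(h_n), h_n], dim=-1)).squeeze(-1)      # (N,)
            c = torch.softmax(F.leaky_relu(scores), dim=-1)          # LeakyReLU then Softmax
            outs.append(torch.sigmoid(c @ neighbors))                # sigma(sum_j c_ij * n_ij)
        return torch.cat(outs, dim=-1)                               # concatenation over heads

# Toy usage:
enc = GraphStructureEncoder(dim=8, num_heads=2)
tilde_e = enc(torch.randn(8), torch.randn(5, 8))
print(tilde_e.shape)  # torch.Size([16])
```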
Contextual Text Encoder. Each entity $e$ is also associated with a context sentence $[w_1, ..., w_l]$. To incorporate the local context information, we first apply a bi-directional long short-term memory (LSTM) (Graves and Schmidhuber, 2005) network to get the encoder hidden states $H_s = [h_1, ..., h_l]$, where $h_i$ represents the hidden state of $w_i$. Then we compute a bilinear attention weight for each word $w_i$: $\mu_i = e^\top W_s h_i$, $\mu = \mathrm{Softmax}(\mu)$, where $W_s$ is a bilinear term. We finally get the context representation $\hat{e} = \sum_i \mu_i h_i$.
Gated Combination. To combine the graph-based representation $\tilde{e}$ and the local context based representation $\hat{e}$, we design a gate function to balance these two types of information: $g_e = \sigma(\tilde{g}_e)$, $e = g_e \odot \tilde{e} + (1 - g_e) \odot \hat{e}$, where $g_e$ is an entity-dependent gate function of which each element is in $[0, 1]$, $\tilde{g}_e$ is a learnable parameter for each entity $e$, $\sigma$ is a Sigmoid function, and $\odot$ is an element-wise multiplication.
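A minimal sketch, under our own assumptions about shapes and module names, of the contextual text encoder and the gated combination described above: a BiLSTM encodes the context sentence, a bilinear attention pools it into $\hat{e}$, and a learnable per-entity gate mixes $\hat{e}$ with the graph-based representation $\tilde{e}$.

```python
# A sketch (not the official code) of the contextual text encoder plus gate.
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    def __init__(self, dim, num_entities):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.W_s = nn.Bilinear(dim, dim, 1, bias=False)      # bilinear attention term
        self.gate = nn.Embedding(num_entities, dim)           # learnable per-entity gate parameter

    def forward(self, entity_id, e_embed, e_tilde, context_words):
        # context_words: (1, sent_len, dim) word embeddings of the context sentence
        H, _ = self.lstm(context_words)                        # (1, sent_len, dim)
        H = H.squeeze(0)                                       # (sent_len, dim)
        mu = self.W_s(e_embed.expand_as(H), H).squeeze(-1)     # bilinear attention scores
        mu = torch.softmax(mu, dim=-1)
        e_hat = mu @ H                                         # pooled context representation
        g = torch.sigmoid(self.gate(entity_id)).squeeze(0)     # gate in [0, 1]^dim
        return g * e_tilde + (1.0 - g) * e_hat                 # gated combination

# Toy usage:
m = ContextGate(dim=8, num_entities=100)
e = m(torch.tensor([3]), torch.randn(8), torch.randn(8), torch.randn(1, 6, 8))
print(e.shape)  # torch.Size([8])
```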
Training and Prediction. To optimize both entity and relation representations, following TransE (Bordes et al., 2013), we assume the relation between two entities can be interpreted as translations operated on the entity representations, namely $h + r \approx t$ if $(h, r, t)$ holds. Therefore, for each tuple $(e^h_i, r_i, e^t_i)$, we can compute their distance score: $F(e^h_i, r_i, e^t_i) = \lVert e^h_i + r_i - e^t_i \rVert_2^2$.
We use marginal loss to train the model:
$$\mathrm{Loss} = \sum_{(e^h_i, r_i, e^t_i) \in K} \; \sum_{(\bar{e}^h_i, \bar{r}_i, \bar{e}^t_i) \in \bar{K}} \max\big(0, \; \gamma + F(e^h_i, r_i, e^t_i) - F(\bar{e}^h_i, \bar{r}_i, \bar{e}^t_i)\big)$$
where $(e^h, r, e^t)$ is a positive tuple, $(\bar{e}^h, \bar{r}, \bar{e}^t)$ is a negative tuple, and $\gamma$ is a margin. The negative tuples are generated by either replacing the head or the tail entity of positive tuples with a randomly chosen different entity.
Table 1: Comparison of Human and System Written Paper Elements (bold words are topically related entities; italic words show human edits)
Abstract
Human: Background: Maspin, a putative tumor suppressor that is down-regulated in breast and prostate cancer, has been associated with decreased cell motility. Snail transcription factor is a zinc finger protein that is increased in breast cancer and is associated with increased tumor motility and invasion by induction of epithelial-mesenchymal transition (EMT). We investigated the molecular mechanisms by which Snail increases tumor motility and invasion utilizing prostate cancer cells. Methods: Expression levels were analyzed by RT-PCR and western blot analyses. Cell motility and invasion assays were performed, while Snail regulation and binding to maspin promoter was analyzed by luciferase reporter and chromatin immunoprecipitation (ChIP) assays. Results: Snail protein expression was higher in different prostate cancer cells lines as compared to normal prostate epithelial cells.
System: Background: Snail is a multifunctional protein that plays an important role in the pathogenesis of prostate cancer. However, it has been shown to be associated with poor prognosis. The purpose of this study was to investigate the effect of negatively on the expression of maspin in human nasopharyngeal carcinoma cell lines. Methods: Quantitative real-time PCR and western blot analysis were used to determine whether the demethylating agent was investigated by quantitative RT-PCR (qRT-PCR) and Western blotting. Results showed that the binding protein plays a significant role in the regulation of tumor growth and progression.
System (post-edited): Background: Snail is a multifunctional protein that plays an important role in the pathogenesis of prostate cancer. It has been shown associated with poor prognosis. The purpose of this study is to investigate the negative effect of on the expression of Maspin in human nasopharyngeal carcinoma cell lines. Methods: Quantitative RT-PCR (qRT-PCR) and western blot analyses were used to determine correlation of the two proteins expressions. Results showed that the binding protein plays a significant role in the regulation of tumor growth and progression.
Conclusion and Future Work
Human: Collectively, our results indicate for the first time that Snail can negatively regulate maspin through direct promoter repression resulting in increased migration and invasion in prostate cancer cells. This study reveals a novel mechanism of how Snail may function and show the importance of therapeutic targeting of Snail signaling in future.
System: In summary, our study demonstrates that Snail negatively inhibited the expression of Maspin in human nasopharyngeal carcinoma cell lines and in vitro. Our results indicate that the combination of the demethylating agent might be a potential therapeutic target for the treatment of prostate cancer.
System (post-edited): In summary, our study in vitro demonstrates that Snail negatively inhibits the expression of Maspin in human nasopharyngeal carcinoma cell lines. Our results further indicate that Maspin might be a potential therapeutic target for the treatment of prostate cancer.
New Title
Human: Role of maspin in cancer (Berardi et al., 2013)
System: The role of nasopharyngeal carcinoma in the rat model of prostate cancer cells
System (post-edited): The role of Maspin in the rat model of nasopharyngeal carcinoma cells
After training, for each pair of indirectly connected entities $e_i, e_j$ and a relation type $r$, we compute a score $y$ to indicate the probability that $(e_i, r, e_j)$ holds, and obtain an enriched knowledge graph $K = [(e^h_{\kappa+1}, r_{\kappa+1}, e^t_{\kappa+1}, y_{\kappa+1}), ...]$.
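The sketch below illustrates the TransE-style training objective and the link scoring described in this subsection; it uses plain embedding tables for entities and relations, whereas the model above uses the gated graph/text representations, so it should be read as a simplified illustration rather than the actual implementation.

```python
# A simplified sketch of margin-loss training with corrupted head/tail negatives
# and of scoring a candidate new link with F(h, r, t) = ||h + r - t||_2^2.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinkPredictor(nn.Module):
    def __init__(self, num_entities, num_relations, dim=64, margin=1.0):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.margin = margin

    def distance(self, h, r, t):
        # F(h, r, t) = || h + r - t ||_2^2
        return ((self.ent(h) + self.rel(r) - self.ent(t)) ** 2).sum(dim=-1)

    def loss(self, pos, num_entities):
        h, r, t = pos
        # corrupt either the head or the tail with a random entity
        corrupt = torch.randint(0, num_entities, h.shape)
        neg = (corrupt, r, t) if random.random() < 0.5 else (h, r, corrupt)
        return F.relu(self.margin + self.distance(*pos) - self.distance(*neg)).mean()

model = LinkPredictor(num_entities=1000, num_relations=133)
pos = (torch.tensor([1, 2]), torch.tensor([5, 7]), torch.tensor([9, 4]))
print(model.loss(pos, num_entities=1000))                                         # batch margin loss
print(model.distance(torch.tensor([1]), torch.tensor([5]), torch.tensor([42])))   # score a candidate link
```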
New Paper Writing
In this section, we use title-to-abstract generation as a case study to describe the details of our paper writing approach. Other tasks (abstract-to-conclusion-and-future-work, and conclusion-and-future-work-to-title) follow the same architecture. Given a reference title $\tau = [w_1, ..., w_l]$, we apply the knowledge extractor (Section 2.2) to extract entities from $\tau$. For each entity, we retrieve a set of related entities from the enriched knowledge graph $K$ after link prediction. We rank all the related entities by confidence scores and select up to 10 most related entities $E_\tau = [e_{\tau_1}, ..., e_{\tau_v}]$. Then we feed $\tau$ and $E_\tau$ together into the paper generation framework as shown in Figure 2. The framework is based on a hybrid approach of a Mem2seq model (Madotto et al., 2018) and a pointer generator (Gu et al., 2016; See et al., 2017). It allows us to balance three types of sources for each time step during decoding: the probability of generating a token from the entire word vocabulary based on the language model, the probability of copying a word from the reference title, such as regulates in Table 1, and the probability of incorporating a related entity, such as Snail in Table 1. The output is a paragraph $Y = [y_1, ..., y_o]$. 3
Reference Encoder. For each word in the reference title, we randomly embed it into a vector and obtain $\tau = [w_1, ..., w_l]$. Then, we apply a bi-directional Gated Recurrent Unit (GRU) encoder (Cho et al., 2014) on $\tau$ to produce the encoder hidden states $H = [h_1, ..., h_l]$.
Decoder Hidden State Initialization. Not all predicted entities are equally relevant to the title. For example, for the title in Table 1, we predict multiple related entities including nasopharyngeal carcinoma and diallyl disulfide, but nasopharyngeal carcinoma is more related because it is also a cancer related to the snail transcription factor, while diallyl disulfide is less related because its anticancer mechanism is not closely related to the maspin tumor suppressor. We propose to apply memory-attention networks to further filter the irrelevant ones. Recent approaches (Sukhbaatar et al., 2015; Madotto et al., 2018) show that, compared with soft attention, memory-based multi-hop attention is able to refine the attention weight of each memory cell to the query multiple times, drawing better correlations. Therefore, we apply a multi-hop attention mechanism to generate the initial decoder hidden state.
Given the set of related entities $E = [e_1, ..., e_v]$, we randomly initialize their vector representations $E = [e_1, ..., e_v]$ and store them in memories. Then we use the last hidden state of the reference encoder $h_l$ as the first query vector $q_0$, and iteratively compute the attention distribution over all memories and update the query vector:
$$p_{ki} = \nu_k \tanh\big(W^k_q q_{k-1} + U^k_e e_i + b_k\big), \qquad q_k = \sum_i p_{ki} e_i + q_{k-1}$$
where $k$ denotes the $k$-th hop among $\varphi$ hops in total. 4 After $\varphi$ hops, we obtain $q_\varphi$ and take it as the initial hidden state of the GRU decoder.
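A small PyTorch sketch of the multi-hop memory attention used to initialize the decoder, following the equations above; parameter names and the hop count are assumptions on our part.

```python
# A sketch of multi-hop attention over the related-entity memory: the final
# reference-encoder state queries the memory several times, refining the query
# after each hop, and the last query becomes the initial decoder state.
import torch
import torch.nn as nn

class MultiHopInit(nn.Module):
    def __init__(self, dim, hops=3):
        super().__init__()
        self.hops = hops
        self.W_q = nn.ModuleList([nn.Linear(dim, dim) for _ in range(hops)])
        self.U_e = nn.ModuleList([nn.Linear(dim, dim) for _ in range(hops)])
        self.nu = nn.ParameterList([nn.Parameter(torch.randn(dim)) for _ in range(hops)])

    def forward(self, h_last, entity_mem):
        # h_last: (dim,) last reference-encoder state; entity_mem: (V, dim)
        q = h_last
        for k in range(self.hops):
            scores = torch.tanh(self.W_q[k](q) + self.U_e[k](entity_mem)) @ self.nu[k]  # (V,)
            q = scores @ entity_mem + q      # q_k = sum_i p_ki * e_i + q_{k-1}
        return q                              # initial decoder hidden state

# Toy usage:
init = MultiHopInit(dim=8, hops=3)
q0 = init(torch.randn(8), torch.randn(10, 8))
print(q0.shape)  # torch.Size([8])
```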
Memory Network. To better capture the contribution of each entity $e_j$ to each decoding output, at each decoding step $i$, we compute an attention weight for each entity and apply a memory network to refine the weights multiple times. We take the hidden state $\tilde{h}_i$ as the initial query $\tilde{q}_0 = \tilde{h}_i$ and iteratively update it:
$$\tilde{p}_{kj} = \nu_k \tanh\big(W^k_q \tilde{q}_{k-1} + U^k_e e_j + W_{\hat{c}} \hat{c}_{ij} + b_k\big), \qquad u_{ik} = \sum_j \tilde{p}_{kj} e_j, \qquad \tilde{q}_k = u_{ik} + \tilde{q}_{k-1}$$
where $\hat{c}_{ij} = \sum_{m=0}^{i-1} \beta_{mj}$ is an entity coverage vector and $\beta_i$ is the attention distribution of the last hop, $\beta_i = \tilde{p}_\psi$, where $\psi$ is the total number of hops. We then obtain a final memory-based context vector for the set of related entities, $\chi_i = u_{i\psi}$.
Reference Attention. Our reference attention is similar to (Bahdanau et al., 2015; See et al., 2017), which aims to capture the contribution of each word in the reference title to the decoding output. At each time step $i$, the decoder receives the previous word embedding and generates the decoder state $\tilde{h}_i$; the attention weight of each reference token is computed as:
$$\alpha_{ij} = \varsigma \tanh\big(W_h \tilde{h}_i + W_\tau h_j + W_{\bar{c}} \bar{c}_{ij} + b_\tau\big), \qquad \alpha_i = \mathrm{Softmax}(\alpha_i), \qquad \phi_i = \sum_j \alpha_{ij} h_j$$
where $\bar{c}_{ij} = \sum_{m=0}^{i-1} \alpha_{mj}$ is a reference coverage vector, which is the sum of attention distributions over all previous decoder time steps to reduce repetition (See et al., 2017). $\phi_i$ is the reference context vector.
Generator. A particular word $w$ may occur multiple times in the reference title or in multiple related entities. Therefore, at each decoding step $i$, for each word $w$, we aggregate its attention weights from the reference attention and memory attention distributions: $P^i_\tau = \sum_{m | w_m = w} \alpha_{im}$ and $P^i_e = \sum_{m | w \in e_m} \beta_{im}$ respectively. In addition, at each decoding step $i$, each word in the vocabulary may also be generated with a probability according to the language model. The probability is computed from the decoder state $\tilde{h}_i$, the reference context vector $\phi_i$, and the memory context vector $\chi_i$: $P_{gen} = \mathrm{Softmax}(W_{gen}[\tilde{h}_i; \phi_i; \chi_i] + b_{gen})$, where $W_{gen}$ and $b_{gen}$ are learnable parameters. To combine $P_\tau$, $P_e$ and $P_{gen}$, we compute a gate $g_p$ as a soft switch between generating a word from the vocabulary and copying words from the reference title $\tau$ or the related entities $E$: $g_p = \sigma(W_p \tilde{h}_i + W_z z_{i-1} + b_p)$, where $z_{i-1}$ is the embedding of the previously generated token at step $i-1$; $W_p$, $W_z$, and $b_p$ are learnable parameters, and $\sigma$ is a Sigmoid function. We also compute a gate $\tilde{g}_p$ as a soft switch between copying words from the reference text and the related entities: $\tilde{g}_p = \sigma(W_\phi \phi_i + W_\chi \chi_i + \tilde{b}_p)$, where $W_\phi$, $W_\chi$, and $\tilde{b}_p$ are learnable parameters.
The final probability of generating a token $z$ at decoding step $i$ can be computed by:
$$P(z_i) = g_p P_{gen} + (1 - g_p)\big(\tilde{g}_p P_\tau + (1 - \tilde{g}_p) P_e\big)$$
The loss function, combined with the coverage loss (See et al., 2017) for both the reference attention and the memory distribution, is presented as:
$$\mathrm{Loss} = \sum_i -\log P(z_i) + \lambda \sum_i \big(\min(\alpha_{ij}, \bar{c}_{ij}) + \min(\beta_{ij}, \hat{c}_{ij})\big)$$
where $P(z_i)$ is the prediction probability of the ground-truth token $z_i$, and $\lambda$ is a hyperparameter.
Repetition Removal. Similar to many other long text generation tasks (Suzuki and Nagata, 2017), repetition remains a major challenge (Foster and White, 2007; Xie, 2017). In fact, 11% of sentences in human-written abstracts include repeated entities, which may mislead the language model. Following the coverage mechanism proposed by Tu et al. (2016) and See et al. (2017), we use a coverage loss to avoid any entity in the reference input text or any related entity receiving attention multiple times. We further design a new and simple masking method to remove repetition at test time. We apply beam search with beam size 4 to generate each output; if a word is not a stop word or punctuation and it has already been generated in the previous context, we will not choose it again in the same output.
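To illustrate the decoding-time machinery described above, the sketch below shows (i) the gated mixture of the three token distributions and (ii) the simple repetition mask applied during beam search; variable names and the toy vocabulary are ours, not the released code.

```python
# A sketch of combining the three token sources (vocabulary generation, copying
# from the title, copying from related entities) and of masking repeats during
# beam search; distributions are assumed to share one extended vocabulary.
import torch

def final_distribution(p_gen_vocab, p_copy_title, p_copy_entities, g_p, g_p_tilde):
    # g_p and g_p_tilde are scalars in [0, 1] produced by the two sigmoid gates.
    return g_p * p_gen_vocab + (1 - g_p) * (
        g_p_tilde * p_copy_title + (1 - g_p_tilde) * p_copy_entities)

def mask_repeats(scores, generated, vocab, stopwords):
    # A non-stopword that already appeared in the output cannot be chosen again.
    scores = scores.clone()
    for tok in generated:
        if tok in vocab and tok not in stopwords:
            scores[vocab[tok]] = float("-inf")
    return scores

p_mix = final_distribution(torch.tensor([0.7, 0.1, 0.1, 0.1]),
                           torch.tensor([0.0, 0.9, 0.1, 0.0]),
                           torch.tensor([0.0, 0.2, 0.8, 0.0]),
                           g_p=0.6, g_p_tilde=0.5)
print(p_mix.sum())  # still a valid distribution (sums to 1)

vocab = {"the": 0, "snail": 1, "maspin": 2, "of": 3}
scores = torch.tensor([0.1, 2.0, 1.5, 0.3])
print(mask_repeats(scores, generated=["snail"], vocab=vocab, stopwords={"the", "of"}))
```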
Experiment
Data
We collect biomedical papers from the PMC Open Access Subset. 5 To construct ground truth for new title prediction, if a human-written paper A cites a paper B, we assume the title of A is generated from B's conclusion and future work section. We construct background knowledge graphs from 1,687,060 papers, which include 30,483 entities and 875,698 relations. Table 2 shows the detailed data statistics. The hyperparameters of our model are presented in the Appendix.
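A sketch of how the ground truth for new-title prediction described above could be assembled from a citation graph; the record fields and helper name are hypothetical, not the authors' released data format.

```python
# If paper A cites paper B, pair B's conclusion-and-future-work section with
# A's title as one (input, target) training example.
def build_title_pairs(papers):
    """papers: dict id -> {"title": str, "conclusion_future_work": str, "cites": [ids]}"""
    pairs = []
    for a_id, a in papers.items():
        for b_id in a["cites"]:
            b = papers.get(b_id)
            if b and b["conclusion_future_work"]:
                pairs.append((b["conclusion_future_work"], a["title"]))
    return pairs

toy = {
    "A": {"title": "Role of maspin in cancer", "conclusion_future_work": "", "cites": ["B"]},
    "B": {"title": "Snail regulates maspin", "conclusion_future_work": "Future work: targeting Snail signaling.", "cites": []},
}
print(build_title_pairs(toy))  # [("Future work: targeting Snail signaling.", "Role of maspin in cancer")]
```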
Automatic Evaluation
Previous work (Lowe et al., 2015) has proven it to be a major challenge to automatically evaluate long text generation. Following the story generation work (Fan et al., 2018), we use METEOR (Denkowski and Lavie, 2014) to measure the topic relevance towards given titles and use perplexity to further evaluate the quality of the language model. The perplexity scores of our model are based on the language model 6 learned on other PubMed papers (500,000 titles, 50,000 abstracts, 50,000 conclusions and future work) which are not used for training or testing in our experiment. 7 The results are shown in Table 3. We can see that our framework outperforms all previous approaches.
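For reference, perplexity here is the exponentiated average negative log-likelihood that the held-out language model assigns to a generated sequence; a minimal sketch of that computation (the language model itself is assumed to be given) is below.

```python
# Perplexity from per-token log-probabilities under a separately trained LM.
import math

def perplexity(token_log_probs):
    """token_log_probs: log P(w_t | w_<t) for every generated token (natural log)."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

print(perplexity([-2.1, -0.7, -1.3, -3.0]))  # exp(1.775) ~= 5.9
```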
Turing Test
Similar to (Wang et al., 2018b), we conduct Turing tests with a biomedical expert (non-native speaker) and a non-expert (native speaker). Each human judge is asked to compare a system output and a human-authored string, and select the better one. Table 4 shows the results on 50 pairs in each setting. We can see that PaperRobot-generated abstracts are chosen over human-written ones by the expert up to 30% of the time, conclusion and future work up to 24% of the time, and new titles up to 12% of the time. We do not observe that the domain expert performs significantly better than the non-expert, because they tend to focus on different aspects: the expert focuses on content (entities, topics, etc.) while the non-expert focuses on the language.
Human Post-Editing
In order to measure the effectiveness of PaperRobot acting as a writing assistant, we randomly select 50 paper abstracts generated by the system during the first iteration and ask the domain expert to edit them until he thinks they are informative and coherent. The BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and TER (Snover et al., 2006) scores comparing the abstracts before and after human editing are presented in Table 5. It took about 40 minutes for the expert to finish editing 50 abstracts. Table 1 includes the post-edited example. We can see that most edits are stylistic changes.
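A hedged sketch of how the pre- versus post-edited abstracts could be scored with BLEU-n; we use NLTK's sentence_bleu here purely for illustration, and the paper's actual evaluation tooling may differ (ROUGE and TER are omitted).

```python
# Compare a system abstract against its human post-edited version with BLEU-1..4.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_n(system_abstract, edited_abstract, n):
    weights = tuple(1.0 / n for _ in range(n))
    return sentence_bleu([edited_abstract.split()], system_abstract.split(),
                         weights=weights,
                         smoothing_function=SmoothingFunction().method1)

sys_abs = "snail negatively inhibited the expression of maspin"
edited = "snail negatively inhibits the expression of maspin"
print([round(bleu_n(sys_abs, edited, n), 3) for n in (1, 2, 3, 4)])
```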
Analysis and Discussions
To better justify the function of each component, we conduct ablation studies by removing memory networks, link prediction, and repetition removal respectively. The results are shown in Table 6. We can see that the approach without memory networks tends to diverge from the main topic, especially for generating long texts such as abstracts (the detailed length statistics are shown in Table 8). From Table 6 we can see the later parts of the abstract (Methods and Results) include topically irrelevant entities such as "imipramine" which is used to treat depression instead of human prostate cancer.
Link prediction successfully introduces new and topically related ideas, such as "RT-PCR" and "western blot" which are two methods for analyzing the expression level of Snail protein, as also mentioned in the human written abstract in Table 1. Table 7 shows more examples of entities which are related to the entities in input titles based on link prediction. We can see that the predicted entities are often genes or proteins which cause the disease mentioned in a given title, or other diseases from the same family.
Our simple beam search based masking method successfully removes some repeated words and phrases and thus produces more informative output. The plagiarism check in Table 9 shows our model is creative, because it's not simply copying from the human input.
Remaining Challenges
Our generation model is still largely dependent on the language model and extracted facts, and thus it lacks knowledge reasoning. It generates a few incorrect abbreviations such as "Organophosphates (BA)", "chronic kidney disease (UC)" and "Fibrosis (DC)" because they appear rarely in the training data and thus their contextual representations are not reliable. It also generates some incorrect numbers (e.g., "The patients were divided into four groups: Group 1, Group B...") and pronouns (e.g., "A 63-year-old man was referred to our hospital ... she was treated with the use of the descending coronary artery").
Table 6: Ablation Test Results on the Same Title in Table 1
Abstract
Without Memory Networks: Background: Snail has been reported to exhibit a variety of biological functions. In this study, we investigated the effect of negatively on maspin demethylation in human prostate cancer cells. Methods: Quantitative real-time PCR and western blot analysis were used to investigate the effects of the demethylating agent on the expression of the protein kinase (TF) gene promoter. Results: The results showed that the presence of a single dose of 50 µM in a dose-dependent manner, whereas the level of the BMP imipramine was significantly higher than that of the control group.
Without Link Prediction: Background: Snail has been shown to be associated with poor prognosis. In this study, we investigated the effect of negatively on the expression of maspin in human prostate cancer cells. Methods: Cells were treated with a single dose of radiotherapy for 24 h, and was used to investigate the significance of a quantitative factor for the treatment of the disease. Results: The remaining controls showed a significant increase in the G2/M phase of the tumor suppressor protein (p<0.05).
Without Repetition Removal: Background: Snail is a major health problem in human malignancies. However, the role of Snail on the expression of maspin in human prostate cancer cells is not well understood. The aim of this study was to investigate the effect of Snail on the expression of maspin in human prostate cancer cells. Methods: The expression of the expression of Snail and maspin was investigated using quantitative RT-PCR and western blot analysis. Results: The remaining overall survival (OS) and overall survival (OS) were analyzed.
Conclusion and Future Work
Without Memory Networks: In summary, our study demonstrated that negatively inhibited the expression of the BMP imipramine in human prostate cancer cells. Our findings suggest that the inhibition of maspin may be a promising therapeutic strategy for the treatment.
Without Link Prediction: In summary, our results demonstrate that negatively inhibited the expression of maspin in human prostate cancer cells. Our findings suggest that the combination of radiotherapy may be a potential therapeutic target for the treatment of disease.
Without Repetition Removal: In summary, our results demonstrate that snail inhibited the expression of maspin in human prostatic cells. The expression of snail in PC-3 cells by snail, and the expression of maspin was observed in the presence of the expression of maspin.
New Title
Without Memory Networks: Protective effects of homolog on human breast cancer cells by inhibiting the Endoplasmic Reticulum Stress
Without Link Prediction: The role of prostate cancer in human breast cancer cells
Without Repetition Removal: The role of maspin and maspin in human breast cancer cells
All of the system-generated titles are declarative sentences, while human-generated titles are often more engaging (e.g., "Does HPV play any role in the initiation or prognosis of endometrial adenocarcinomas?"). Human-generated titles often include more concrete and detailed ideas such as "etumorType, An Algorithm of Discriminating Cancer Types for Circulating Tumor Cells or Cell-free DNAs in Blood", and even create new entity abbreviations such as etumorType in this example.
Requirements to Make PaperRobot Work: Case Study on NLP Domain
When a cool Natural Language Processing (NLP) system like PaperRobot is built, it's natural to ask whether she can benefit the NLP community itself. We re-build the system based on 23,594 NLP papers from the new ACL Anthology Network (Radev et al., 2013). For knowledge extraction we apply our previous system trained for the NLP domain (Luan et al., 2018). But the results are much less satisfactory compared to the biomedical domain. Due to the small size of data, the language model is not able to effectively copy out-of-vocabulary words and thus the output is often too generic. For example, given a title "Statistics based hybrid approach to Chinese base phrase identification", PaperRobot generates a fluent but uninformative abstract "This paper describes a novel approach to the task of Chinese-base-phrase identification. We first utilize the solid foundation for the Chinese parser, and we show that our tool can be easily extended to meet the needs of the sentence structure.". Moreover, compared to the biomedical domain, the types of entities and relations in the NLP domain are rather coarse-grained, which often leads to inaccurate prediction of related entities. For example, for an NLP paper title "Extracting molecular binding relationships from biomedical text", PaperRobot mistakenly extracts "prolog" as a related entity and generates an abstract "In this paper, we present a novel approach to the problem of extracting relationships among the prolog program. We present a system that uses a macromolecular binding relationships to extract the relationships between the abstracts of the entry. The results show that the system is able to extract the most important concepts in the prolog program.".
Related Work
Link Prediction. Translation-based approaches (Nickel et al., 2011; Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Ji et al., 2015a) have been widely exploited for link prediction. Compared with these studies, we are the first to incorporate multi-head graph attention (Sukhbaatar et al., 2015; Madotto et al., 2018; Veličković et al., 2018) to encourage the model to capture multi-aspect relevance among nodes. Similar to (Wang and Li, 2016; Xu et al., 2017), we enrich entity representation by combining the contextual sentences that include the target entity and its neighbors from the graph structure. This is the first work to incorporate new idea creation via link prediction into automatic paper writing.
Knowledge-driven Generation. Deep Neural Networks have been applied to generate natural language to describe structured knowledge bases (Duma and Klein, 2013;Konstas and Lapata, 2013;Flanigan et al., 2016;Hardy and Vlachos, 2018;Pourdamghani et al., 2016;Trisedya et al., 2018;Xu et al., 2018;Madotto et al., 2018;Nie et al., 2018), biographies based on attributes (Lebret et al., 2016;Chisholm et al., 2017;Kaffee et al., 2018;Wang et al., 2018a;Wiseman et al., 2018), and image/video captions based on background entities and events (Krishnamoorthy et al., 2013;Lu et al., 2018). To handle unknown words, we design an architecture similar to pointer-generator networks (See et al., 2017) and copy mechanism (Gu et al., 2016). Some interesting applications include generating abstracts based on titles for the natural language processing domain (Wang et al., 2018b), generating a poster (Qiang et al., 2016) or a science news blog title (Vadapalli et al., 2018) about a published paper. This is the first work on automatic writing of key paper elements for the biomedical domain, especially conclusion and future work, and follow-on paper titles.
Conclusions and Future Work
We build a PaperRobot who can predict related entities for an input title and write some key elements of a new paper (abstract, conclusion and future work) and predict a new title. Automatic evaluations and human Turing tests both demonstrate her promising performance. PaperRobot is merely an assistant to help scientists speed up scientific discovery and production. Conducting experiments is beyond her scope, and each of her current components still requires human intervention: constructed knowledge graphs cannot cover all technical details, predicted new links need to be verified, and paper drafts need further editing. In the future, we plan to develop techniques for extracting entities of more fine-grained entity types, and extend PaperRobot to write related work, predict authors, their affiliations and publication venues.
Figure 1: PaperRobot Incremental Writing
Figure 3: Biomedical Knowledge Extraction and Link Prediction Example (dashed lines are predicted links)
Table 2: Paper Writing Statistics

Table 3: Automatic Evaluation on Paper Writing for Diagnostic Tasks (%). The Pointer Network can be viewed as removing the memory network part from our approach without repetition removal.
Model | Title-to-Abstract (Perplexity / METEOR) | Abstract-to-Conclusion and Future Work (Perplexity / METEOR) | Conclusion and Future Work-to-Title (Perplexity / METEOR)
Seq2seq (Bahdanau et al., 2015) | 19.6 / 9.1 | 44.4 / 8.6 | 49.7 / 6.0
Editing Network (Wang et al., 2018b) | 18.8 / 9.2 | 30.5 / 8.7 | 55.7 / 5.5
Pointer Network (See et al., 2017) | 146.7 / 8.5 | 74.0 / 8.1 | 47.1 / 6.6
Our Approach (-Repetition Removal) | 13.4 / 12.4 | 24.9 / 12.3 | 31.8 / 7.4
Our Approach | 11.5 / 13.0 | 18.3 / 11.2 | 14.8 / 8.9
Table 4: Turing Test Human Subject Passing Rates (%). Percentages show how often a human judge chooses our system's output over human's when it is mixed with a human-authored string. If the output strings (e.g., abstracts) are based on the same input string (e.g., title), the Input condition is marked "Same", otherwise "Different".
Table 5: Evaluation on Human Post-Editing (%)
BLEU1 | BLEU2 | BLEU3 | BLEU4 | ROUGE | TER
59.6 | 58.1 | 56.7 | 55.4 | 73.3 | 35.2
Table 7: More Link Prediction Examples (bold words are entities detected from titles)
Titles | Predicted Related Entities
Pseudoachondroplasia/COMP translating from the bench to the bedside | osteoarthritis; skeletal dysplasia; thrombospondin-5
Role of ceramide in diabetes mellitus: evidence and mechanisms | diabetes insulin ceramide; metabolic disease
Exuberant clinical picture of Buschke-Fischer-Brauer palmoplantar keratoderma in bedridden patient | neoplasms; retinoids; autosomal dominant disease
Relationship between serum adipokine levels and radiographic progression in patients with ankylosing spondylitis | leptin; rheumatic diseases; adiponectin; necrosis; DKK-1; IL-6-RFP

Table 8: The Average Number of Words of System and Human Output
 | Abstract | Conclusion and Future Work | Title
System | 112.4 | 88.1 | 16.5
Human | 106.5 | 105.5 | 13.0

Table 9: Plagiarism Check: Percentage (%) of n-grams in human input which appear in system generated output for test data.
Output | 1 | 2 | 3 | 4 | 5
Abstracts | 58.3 | 20.1 | 8.03 | 3.60 | 1.46
Conclusions | 43.8 | 12.5 | 5.52 | 2.58 | 1.28
Titles | 20.1 | 1.31 | 0.23 | 0.06 | 0.00
1 The programs, data and resources are publicly available for research purposes at: https://github.com/EagleW/PaperRobot
2 http://dan.corlan.net/medline-trend/language/absolute.html
3 During training, we truncate both the input and the output to around 120 tokens to expedite training. We label words with frequency < 5 as out-of-vocabulary.
4 We set $\varphi = 3$ since it performs the best on the development set.
5 ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_package/
6 https://github.com/pytorch/examples/tree/master/word_language_model
7 The perplexity scores of the language model are in the Appendix.
Acknowledgments
The knowledge extraction and prediction components were supported by the U.S. NSF No. 1741634 and Tencent AI Lab Rhino-Bird Gift Fund. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 5th International Conference on Learning Representations.
Rossana Berardi, Francesca Morgese, Azzurra Onofri, Paola Mazzanti, Mirco Pistelli, Zelmira Ballatore, Agnese Savini, Mariagrazia De Lisa, Miriam Caramanti, Silvia Rinaldi, et al. 2013. Role of maspin in cancer. Clinical and Translational Medicine.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems.
Andrew Chisholm, Will Radford, and Ben Hachey. 2017. Learning to generate one-sentence biographies from Wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.
Allan Peter Davis, Cynthia J. Grondin, Robin J. Johnson, Daniela Sciaky, Benjamin L. King, Roy McMorran, Jolene Wiegers, Thomas C. Wiegers, and Carolyn J. Mattingly. 2016. The Comparative Toxicogenomics Database: update 2017. Nucleic Acids Research.
Michael Denkowski and Alon Lavie. 2014. Meteor Universal: Language specific translation evaluation for any target language. In Proceedings of the 9th Workshop on Statistical Machine Translation.
Daniel Duma and Ewan Klein. 2013. Generating natural language from linked data: Unsupervised template extraction. In Proceedings of the 10th International Conference on Computational Semantics.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. Generation from Abstract Meaning Representation using tree transducers. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Jacob G. Foster, Andrey Rzhetsky, and James A. Evans. 2015. Tradition and innovation in scientists' research strategies. American Sociological Review.
Mary Ellen Foster and Michael White. 2007. Avoiding repetition in generated text. In Proceedings of the 11th European Workshop on Natural Language Generation.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015a. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.
Ming Ji, Qi He, Jiawei Han, and Scott Spangler. 2015b. Mining strong relevance between heterogeneous entities from unstructured biomedical data. Data Mining and Knowledge Discovery, 29:976-998.
Lucie-Aimée Kaffee, Hady Elsahar, Pavlos Vougiouklis, Christophe Gravier, Frederique Laforest, Jonathon Hare, and Elena Simperl. 2018. Learning to generate Wikipedia summaries for underserved languages from Wikidata. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research.
Niveda Krishnamoorthy, Girish Malkarnenkar, Raymond J. Mooney, Kate Saenko, and Sergio Guadarrama. 2013. Generating natural-language video descriptions using text-mined knowledge. In Proceedings of the 27th AAAI Conference on Artificial Intelligence.
Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of Text Summarization Branches Out.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the 29th AAAI Conference on Artificial Intelligence.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue.
Di Lu, Spencer Whitehead, Lifu Huang, Heng Ji, and Shih-Fu Chang. 2018. Entity-aware image caption generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
Corey L. Neal, Veronica Henderson, Bethany N. Smith, Danielle McKeithen, Tisheeka Graham, Baohan T. Vo, and Valerie A. Odero-Marah. 2012. Snail transcription factor negatively regulates maspin tumor suppressor in human prostate cancer cells. BMC Cancer.
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning.
Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided neural networks for high fidelity data-to-text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.
Steven Pinker. 2014. Why academics stink at writing. The Chronicle of Higher Education.
Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating English from Abstract Meaning Representations. In Proceedings of the 9th International Natural Language Generation Conference.
Yuting Qiang, Yanwei Fu, Yanwen Guo, Zhi-Hua Zhou, and Leonid Sigal. 2016. Learning to generate posters of scientific papers. In Proceedings of the 30th AAAI Conference on Artificial Intelligence.
Dragomir R. Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The ACL Anthology Network corpus. Language Resources and Evaluation, pages 1-26.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.
Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order-planning neural text generation from structured data. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems.
Cutting-off redundant repeating generations for neural abstractive summarization. Jun Suzuki, Masaaki Nagata, Proceedings of the 15th Conference of the European Chapter. the 15th Conference of the European ChapterAssociation for Computational LinguisticsJun Suzuki and Masaaki Nagata. 2017. Cutting-off re- dundant repeating generations for neural abstractive summarization. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics.
GTR-LSTM: A triple encoder for sentence generation from RDF data. Jianzhong Bayu Distiawan Trisedya, Rui Qi, Wei Zhang, Wang, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsBayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. GTR-LSTM: A triple encoder for sentence generation from RDF data. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics.
Modeling coverage for neural machine translation. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsZhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics.
When science journalism meets artificial intelligence: An interactive demonstration. Raghuram Vadapalli, Bakhtiyar Syed, Nishant Prabhu, Vasudeva Balaji Vasan Srinivasan, Varma, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingRaghuram Vadapalli, Bakhtiyar Syed, Nishant Prabhu, Balaji Vasan Srinivasan, and Vasudeva Varma. 2018. When science journalism meets artificial intelli- gence: An interactive demonstration. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Scientists may be reaching a peak in reading habits. Richard Van Noorden, Nature. Richard Van Noorden. 2014. Scientists may be reach- ing a peak in reading habits. Nature.
Graph attention networks. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio, Proceedings of the 8th International Conference on Learning Representations. the 8th International Conference on Learning RepresentationsPetar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. Proceedings of the 8th International Conference on Learning Represen- tations.
Describing a knowledge base. Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Ji Heng, Kevin Knight, Proceedings of the 11th International Conference on Natural Language Generation. the 11th International Conference on Natural Language GenerationQingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. 2018a. Describing a knowledge base. In Proceed- ings of the 11th International Conference on Natural Language Generation.
Paper abstract writing through editing mechanism. Qingyun Wang, Zhihao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Ji Heng, Kevin Knight, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsQingyun Wang, Zhihao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Heng Ji, and Kevin Knight. 2018b. Paper abstract writing through edit- ing mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics.
Knowledge graph embedding by translating on hyperplanes. Zhen Wang, Jianwen Zhang, Jianlin Feng, Zheng Chen, Proceedings of the 28th AAAI Conference on Artificial Intelligence. the 28th AAAI Conference on Artificial IntelligenceZhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Proceedings of the 28th AAAI Conference on Artificial Intelligence.
Text-enhanced representation learning for knowledge graph. Zhigang Wang, Juan-Zi Li, Proceedings of the 25th International Joint Conference on Artificial Intelligence. the 25th International Joint Conference on Artificial IntelligenceZhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In Proceedings of the 25th International Joint Confer- ence on Artificial Intelligence.
PubTator: a web-based text mining tool for assisting biocuration. Chih-Hsuan Wei, Hung-Yu Kao, Zhiyong Lu, Nucleic acids research. Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2013. PubTator: a web-based text mining tool for assisting biocuration. Nucleic acids research.
Incorporating background knowledge into video description generation. Spencer Whitehead, Heng Ji, Mohit Bansal, Shih-Fu Chang, Clare Voss, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingSpencer Whitehead, Heng Ji, Mohit Bansal, Shih-Fu Chang, and Clare Voss. 2018. Incorporating back- ground knowledge into video description genera- tion. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing.
Learning neural templates for text generation. Sam Wiseman, Stuart Shieber, Alexander Rush, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingSam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Image captioning and visual question answering based on attributes and external knowledge. Qi Wu, Chunhua Shen, Peng Wang, Anthony Dick, Anton Van Den, Hengel, Proceedings of the 2018 IEEE transactions on pattern analysis and machine intelligence. the 2018 IEEE transactions on pattern analysis and machine intelligenceQi Wu, Chunhua Shen, Peng Wang, Anthony Dick, and Anton van den Hengel. 2018. Image captioning and visual question answering based on attributes and external knowledge. In Proceedings of the 2018 IEEE transactions on pattern analysis and machine intelligence.
Ziang Xie, arXiv:1711.09534Neural text generation: A practical guide. arXiv preprintZiang Xie. 2017. Neural text generation: A practical guide. arXiv preprint arXiv:1711.09534.
Knowledge graph representation with jointly structural and textual encoding. Jiacheng Xu, Kan Chen, Xipeng Qiu, Xuanjing Huang, Proceedings of the 26th International Joint Conference on Artificial Intelligence. the 26th International Joint Conference on Artificial IntelligenceJiacheng Xu, Kan Chen, Xipeng Qiu, and Xuanjing Huang. 2017. Knowledge graph representation with jointly structural and textual encoding. In Proceed- ings of the 26th International Joint Conference on Artificial Intelligence.
SQL-to-text generation with graph-to-sequence model. Kun Xu, Lingfei Wu, Zhiguo Wang, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingYansong Feng, and Vadim SheininKun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, and Vadim Sheinin. 2018. SQL-to-text generation with graph-to-sequence model. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing.
| [
"https://github.com/pytorch/examples/"
] |
[
"ARAML: A Stable Adversarial Training Framework for Text Generation",
"ARAML: A Stable Adversarial Training Framework for Text Generation"
] | [
"Pei Ke \nInstitute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina\n",
"Fei Huang f-huang18@mails.tsinghua.edu.cnaihuang@tsinghua.edu.cn \nInstitute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina\n",
"Minlie Huang \nInstitute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina\n",
"Xiaoyan Zhu \nInstitute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina\n"
] | [
"Institute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina",
"Institute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina",
"Institute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina",
"Institute for Artificial Intelligence\nState Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology\nTsinghua University\n100084BeijingChina"
] | [
"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing"
] | Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process. | 10.18653/v1/d19-1436 | [
"https://www.aclweb.org/anthology/D19-1436.pdf"
] | 201,103,950 | 1908.07195 | f696879a5459d4ceaa4a403e61b804050ebdedf0 |
ARAML: A Stable Adversarial Training Framework for Text Generation
November 3-7, 2019
Pei Ke
Institute for Artificial Intelligence
State Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology
Tsinghua University
100084BeijingChina
Fei Huang f-huang18@mails.tsinghua.edu.cn aihuang@tsinghua.edu.cn
Institute for Artificial Intelligence
State Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology
Tsinghua University
100084BeijingChina
Minlie Huang
Institute for Artificial Intelligence
State Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology
Tsinghua University
100084BeijingChina
Xiaoyan Zhu
Institute for Artificial Intelligence
State Key Lab of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology
Tsinghua University
100084BeijingChina
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, November 3-7, 2019
Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.
Introduction
Natural text generation, as a key task in NLP, has been advanced substantially thanks to the flourishing of neural models (Bengio et al., 2003; Mikolov et al., 2010). Typical frameworks such as sequence-to-sequence (seq2seq) have been applied to various generation tasks, including machine translation (Sutskever et al., 2014) and dialogue generation (Vinyals and Le, 2015). The standard paradigm to train such neural models is maximum likelihood estimation (MLE), which maximizes the log-likelihood of observing each word in the text given the ground-truth preceding context (Graves, 2013).
Although widely used, MLE suffers from the exposure bias problem (Bengio et al., 2015;Ranzato et al., 2016): during test, the model sequentially predicts the next word conditioned on its previous generated words while during training conditioned on ground-truth words. To tackle this * Equal contribution † Corresponding author: Minlie Huang problem, generative adversarial networks (GAN) with reinforcement learning (RL) training approaches have been introduced to text generation tasks Che et al., 2017;Lin et al., 2017;Shi et al., 2018;, where the discriminator is trained to distinguish real and generated text samples to provide reward signals for the generator, and the generator is optimized via policy gradient . However, recent studies have shown that potential issues of training GANs on discrete data are more severe than exposure bias (Semeniuta1 et al., 2018;Caccia et al., 2018). One of the fundamental issues when generating discrete text samples with GANs is training instability. Updating the generator with policy gradient always leads to an unstable training process because it's difficult for the generator to derive positive and stable reward signals from the discriminator even with careful pretraining (Che et al., 2017). As a result, the generator gets lost due to the high variance of reward signals and the training process may finally collapse .
In this paper, we propose a novel adversarial training framework called Adversarial Reward Augmented Maximum Likelihood (ARAML) to deal with the instability issue of training GANs for text generation. At each iteration of adversarial training, we first train the discriminator to assign higher rewards to real data than to generated samples. Then, inspired by reward augmented maximum likelihood (RAML) (Norouzi et al., 2016), the generator is updated on the samples acquired from a stationary distribution with maximum likelihood estimation (MLE), weighted by the discriminator's rewards. This stationary distribution is designed to guarantee that training samples are surrounding the real data, thus the exploration space of our generator is indeed restricted by the MLE training objective, resulting in more stable training. Compared to other text GANs with RL training techniques, our framework acquires samples from the stationary distribution rather than the generator's distribution, and uses RAML training paradigm to optimize the generator instead of policy gradient. Our contributions are mainly as follows:
• We analyze the fundamental issue of current GANs for text generation from the perspective of training instability.
• We propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML), which incorporates stable RAML training into the adversarial training paradigm. Experimental results on three text generation tasks show the effectiveness of our method.
Related Work
Recently, text generation has been widely studied with neural models trained with maximum likelihood estimation (Graves, 2013). However, MLE tends to generate universal text. Various methods have been proposed to enhance the generation quality by refining the objective function (Mou et al., 2016) or modifying the generation distribution with external information like topic (Xing et al., 2017), sentence type (Ke et al., 2018), emotion (Zhou et al., 2018a) and knowledge (Zhou et al., 2018b). As mentioned above, MLE suffers from the exposure bias problem (Bengio et al., 2015; Ranzato et al., 2016). Thus, reinforcement learning has been introduced to text generation tasks, such as policy gradient (Ranzato et al., 2016) and actor-critic (Bahdanau et al., 2017). Norouzi et al. (2016) proposed an efficient and stable approach called Reward Augmented Maximum Likelihood (RAML), which connects the log-likelihood and expected rewards to incorporate the MLE training objective into the RL framework.
Since some text generation tasks have no explicit metrics to be directly optimized, adversarial training has been applied to generating discrete text samples with a discriminator to learn a proper reward. For instance, SeqGAN devised a discriminator to distinguish the real data and generated samples, and a generator to maximize the reward from the discriminator via policy gradient. Other variants of GANs have been proposed to improve the generator or the discriminator. To improve the generator, MaliGAN (Che et al., 2017) developed a normalized maximum likelihood optimization target for the generator to stably model the discrete sequences. LeakGAN guided the generator with reward signals leaked from the discriminator at all generation steps to deal with the long text generation task. MaskGAN employed an actor-critic architecture to make the generator fill in missing text conditioned on the surrounding context, which is expected to mitigate the problem of mode collapse. As for the discriminator, RankGAN (Lin et al., 2017) replaced the traditional discriminator with a ranker to learn the relative ranking information between the real texts and generated ones. Inverse reinforcement learning (Shi et al., 2018) used a trainable reward approximator as the discriminator to provide dense reward signals at each generation step. DPGAN introduced a language model based discriminator and regarded cross-entropy as rewards to promote the diversity of generation results.
The most similar works to our model are RAML (Norouzi et al., 2016) and MaliGAN (Che et al., 2017): 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text generation tasks, particularly those without explicit evaluation metrics. 2) Unlike MaliGAN, we acquire samples from a fixed distribution near the real data rather than the generator's distribution, which is expected to make the training process more stable.
Model
Task Definition and Model Overview
Text generation can be formulated as follows: given the real data distribution P data (X), the task is to train a generative model G θ where P G θ (X) can fit P data (X) well. In this formulation, X = x 1 x 2 · · · x m and x t (1 ≤ t ≤ m) denotes a word in the vocabulary V.

Figure 1 shows the overview of our model ARAML. This adversarial training framework consists of two phases: 1) The discriminator is trained to assign higher rewards to real data than to generated data. 2) The generator is trained on the samples acquired from a stationary distribution with the reward augmented MLE training objective. This training paradigm of the generator indeed constrains the search space with the MLE training objective, which alleviates the issue of unstable training.

Figure 1: Overview of ARAML. The training samples are acquired from a stationary distribution P s based on the real data. The generator is then trained on the samples augmented by the discriminator's rewards. The discriminator is trained to distinguish real data and generated data.
Discriminator
The discriminator D φ aims to distinguish real data and generated data like other GANs. Inspired by Least-Square GAN (Mao et al., 2017), we devise the loss function as follows:
$$\mathcal{L}_{D_\phi} = \frac{1}{2}\,\mathbb{E}_{X\sim P_{data}(X)}\big[(D_\phi(X)-1)^2\big] + \frac{1}{2}\,\mathbb{E}_{X\sim P_{G_\theta}(X)}\big[D_\phi(X)^2\big] \quad (1)$$
This loss function forces the discriminator to assign higher rewards to real data than to generated data, so the discriminator can learn to provide more proper rewards as the training proceeds.
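For concreteness, a minimal PyTorch-style sketch of this least-squares objective is given below; the discriminator module `D` and the `real_batch`/`fake_batch` tensors are placeholders for illustration, not the authors' released implementation.

```python
import torch

def discriminator_loss(D, real_batch, fake_batch):
    """Least-squares objective of Eq. (1): push D(real) towards 1 and D(fake) towards 0."""
    d_real = D(real_batch)   # rewards assigned to real data
    d_fake = D(fake_batch)   # rewards assigned to generated data
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()
```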
Generator
The training objective of our generator G θ is derived from the objective of other discrete GANs with RL training method:
$$\mathcal{L}_{RL,\theta} = -\mathbb{E}_{X\sim P_{G_\theta}(X)}[r_\phi(X)] - \tau H(P_{G_\theta}(X)) \quad (2)$$
where r φ (X) denotes the rewards from the discriminator D φ and the entropy regularized term H(P G θ (X)) encourages G θ to generate diverse text samples. τ is a temperature hyper-parameter to balance these two terms.
As mentioned above, discrete GANs suffer from the instability issue due to policy gradient and are thus difficult to train. Inspired by RAML (Norouzi et al., 2016), we introduce an exponential payoff distribution Q φ (X) to connect the RL loss with the RAML loss:
$$Q_\phi(X) = \frac{1}{Z}\exp\big(r_\phi(X)/\tau\big) \quad (3)$$
where $Z = \sum_{X} \exp(r_\phi(X)/\tau)$. Thus, we can rewrite L RL,θ with P G θ (X) and Q φ (X) as follows:
$$\mathcal{L}_{RL,\theta} = \tau\,\mathrm{KL}\big(P_{G_\theta}(X)\,\|\,Q_\phi(X)\big) + \text{constant} \quad (4)$$
Following RAML, we remove the constant term and optimize the KL divergence in the opposite direction:

$$\mathcal{L}_{RAML,\theta} = \mathrm{KL}\big(Q_\phi(X)\,\|\,P_{G_\theta}(X)\big) = -\mathbb{E}_{X\sim Q_\phi(X)}[\log P_{G_\theta}(X)] - H(Q_\phi(X)) = -\mathbb{E}_{X\sim Q_\phi(X)}[\log P_{G_\theta}(X)] + \text{constant} \quad (5)$$

where H(Q φ (X)) is a constant in the training phase of the generator. It has been proved that L RL,θ and L RAML,θ are equivalent up to their first-order Taylor approximations, and they have the same global optimum (Norouzi et al., 2016). L RAML,θ can be trained in an MLE-like fashion, but sampling from the distribution Q φ (X) is intractable in the adversarial setting, because Q φ (X) varies with the discriminator D φ. Thus, we introduce importance sampling to separate the sampling process from D φ and obtain the final loss function:

$$\mathcal{L}_{G_\theta} = -\mathbb{E}_{X\sim P_s(X)}\big[W_\phi(X)\log P_{G_\theta}(X)\big] \quad (6)$$
where P s (X) denotes a stationary distribution and W φ (X) ∝ Q φ (X)/P s (X). To optimize this loss function, we first construct the fixed distribution P s (X) to get samples, and devise the proper reward function r φ (X) to train the generator in a stable and effective way.
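To make Eq. (6) concrete, here is a minimal sketch of the weighted-MLE update that replaces policy gradient, assuming a PyTorch-style generator that returns per-sequence log-likelihoods; this is an illustration under those assumptions, not the authors' code.

```python
import torch

def generator_loss(seq_log_probs, weights):
    """Eq. (6): MLE on samples from the stationary distribution P_s, weighted by W_phi(X).

    seq_log_probs: tensor [batch], log P_G(X_s) summed over time steps
    weights:       tensor [batch], W_phi(X_s); treated as constants, so gradients only
                   flow through the generator's log-likelihood
    """
    return -(weights.detach() * seq_log_probs).mean()
```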
Sampling
We construct the distribution P s based on P data :
$$P_s(X) = \mathbb{E}_{X\sim P_{data}(X)}\big[P_s(X_s|X)\big] \quad (7)$$
In this way, P s (X s |X) can be designed to guarantee that P s (X) is near P data (X), leading to a more stable training process. To obtain a new sample X s from a real data sample X, we design three steps: sampling an edit distance d, sampling the positions {p 1 , p 2 , · · · , p d } for substitution, and sampling the new words {w 1 , w 2 , · · · , w d } to fill the corresponding positions. Thus, P s (X s |X) can be decomposed into three terms:
$$P_s(X_s|X) = P(d, p, w|X) = P(d|X)\,P(p|X, d)\,P(w|X, d, p) \quad (8)$$
The first step is to sample an edit distance based on a real data sample X, where X = x 1 x 2 · · · x m is a sequence of length m. The number of sentences which have the edit distance e to some input sentence can be computed approximately as below:
$$c(e, m) = \binom{m}{e}\cdot(|V| - 1)^{e} \quad (9)$$
where c(e, m) denotes the number of sentences which have an edit distance e (e ∈ {0, 1, ..., m}) to a sentence of length m, and |V| indicates the size of the vocabulary. We then follow Norouzi et al. (2016) to re-scale the counts by exp{−e/τ} and normalize, so that we can sample an edit distance d* from:
$$P(d = d^*|X) = \frac{\exp\{-d^*/\tau\}\,c(d^*, m)}{\sum_{e=0}^{m}\exp\{-e/\tau\}\,c(e, m)} \quad (10)$$
where τ , as a temperature hyper-parameter, restricts the search space surrounding the original sentence. Larger τ brings more samples with long edit distances.
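A small sketch of this step (Eqs. (9)-(10)) follows; it works in log space because the raw counts overflow for realistic vocabulary sizes, and the function and argument names are illustrative only.

```python
import math
import random

def sample_edit_distance(m, vocab_size, tau):
    """Sample d* with probability proportional to exp(-d/tau) * C(m, d) * (|V|-1)^d."""
    log_scores = [
        math.log(math.comb(m, e)) + e * math.log(vocab_size - 1) - e / tau
        for e in range(m + 1)
    ]
    max_log = max(log_scores)                      # subtract the max for numerical stability
    scores = [math.exp(s - max_log) for s in log_scores]
    total = sum(scores)
    probs = [s / total for s in scores]
    return random.choices(range(m + 1), weights=probs, k=1)[0]
```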
The next step is to select positions for substitution based on the sampled edit distance d * . Intuitively, we can randomly choose d * distinct positions in X to be replaced by new words. The probability of choosing the position p * is calculated as follows:
$$P(p = p^*|X, d = d^*) = \frac{d^*}{m} \quad (11)$$
Following this sampling strategy, we can obtain the position set {p 1 , p 2 , · · · , p d * }. This strategy approximately guarantees that the edit distance between a new sentence and the original sentence is d * .
At the final step, our model determines new words for substitution at each sampled position p j (j = 1, 2, ..., d * ). We can formulate this sampling process from the original sequence X to a new sample X s as a sequential transition X = X 0 → X 1 → · · · → X d * = X s . At each step from X j−1 to X j (j = 1, · · · , d * ), we first sample a new word w j from the distribution P (w|X j−1 , p = p j ), then replace the old word at position p j of X j−1 to obtain X j . The whole sampling process can be decomposed as follows:
$$P(w|X, d = d^*, p = \{p_1, p_2, \cdots, p_{d^*}\}) = \prod_{j=1}^{d^*} P(w_j|X_{j-1}, p = p_j) \quad (12)$$
There are two common sampling strategies to model P (w|X j−1 , p = p j ), i.e. random sampling and constrained sampling. The random sampling strategy samples a new word w j according to the uniform distribution over the vocabulary V (Norouzi et al., 2016), while the constrained sampling strategy samples w j to maximize the language model score of the target sentence X j (Su et al., 2018; Miao et al., 2019). Here, we adopt constrained sampling in our model and compare the performance of the two strategies in the experiment.
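The sketch below illustrates the position and word substitution steps (Eqs. (11)-(12)) with constrained sampling; `lm_score` is a hypothetical callable standing in for the pretrained language model, and scoring only a random subset of candidate words is an assumption made here for tractability, not a detail taken from the paper.

```python
import random

def substitute_words(sentence, d, vocab, lm_score, n_candidates=100):
    """Eq. (11): choose d distinct positions uniformly; Eq. (12): rewrite them one by one.

    Constrained sampling keeps, at each position, the candidate word that maximises the
    language-model score of the resulting sentence (lm_score is assumed, not specified here).
    """
    positions = random.sample(range(len(sentence)), d)
    new_sentence = list(sentence)
    for p in positions:                                   # sequential transition X_0 -> ... -> X_d*
        candidates = random.sample(vocab, min(n_candidates, len(vocab)))
        best = max(candidates,
                   key=lambda w: lm_score(new_sentence[:p] + [w] + new_sentence[p + 1:]))
        new_sentence[p] = best                            # replace the old word at position p
    return new_sentence
```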
Training
We devise the reward function r φ (X) according to the discriminator's output D φ (X) and the stationary distribution P s (X):
$$r_\phi(X) = \tau\cdot\big[\log P_s(X) + D_\phi(X)\big] \quad (13)$$
Intuitively, this reward function encourages the generator to generate sentences with large sampling probability and high rewards from the discriminator. Thus, the weight of samples W φ (X) can be calculated as follows:
$$W_\phi(X) \propto \frac{Q_\phi(X)}{P_s(X)} \propto \exp\{D_\phi(X)\} \quad (14)$$
So far, we can successfully optimize the generator's loss L G θ via Equation 6. This training paradigm makes our generator avoid the high variance caused by policy gradient and obtain more stable reward signals from the discriminator, because our generator is restricted to exploring the training samples near the real data.
Algorithm 1 Adversarial Reward Augmented Maximum Likelihood
Require: Total adversarial training iterations: N iters; steps of training the generator: G steps; steps of training the discriminator: D steps
1: Pre-train the generator G θ with the MLE loss
2: Generate samples from P G θ
3: Pre-train the discriminator D φ via Eq.(1)
4: Construct P s (X) via Eq.(7)-Eq.(12)
5: for each s = 1, 2, ..., N iters do
6:   for each j = 1, 2, ...
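Putting Eqs. (1), (6) and (13)-(14) together, one adversarial iteration can be sketched as below. This is a reconstruction from the description in this section, not the authors' released code: `sample_near` stands for drawing X s from P s (X s |X), and `G.log_prob`, `G.sample` and `D` are assumed interfaces.

```python
import torch

def araml_iteration(G, D, real_batches, sample_near, g_opt, d_opt, g_steps, d_steps):
    """One adversarial iteration: weighted-MLE generator updates, then LS-GAN discriminator updates."""
    # Generator phase: train on samples drawn near the real data, weighted by exp(D(X_s)) (Eq. 14).
    for batch in real_batches[:g_steps]:
        samples = [sample_near(x) for x in batch]
        weights = torch.exp(D(samples)).detach()
        g_loss = -(weights * G.log_prob(samples)).mean()          # Eq. (6)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    # Discriminator phase: distinguish real data from the generator's own samples (Eq. 1).
    for batch in real_batches[:d_steps]:
        fake = G.sample(len(batch))
        d_loss = 0.5 * ((D(batch) - 1) ** 2).mean() + 0.5 * (D(fake) ** 2).mean()
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```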
Extension to Conditional Text Generation
We have shown our adversarial training framework for text generation tasks without an input. Actually, it can also be extended to conditional text generation tasks like dialogue generation. Given the data distribution P data (C, X) where C, X denote contexts and responses respectively, the objective function of ARAML's generator can be modified as below:
$$\mathcal{L}_{G_\theta} = -\mathbb{E}_{(C,X)\sim P_{data}(C,X)}\,\mathbb{E}_{X_s\sim P_s(X_s|C,X)}\big[W_\phi(C, X_s)\log P_{G_\theta}(X_s|C)\big] \quad (15)$$
where W φ (C, X s ) ∝ exp{D φ (C, X s )} and D φ (C, X s ) is trained to distinguish whether X s is the true response to C.
Comparison with RAML and MaliGAN
The most similar works to our framework are RAML (Norouzi et al., 2016) and MaliGAN (Che et al., 2017). The main difference among them is the training objective of their generators. We have shown different objective functions in Table 1. For comparison, we use the form with no input for all the three models. Our model is greatly inspired by RAML, which gets samples from a non-parametric distribution Q(X) constructed based on a specific reward. Compared to RAML, our reward comes from a learnable discriminator which varies as the adversarial training proceeds rather than a specific reward function. This difference equips our framework with the ability to adapt to the text generation tasks with no explicit evaluation metrics as rewards.
Our model is also similar to MaliGAN, which gets samples from the generator's distribution. In MaliGAN's training objective, G θ also indicates the generator's distribution but it's used in the sampling phase and fixed at each optimization step. The weight of samples is $W_\phi(X) \propto \frac{D_\phi(X)}{1 - D_\phi(X)}$. Different from our model, MaliGAN acquires samples from the generator's distribution P G θ, which usually brings samples with low rewards even with careful pre-training for the generator, leading to training instability. Instead, our framework gets samples from a stationary distribution P s around the real data, so our training process is more stable.
Table 1: Training objectives of generators for RAML, MaliGAN and ARAML.

Model | Training Objective of Generator
RAML | $\mathcal{L}_{G_\theta} = -\mathbb{E}_{X\sim Q(X)}[\log P_{G_\theta}(X)]$
MaliGAN | $\mathcal{L}_{G_\theta} = -\mathbb{E}_{X\sim P_{G_\theta}(X)}[W_\phi(X)\log P_{G_\theta}(X)]$
ARAML | $\mathcal{L}_{G_\theta} = -\mathbb{E}_{X\sim P_s(X)}[W_\phi(X)\log P_{G_\theta}(X)]$

Experiment

Datasets

We evaluated ARAML on three datasets: the COCO image caption dataset (Chen et al., 2015), the EMNLP2017 WMT dataset 1 and the WeiboDial single-turn dialogue dataset (Qian et al., 2018). COCO and EMNLP2017 WMT are common benchmarks with no input to evaluate the performance of discrete GANs, and we followed the existing works to preprocess these datasets (Shi et al., 2018). WeiboDial, as a dialogue dataset, was applied to test the performance of our model with an input trigger. We simply removed post-response pairs containing low-frequency words and randomly selected a subset for our training/test set. The statistics of the three datasets are presented in Table 2.
Baselines
We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:
MLE: an RNN model trained with the MLE objective (Graves, 2013). Its extension, Seq2Seq, can work on the dialogue dataset (Sutskever et al., 2014).
SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator.
LeakGAN: A variant of SeqGAN that provides rewards for the generator based on the leaked information of the discriminator.
MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective (Che et al., 2017).
IRL: This inverse reinforcement learning method replaces the discriminator with a reward approximator to provide dense rewards (Shi et al., 2018).
RAML: An RL approach that incorporates the MLE objective into the RL training framework, regarding BLEU as rewards (Norouzi et al., 2016).
DialogGAN: An extension of SeqGAN tuned to the dialogue generation task, with the MLE objective added to the adversarial objective.
DPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards.
Note that MLE, SeqGAN, LeakGAN, MaliGAN and IRL are the baselines on COCO and EMNLP2017 WMT, while MLE, RAML, DialogGAN, and DPGAN are the baselines on WeiboDial. The original codes are used to test the baselines.
Implementation Details
The implementation details of our model are shown in Table 3. For COCO / EMNLP2017, the generator is an LSTM unit (Hochreiter and Schmidhuber, 1997) with 128 cells, and the discriminator is implemented based on . For WeiboDial, the generator is an encoder-decoder structure with an attention mechanism, where both the encoder and the decoder consist of a two-layer GRU (Cho et al., 2014) with 128 cells. The discriminator is implemented based on (Tao et al., 2018). The language model used in the constrained sampling of ARAML is implemented in the same setting as the generators, and is pretrained on the training set of each dataset. The codes and the datasets are available at https://github.com/kepei1106/ARAML. As for the details of the baselines, the generators of all the baselines except LeakGAN are the same as ours. Note that the generator of LeakGAN consists of a hierarchical LSTM unit, thus we followed the implementation in the original paper. In terms of the differences, the discriminators of GAN baselines are implemented based on the original papers. Other hyper-parameters of baselines, including batch size, learning rate, and pre-training epochs, were set based on the original codes, because the convergence of baselines is sensitive to these hyper-parameters.
Language Generation on COCO and EMNLP2017 WMT
We adopted forward/reverse perplexity and Self-BLEU to evaluate the quality of generated texts. Forward perplexity (PPL-F) indicates the perplexity on the generated data provided by a language model trained on real data to measure the fluency of generated samples. Reverse perplexity (PPL-R) switches the roles of generated data and real data to reflect the discrepancy between the generated distribution and the data distribution. Self-BLEU (S-BLEU) regards each sentence in the generated collection as hypothesis and the others as reference to obtain BLEU scores, which evaluates the diversity of generated results. Results are shown in Table 4. LeakGAN performs best on forward perplexity because it can generate more fluent samples. As for reverse perplexity, our model ARAML beats other baselines, showing that our model can fit the data distribution better. Other GANs, particularly LeakGAN, obtain high reverse perplexity due to mode collapse (Shi et al., 2018), thus they only capture limited fluent expressions, resulting in large discrepancy between the generated distribution and data distribution. ARAML also outperforms the baselines in terms of Self-BLEU, indicating that our model doesn't fall into mode collapse with the help of the MLE training objective and has the ability to generate more diverse sentences.
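Of these metrics, Self-BLEU is the simplest to reproduce; a common implementation (an illustration using NLTK, not necessarily the exact settings used in the paper) treats every generated sentence as the hypothesis and the remaining ones as references:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generated, n=4):
    """Self-BLEU over a list of tokenised sentences; higher values mean less diverse output."""
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(generated):
        refs = generated[:i] + generated[i + 1:]
        scores.append(sentence_bleu(refs, hyp, weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)
```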
We also provide standard deviation of each metric in Table 4, reflecting the stability of each model's performance. Our model ARAML nearly achieves the smallest standard deviation in all the metrics, indicating that our framework outperforms policy gradient in the stability of adversarial training.
Dialogue Generation on WeiboDial
Dialogue evaluation is an open problem and existing works have found that automatic metrics have low correlation to human evaluation (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018). Thus, we resorted to manual evaluation to assess the generation quality on WeiboDial. We randomly sampled 200 posts from the test set and collected the generated results from all the models. For each pair of responses (one from ARAML and the other from a baseline, given the same input post), five annotators were hired to label which response is better (i.e. win, lose or tie) in terms of grammaticality (whether a response itself is grammatical and logical) and relevance (whether a response is appropriate and relevant to the post). The two metrics were evaluated independently.
The evaluation results are shown in Table 5. To measure the inter-annotator agreement, we calculated Fleiss' kappa (Fleiss, 1971) for each pairwise comparison where results show moderate agreement (0.4 ≤ κ ≤ 0.6). We also conducted sign test to check the significance of the differences.
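For reference, Fleiss' kappa over such win/lose/tie annotations can be computed from the count table directly; the following is a standard implementation shown only as an illustration, not code from the paper.

```python
def fleiss_kappa(counts):
    """counts[i][j] = number of annotators labelling item i with category j (e.g. win/lose/tie)."""
    n_items = len(counts)
    n_raters = sum(counts[0])                      # assumes the same number of raters per item
    n_cats = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters) for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)                  # expected agreement by chance
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1)) for row in counts]
    p_bar = sum(p_i) / n_items                     # mean observed agreement
    return (p_bar - p_e) / (1 - p_e)
```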
As shown in Table 5, ARAML performs significantly better than other baselines in all the cases. This result indicates that the samples surrounding true responses provide stable rewards for the generator, and the stable RAML training paradigm significantly enhances the performance in both metrics.

Further Analysis on Stability

To verify the training stability, we conducted experiments on COCO many times and chose the best 5 trials for SeqGAN, LeakGAN, IRL, MaliGAN and ARAML, respectively. Then, we presented the forward/reverse perplexity in the training process in Figure 2. We can see that our model with smaller standard deviation is more stable than other GAN baselines in both metrics. Although LeakGAN reaches the best forward perplexity, its standard deviation is extremely large and it performs badly in reverse perplexity, indicating that it generates limited expressions that are grammatical yet divergent from the data distribution.
Ablation Study
Impact of Temperature
The temperature τ controls the search space surrounding the real data, as analyzed in Section 3.3.1. To investigate its impact on the performance of our model, we fixed all the other hyper-parameters and tested ARAML with different temperatures on COCO. The experimental results are shown in Figure 3. We can see that as the temperature becomes larger, forward perplexity increases gradually while Self-BLEU decreases. As mentioned in Section 3.3.1, large temperatures encourage our generator to explore the samples that are distant from the real data distribution, so the diversity of generated results improves. However, these samples distant from the data distribution are more likely to be poor in fluency, leading to worse forward perplexity. Reverse perplexity is influenced by both generation quality and diversity, so the correlation between temperature and reverse perplexity is not intuitive. We can observe that the model with τ = 0.95 reaches the best reverse perplexity.
Impact of Sampling Strategy
We have mentioned two common sampling strategies in Section 3.3.1, i.e. random sampling and constrained sampling. To analyze their impact, we kept all the model structures and hyper-parameters fixed and tested ARAML with these two strategies on COCO. Table 6 shows the results. It's obvious that random sampling hurts the model performance except Self-BLEU-1, because it indeed makes low-quality samples available to the generator. Exploring these samples degrades the quality and diversity of generated results. Despite the worse performance on automatic metrics, random sampling doesn't affect the training stability of our framework. The standard deviation of ARAML-R is still smaller than that of other GAN baselines.

Case Study

Table 7 presents the examples generated by the models on COCO. We can find that other baselines suffer from grammatical errors (e.g. "in front of flying her kite" from MLE), repetitive expressions (e.g. "A group of people" from IRL) and incoherent statements (e.g. "A group of people sitting on a cell phone" from IRL). By contrast, our model performs well in these sentences and has the ability to generate grammatical and coherent results.

Table 7: Examples of generated sentences on COCO. Grammatical errors are in red, while blue text represents repetitive expressions and green part indicates incoherent statements.

Model | Generated Samples
MLE | A little girl sitting on a beach in front of flying her kite at the beach. / A little boy standing in a room next to a desk.
SeqGAN | A man sitting on a bench with snow board in the background. / A brown gray cat is in the corner of a street.
LeakGAN | A person that is holding something while another kid is standing in the water. / A room with a television, mantle, and a chair.
MaliGAN | A man with a shirt on holding one large pink giant and white kite. / A couple and vases are outside on the bed.
IRL | A group of people wearing helmet sitting down on a cell phone. / A group of people sitting in the middle of tracks.
ARAML | A man is wearing a hat and holding a toothbrush as he stands on the grass of a field. / A boy reading a book on a sofa in a room.

Table 8 shows the generated examples on WeiboDial. It's obvious that other baselines don't capture the topic word "late" in the post and thus generate irrelevant responses. ARAML can provide a response that is grammatical and closely relevant to the post.
Conclusion
We propose a novel adversarial training framework to deal with the instability problem of current GANs for text generation. To address the instability issue caused by policy gradient, we incorporate RAML into the adversarial training paradigm so that our generator acquires stable rewards. Experiments show that our model outperforms several state-of-the-art GAN baselines with lower training variance on three text generation tasks.
Figure 2: PPL-F/PPL-R curves of ARAML, SeqGAN, LeakGAN, MaliGAN and IRL in the training process. The shaded area indicates the standard deviation at each data point. The dotted vertical lines separate the pre-training and adversarial training phases (50 for ARAML, IRL and MaliGAN; 80 for SeqGAN and LeakGAN).
Figure 3: PPL-F, PPL-R and S-BLEU of ARAML with different temperatures τ ∈ {0.8, 0.85, 0.9, 0.95} on COCO.
Table 6: PPL-F, PPL-R and S-BLEU of ARAML with random sampling (ARAML-R) and constrained sampling (ARAML-C) on COCO.

Model | PPL-F | PPL-R | S-BLEU
ARAML-R | 37.48±0.53 | 37.44±0.56 | 0.752/0.571/0.384
ARAML-C | 26.97±0.55 | 35.79±0.49 | 0.777/0.560/0.366
Table 2: Statistics of COCO, EMNLP2017 WMT and WeiboDial. The average lengths 7.3/10.8 of WeiboDial indicate the lengths of posts and responses, respectively.

Dataset | Amount (Train/Test) | Vocabulary | Length
COCO | 80,000/5,000 | 4,839 | 12.8
EMNLP2017 | 49,996/10,000 | 5,721 | 27.8
WeiboDial | 100,000/5,000 | 7,998 | 7.3/10.8
Table 3: Implementation details of ARAML. G/D/LM indicates the generator / discriminator / language model used in constrained sampling, respectively.
Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.
Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. κ denotes Fleiss' kappa (all are moderate agreement). The scores marked with
Table 8: Examples of generated responses on WeiboDial.

Post: 你还迟到哪?你没这习惯啊 (Are you late? You don't have this habit.)
MLE: 你是不是要好好照顾自己的!!! (Would you take care of yourself!!!)
RAML: 这是我的生活,我也不想干 (This is my life. I don't want to do this.)
DialogGAN: 这社会的温度快到了。周末,要躺床上! (The temperature of the society has nearly reached. I'll lie on my bed on the weekend!)
DPGAN: 我在上班,我就不习惯了 (I'm working and I'm used to doing this.)
ARAML: 嗯,周末就不迟到了! (Hum. I won't be late on the weekend!)
http://statmt.org/wmt17/translation-task.html
Acknowledgments

This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT Joint-Lab for the support.
Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In Proceedings of International Conference on Learning Representations.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171-1179.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155.
Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language GANs falling short. arXiv preprint arXiv:1811.02549.
Arun Tejasvi Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evaluation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 643-653.
Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111.
William Fedus, Ian J. Goodfellow, and Andrew M. Dai. 2018. MaskGAN: Better text generation via filling in the ______. In Proceedings of International Conference on Learning Representations.
Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.
Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In Proceedings of AAAI Conference on Artificial Intelligence, pages 5141-5148.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1499-1508.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.
Jiwei Li, Will Monroe, Tianlin Shi, Sebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2157-2169.
Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Information Processing Systems, pages 3155-3165.
Chia Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.
Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. 2017. Least squares generative adversarial networks. In International Conference on Computer Vision, pages 2813-2821.
Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In Proceedings of AAAI Conference on Artificial Intelligence.
Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Honza Cernock, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association, pages 1045-1048.
Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of 26th International Conference on Computational Linguistics, pages 3349-3358.
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances in Neural Information Processing Systems, pages 1723-1731.
Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252.
Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Assigning personality/profile to a chatting machine for coherent conversation generation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4279-4285.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In Proceedings of International Conference on Learning Representations.
Stanislau Semeniuta, Aliaksei Severyn, and Sylvain Gelly. 2018. On accurate evaluation of GANs for language generation. arXiv preprint arXiv:1806.04936.
Zhan Shi, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2018. Toward diverse text generation with inverse reinforcement learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4361-4367.
Jinyue Su, Jiacheng Xu, Xipeng Qiu, and Xuanjing Huang. 2018. Incorporating discriminator in sentence generation: A Gibbs sampling method. In Proceedings of AAAI Conference on Artificial Intelligence, pages 5496-5503.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 722-729.
A neural conversational model. Oriol Vinyals, Quoc Le, International Conference on Machine Learning Deep Learning Workshop. Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. In International Conference on Ma- chine Learning Deep Learning Workshop.
Topic aware neural response generation. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, Wei-Ying Ma, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. the Thirty-First AAAI Conference on Artificial IntelligenceChen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelli- gence, pages 3351-3357.
Diversity-promoting gan: A crossentropy based generative adversarial network for diversified text generation. Jingjing Xu, Xuancheng Ren, Junyang Lin, Xu Sun, Conference on Empirical Methods in Natural Language Processing. Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018. Diversity-promoting gan: A cross- entropy based generative adversarial network for di- versified text generation. In Conference on Empiri- cal Methods in Natural Language Processing, page 3940-3949.
Seqgan: Sequence generative adversarial nets with policy gradient. Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu, Proceedings of AAAI conference on Artificial Intelligence. AAAI conference on Artificial IntelligenceLantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of AAAI con- ference on Artificial Intelligence, pages 2852-2858.
Adversarially regularized autoencoders. Jake Junbo, Yoon Zhao, Kelly Kim, Alexander M Zhang, Yann Rush, Lecun, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningJunbo Jake Zhao, Yoon Kim, Kelly Zhang, Alexan- der M. Rush, and Yann LeCun. 2018. Adversari- ally regularized autoencoders. In Proceedings of the 35th International Conference on Machine Learn- ing, pages 5897-5906.
Emotional chatting machine: Emotional conversation generation with internal and external memory. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, Bing Liu, Proceedings of AAAI conference on Artificial Intelligence. AAAI conference on Artificial IntelligenceHao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018a. Emotional chatting ma- chine: Emotional conversation generation with in- ternal and external memory. In Proceedings of AAAI conference on Artificial Intelligence.
Commonsense knowledge aware conversation generation with graph attention. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, Xiaoyan Zhu, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. the Twenty-Seventh International Joint Conference on Artificial IntelligenceHao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018b. Com- monsense knowledge aware conversation generation with graph attention. In Proceedings of the Twenty- Seventh International Joint Conference on Artificial Intelligence, pages 4623-4629.
Texygen: A benchmarking platform for text generation models. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, Yong Yu, Proceedings of the 41st International ACM SIGIR Conference on Research Development in Information Retrieval. the 41st International ACM SIGIR Conference on Research Development in Information RetrievalYaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text gener- ation models. In Proceedings of the 41st Interna- tional ACM SIGIR Conference on Research Devel- opment in Information Retrieval, pages 1097-1100.
| [] |
[
"skweak: Weak Supervision Made Easy for NLP",
"skweak: Weak Supervision Made Easy for NLP"
] | [
"Pierre Lison plison@nr.no ",
"Jeremy Barnes jeremycb@ifi.uio.no ",
"Aliaksandr Hubin ",
"\nLanguage Technology Group\nNorwegian Computing Center Oslo\nNorway\n",
"\nDepartment of Mathematics\nUniversity of Oslo\nUniversity of Oslo\n\n"
] | [
"Language Technology Group\nNorwegian Computing Center Oslo\nNorway",
"Department of Mathematics\nUniversity of Oslo\nUniversity of Oslo\n"
] | [
"Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations"
] | We present skweak, a versatile, Python-based software toolkit enabling NLP developers to apply weak supervision to a wide range of NLP tasks. Weak supervision is an emerging machine learning paradigm based on a simple idea: instead of labelling data points by hand, we use labelling functions derived from domain knowledge to automatically obtain annotations for a given dataset. The resulting labels are then aggregated with a generative model that estimates the accuracy (and possible confusions) of each labelling function. The skweak toolkit makes it easy to implement a large spectrum of labelling functions (such as heuristics, gazetteers, neural models or linguistic constraints) on text data, apply them on a corpus, and aggregate their results in a fully unsupervised fashion. skweak is especially designed to facilitate the use of weak supervision for NLP tasks such as text classification and sequence labelling. We illustrate the use of skweak for NER and sentiment analysis. skweak is released under an open-source license and is available at: https://github.com/NorskRegnesentral/skweak | 10.18653/v1/2021.acl-demo.40 | [
"https://www.aclanthology.org/2021.acl-demo.40.pdf"
] | 233,307,206 | 2104.09683 | 3b12773da06b53a0649caf2d04d24c0ffbafed53 |
skweak: Weak Supervision Made Easy for NLP
August 1st -August 6th, 2021
Pierre Lison plison@nr.no
Jeremy Barnes jeremycb@ifi.uio.no
Aliaksandr Hubin
Language Technology Group
Norwegian Computing Center Oslo
Norway
Department of Mathematics
University of Oslo
University of Oslo
skweak: Weak Supervision Made Easy for NLP
Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, August 1st - August 6th, 2021
We present skweak, a versatile, Python-based software toolkit enabling NLP developers to apply weak supervision to a wide range of NLP tasks. Weak supervision is an emerging machine learning paradigm based on a simple idea: instead of labelling data points by hand, we use labelling functions derived from domain knowledge to automatically obtain annotations for a given dataset. The resulting labels are then aggregated with a generative model that estimates the accuracy (and possible confusions) of each labelling function. The skweak toolkit makes it easy to implement a large spectrum of labelling functions (such as heuristics, gazetteers, neural models or linguistic constraints) on text data, apply them on a corpus, and aggregate their results in a fully unsupervised fashion. skweak is especially designed to facilitate the use of weak supervision for NLP tasks such as text classification and sequence labelling. We illustrate the use of skweak for NER and sentiment analysis. skweak is released under an open-source license and is available at: https://github.com/NorskRegnesentral/skweak
Introduction
Despite ever-increasing volumes of text documents available online, labelled data remains a scarce resource in many practical NLP scenarios. This scarcity is especially acute when dealing with resource-poor languages and/or uncommon textual domains. This lack of labelled datasets is also common in industry-driven NLP projects that rely on domain-specific labels defined in-house and cannot make use of pre-existing resources. Large pretrained language models and transfer learning (Peters et al., 2018, 2019; Lauscher et al., 2020) can to some extent alleviate this need for labelled data, by making it possible to reuse generic language representations instead of learning models from scratch.
However, except for zero-shot learning approaches (Artetxe and Schwenk, 2019; Barnes and Klinger, 2019; Pires et al., 2019), they still require some amount of labelled data from the target domain to fine-tune the neural models to the task at hand.

Figure 1: General overview of skweak: labelling functions are first applied on a collection of texts (step 1) and their results are then aggregated (step 2). A discriminative model is finally trained on those aggregated labels (step 3). The process is illustrated here for NER, but skweak can in principle be applied to any type of sequence labelling or classification task.
The skweak framework (pronounced /skwi:k/) is a new Python-based toolkit that provides solutions to this scarcity problem. skweak makes it possible to bootstrap NLP models without requiring any hand-annotated data from the target domain. Instead of labelling data by hand, skweak relies on weak supervision to programmatically label data points through a collection of labelling functions (Lison et al., 2020; Safranchik et al., 2020a). The skweak framework allows NLP practitioners to easily construct, apply and aggregate such labelling functions for classification and sequence labelling tasks. skweak comes with a robust and scalable aggregation model that extends the HMM model of Lison et al. (2020). As detailed in Section 4, the model now includes a feature weighting mechanism to capture the correlations that may exist between labelling functions. The general procedure is illustrated in Figure 1.
Another novel feature of skweak is the ability to create labelling functions that produce underspecified labels. For instance, a labelling function may predict that a token is part of a named entity (but without committing to a specific label), or that a sentence does not express a particular sentiment (but without committing to a specific sentiment category). This ability greatly extends the expressive power of labelling functions and makes it possible to define complex hierarchies between categories: for instance, COMPANY may be a sub-category of ORG, which may itself be a sub-category of ENT. It also enables the expression of "negative" signals that indicate that the output should not be a particular label. Based on our experience applying weak supervision to various NLP tasks, we expect this ability to underspecify output labels to be very useful in NLP applications.
Related Work
Weak supervision aims to replace hand-annotated 'ground truths' with labelling functions that are programmatically applied to data points - in our case, texts - from the target domain (Ratner et al., 2017, 2019; Lison et al., 2020; Safranchik et al., 2020b; Fu et al., 2020). Those functions may take the form of rule-based heuristics, gazetteers, annotations from crowd-workers, external databases, data-driven models trained from related domains, or linguistic constraints. A particular form of weak supervision is distant supervision, which relies on knowledge bases to automatically label documents with entities (Mintz et al., 2009; Ritter et al., 2013; Shang et al., 2018). Weak supervision is also related to models for aggregating crowd-sourced annotations (Kim and Ghahramani, 2012; Hovy et al., 2013; Nguyen et al., 2017).
Crucially, labelling functions do not need to provide a prediction for every data point and may "abstain" whenever certain conditions are not met. They may also rely on external data sources that are unavailable at runtime, as is the case for labels obtained by crowd-workers. After being applied to a dataset, the results of those labelling functions are aggregated into a single, probabilistic annotation layer. This aggregation is often implemented with a generative model connecting the latent (unobserved) labels to the outputs of each labelling function (Lison et al., 2020; Safranchik et al., 2020a). Based on those aggregated labels, a discriminative model (often a neural architecture) is then trained for the task.
Weak supervision shifts the focus away from collecting manual annotations and concentrates the effort on developing good labelling functions for the target domain. This approach has been shown to be much more efficient than traditional annotation efforts. Weak supervision allows domain experts to directly inject their domain knowledge in the form of various heuristics. Another benefit is the possibility to modify/extend the label set during development, which is a common situation in industrial R&D projects.
Several software frameworks for weak supervision have been released in recent years. One such framework is Snorkel (Ratner et al., 2017, 2019), which combines various supervision sources using a generative model. However, Snorkel requires data points to be independent, making it difficult to apply to sequence labelling tasks as done in skweak. Swellshark (Fries et al., 2017) is another framework optimised for biomedical NER. Swellshark is, however, limited to classifying already segmented entities, and relies on a separate, ad-hoc mechanism to generate candidate spans.
FlyingSquid (Fu et al., 2020) presents a novel approach based on triplet methods, which is shown to be fast enough to be applicable to structured prediction problems such as sequence labelling. However, compared to skweak, the aggregation model of FlyingSquid focuses on estimating the accuracy of each labelling function, and is therefore difficult to apply to problems where labelling sources may exhibit very different precision/recall trade-offs. A labelling function may for instance rely on a pattern that has a high precision but a low recall, while the opposite may be true for other labelling functions. Such a difference is lost if accuracy is the only metric associated with each labelling function. Finally, Safranchik et al. (2020b) describe a weak supervision model based on an extension of HMMs called linked hidden Markov models. Although their aggregation model is related to skweak, they provide a more limited choice of labelling functions, in particular regarding the inclusion of document-level constraints or underspecified labels. skweak is also more distantly related to ensemble methods (Sagi and Rokach, 2018), as those methods also rely on multiple estimators whose results are combined at prediction time. However, a major difference lies in the fact that labelling functions only need to be aggregated once in skweak, in order to generate labelled training data for the final discriminative model (Step 3 of Figure 1). This difference is important as labelling functions may be computationally costly to run or rely on external resources that are not available at runtime, as is the case for annotations from crowd-workers.
Labelling functions
Labelling functions in skweak can be grouped into four main categories: heuristics, gazetteers, machine learning models, and document-level functions. Each labelling function is defined in skweak as a method that takes SpaCy Doc objects as inputs and returns text spans associated with labels. For text classification tasks, the span simply corresponds to the full document itself.
The use of SpaCy greatly facilitates downstream processing, as it allows labelling functions to operate on texts that are already tokenised and include linguistic features such as lemma, POS tags and dependency relations. 1 skweak integrates several functionalities on top of SpaCy to easily create, manipulate, label and store text documents.
Heuristics
The simplest type of labelling functions integrated in skweak are rule-based heuristics. For instance, one heuristic to detect entities of type COMPANY is to look for text spans ending with a legal company type (such as "Inc."). Similarly, a heuristic to detect named entities of the (underspecified) type ENT is to search for sequences of tokens tagged as NNPs. Section 6 provides further examples of heuristics for NER and Sentiment Analysis.
The easiest way to define heuristics in skweak is through standard Python functions that take a SpaCy Doc object as input and return labelled spans. For instance, the following function detects entities of type MONEY by searching for numbers preceded by a currency symbol like $ or €:

def money_detector(doc):
    """Searches for occurrences of MONEY entities in text"""
    for tok in doc[1:]:
        if (tok.text[0].isdigit() and tok.nbor(-1).is_currency):
            yield tok.i-1, tok.i+1, "MONEY"

skweak also provides functionalities to easily construct heuristics based on linguistic constraints (such as POS patterns or dependency relations) or the presence of neighbouring words within a given context window.
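For illustration, the company-suffix heuristic mentioned earlier can be written in the same style; this is only a sketch, and the suffix list, the function name and the exact span boundaries are ours rather than part of skweak:

def company_detector(doc):
    """Searches for a title-cased token followed by a legal company suffix."""
    legal_suffixes = {"Inc.", "Corp.", "Ltd.", "LLC"}   # illustrative list
    for tok in doc[1:]:
        if tok.text in legal_suffixes and tok.nbor(-1).is_title:
            yield tok.i - 1, tok.i + 1, "COMPANY"

Such a function can then be wrapped into a heuristics.FunctionAnnotator, as shown in the usage example later in the paper.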
Labelling functions may focus on specific labels and/or contexts and "abstain" from giving a prediction for other text spans. For instance, the heuristic mentioned above to detect companies from legal suffixes will only be triggered in very specific contexts, and abstain from giving a prediction otherwise. More generally, it should be stressed that labelling functions do not need to be perfect and should be expected to yield incorrect predictions from time to time. The purpose of weak supervision is precisely to combine a set of weaker/noisier supervision signals, leading to a form of denoising (Ratner et al., 2019).
Labelling functions in skweak can be constructed from the outputs of other functions. For instance, the heuristic tagging NNP chunks with the label ENT may be refined through a second heuristic that additionally requires the tokens to be in title case, which leads to a lower recall but a higher precision compared to the initial heuristic. The creation of such derived labelling functions through the combination of constraints is a simple way to increase the number of labelling sources and therefore the robustness of the aggregation mechanism. skweak automatically takes care of dependencies between labelling functions in the backend.
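To make this concrete, such a derived function can simply call the base heuristic and filter its output; the sketch below re-implements the idea with plain generator functions (both function names are ours):

def nnp_detector(doc):
    """Labels sequences of NNP tokens with the underspecified type ENT."""
    i = 0
    while i < len(doc):
        if doc[i].tag_ == "NNP":
            j = i
            while j < len(doc) and doc[j].tag_ == "NNP":
                j += 1
            yield i, j, "ENT"
            i = j
        else:
            i += 1

def nnp_title_detector(doc):
    """Refinement: only keeps NNP spans that are also in title case."""
    for start, end, label in nnp_detector(doc):
        if doc[start:end].text.istitle():
            yield start, end, label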
Machine learning models
Labelling functions may also take the form of machine learning models. Typically, those models will be trained on data from other, related domains, thereby leading to some form of transfer learning across domains. skweak does not impose any constraint on the type of model that can be employed.
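As a minimal sketch (not skweak's built-in model wrapper), a pretrained spaCy pipeline from another domain can be reused as a labelling function in the same generator style as the heuristics above:

import spacy

# a pipeline trained on another domain, reused as a labelling function
source_nlp = spacy.load("en_core_web_md")

def pretrained_ner_annotator(doc):
    """Yields the entities predicted by the pretrained pipeline
    (assumes both pipelines share the same tokenisation)."""
    predicted = source_nlp(doc.text)
    for ent in predicted.ents:
        yield ent.start, ent.end, ent.label_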
The support for underspecified labels in skweak greatly facilitates the use of models across datasets, as it makes it possible to define hierarchical relations between distinct label sets -for instance, the coarse-grained LOC label from CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) may be seen as including both the GPE and LOC labels in Ontonotes (Weischedel et al., 2011).
Gazetteers
Another group of labelling functions are gazetteers, which are modules searching for occurrences of a list of words or phrases in the document. For instance, a gazetteer may be constructed using the geographical locations from Geonames (Wick, 2015) or names of persons, organisations and locations from DBPedia (Lehmann et al., 2015). As gazetteers may include large numbers of entries, skweak relies on tries to efficiently search for all possible occurrences within a document. A trie, also called a prefix tree, stores all entries as a tree which is traversed depth-first. This implementation can scale up to very large gazetteers with more than one million entries. The search can be done in two distinct modes: a case-sensitive mode that requires an exact match between the entity in the trie and the occurrence, and a case-insensitive mode that relaxes this constraint.
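To give an intuition of this mechanism, the sketch below shows a strongly simplified trie search over tokenised documents (skweak's own implementation is more complete and efficient); entries are tuples of tokens such as ("Barack", "Obama"), and the function names are ours:

def build_trie(entries):
    """Builds a prefix tree from a list of multi-token entries."""
    trie = {}
    for entry in entries:
        node = trie
        for token in entry:
            node = node.setdefault(token, {})
        node["$end"] = True            # marks the end of a complete entry
    return trie

def trie_search(doc, trie, label):
    """Yields all document spans that exactly match an entry of the trie
    (a case-insensitive mode would lower-case both entries and tokens)."""
    for start in range(len(doc)):
        node = trie
        for i in range(start, len(doc)):
            if doc[i].text not in node:
                break
            node = node[doc[i].text]
            if "$end" in node:
                yield start, i + 1, label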
Document-level functions
Unlike previous weak supervision frameworks, skweak also provides functionalities to create document-level labelling functions that rely on the global document context to derive new supervision signals. In particular, skweak includes a labelling function that takes advantage of label consistency within a document. Entities occurring multiple times through a document are highly likely to belong to the same category (Krishnan and Manning, 2006). One can take advantage of this phenomenon by estimating the majority label of each entity in the document and then creating a labelling function that applies this majority label to each mention.
Furthermore, when introduced for the first time in a text, entities are often referred to univocally, while subsequent mentions (once the entity is salient) frequently rely on shorter references. For instance, the first mention of a person in a text will often take the form of a full name (possibly complemented with job titles), but mentions that follow will often rely on shorter forms, such as the family name. skweak provides functionalities to easily capture such document-level relations.
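The majority-label idea can be sketched as follows, assuming the spans produced by another labelling function are available as a list of (start, end, label) tuples; the function name is ours, and skweak ships its own document-level annotators:

from collections import Counter, defaultdict

def majority_label_annotator(doc, base_spans):
    """Re-applies the majority label of each entity string to all of its
    mentions in the document (a sketch of the idea, not skweak's code)."""
    votes = defaultdict(Counter)
    for start, end, label in base_spans:
        votes[doc[start:end].text][label] += 1
    for start, end, _ in base_spans:
        majority_label = votes[doc[start:end].text].most_common(1)[0][0]
        yield start, end, majority_label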
Aggregation model
After being applied to a collection of texts, the outputs of labelling functions are aggregated using a generative model. For sequence labelling, this model is expressed as a Hidden Markov Model where the states correspond to the "true" (unobserved) labels, and the observations are the predictions of each labelling function (Lison et al., 2020). For document classification, this model reduces to Naive Bayes since there are no transitions.
This generative model is estimated using the Baum-Welch algorithm (Rabiner, 1990), which is a variant of EM that uses the forward-backward algorithm to compute the statistics for the expectation step. For efficient inference, skweak combines Python with C-compiled routines from the hmmlearn package 2 employed for both parameter estimation and decoding.
Probabilistic Model
We assume a list of J labelling functions {λ_1, ..., λ_J}. Each labelling function produces a label for each data point (including a special "void" label denoting that the labelling function abstains from a concrete prediction, as well as underspecified labels). Let {l_1, ..., l_L} be the set of labels that can be produced by the labelling functions.
The aggregation model is represented as a hidden Markov model (HMM), in which the states correspond to the true underlying mutually exclusive class labels {l_1, ..., l_S}. 3 This model has multiple emissions (one per labelling function). For the time being, we assume those emissions to be mutually independent conditional on the latent state (see next section for a more refined model).
Formally, for each token i ∈ {1, ..., n} and labelling function λ_j, we assume a multinomial distribution for the observed labels Y_ij. The parameters of this multinomial are vectors P_{s_i j} ∈ [0,1]^L. The latent states are assumed to have a Markovian dependence structure along the tokens {1, ..., n}. As depicted in Figure 2, this results in an HMM expressed as a dependent mixture of multinomials:
p(λ_j^(i) = Y_ij | P_{s_i j}) = Multinomial(P_{s_i j}),    (1)
p(s_i = k | s_{i-1} = l) = τ_lk,    (2)
where τ_lk ∈ [0,1] are the parameters of the transition matrix controlling, for a given state s_{i-1} = l, the probability of a transition to state s_i = k. The likelihood function includes a constraint that requires latent labels to be observed in at least one labelling function to have a non-zero probability.
This constraint reduces the search space to a few labels at each step, and greatly facilitates the convergence of the forward-backward algorithm.
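For illustration only, the likelihood defined by equations (1) and (2) can be computed with the standard forward algorithm, multiplying the emission probabilities of the J labelling functions at each token; the array names and the initial distribution pi are assumptions of this sketch, not skweak's internal representation:

import numpy as np

def sequence_likelihood(P, tau, pi, Y):
    """P[j, s, k]: probability that function j emits label k in state s;
    tau[l, k]: transition probability from state l to state k;
    pi[s]: initial state distribution; Y[i, j]: index of the label
    observed for token i and labelling function j."""
    n, J = Y.shape
    S = tau.shape[0]
    # emission probability of token i in state s: product over the J functions
    B = np.ones((n, S))
    for i in range(n):
        for s in range(S):
            B[i, s] = np.prod([P[j, s, Y[i, j]] for j in range(J)])
    # forward recursion (skweak additionally restricts the latent states to
    # labels observed by at least one labelling function, omitted here)
    alpha = pi * B[0]
    for i in range(1, n):
        alpha = (alpha @ tau) * B[i]
    return alpha.sum()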
To initialise the model parameters, we run a majority voter that predicts the most likely latent labels based on the "votes" for each label (also including underspecified labels), each labelling function corresponding to a voter. Those predictions are employed to derive the initial transition and emission probabilities, which are then refined through several EM passes.
Performance-wise, skweak can scale up to large collections of documents. The aggregation of all named entities from the MUC-6 dataset (see Section 6.1) based on a total of 52 labelling functions only requires a few minutes of computation time, with an average speed of 1000-1500 tokens per second on a modern computing server.
Weighting
One shortcoming of the above model is that it fails to account for the fact that labelling functions may be correlated with one another, for instance when a labelling function is computed from the output of another labelling function. To capture those dependencies, we extend the model with a weighting scheme - or equivalently, a tempering of the densities associated with each labelling function.
Formally, for each labelling function λ_j and observed label k we determine weights {w_jk} with respect to which the corresponding densities of the labelling functions are annealed. This flattens, to different degrees, the underlying probabilities for the components of the multinomials. The observed process then has a tempered multinomial distribution with a density of the form:
p(λ_j^(i) = Y_ij | P_{s_i j}, w_j) ∝ ∏_{k=1}^{L} (P_{s_i jk})^(Y_ijk · w_jk).    (3)
The temperatures {w_jk} are determined using a scheme inspired by dilution priors widely used in Bayesian model averaging (George, 1999; George et al., 2010). The idea relies on redundancy as the measure of prior information on the importance of features. Formally, we define for each λ_j a neighbourhood N(λ_j) consisting of labelling functions known to be correlated with λ_j, as is the case for labelling functions built on top of another function's outputs. The weights are then specified as:
w_jk = exp( -γ ∑_{l ∈ N(λ_j)} R_jlk ),    (4)
where γ is a hyper-parameter specifying the strength of the weighting scheme, and R_jlk is the recall between labelling functions λ_j and λ_l for label k. Informally, the weight w_jk of a labelling function λ_j producing the label k will decrease if λ_j exhibits a high recall with correlated sources, and is therefore at least partially redundant. The temperatures can also be interpreted as weights of the log-likelihood function: Dimitroff et al. (2013) have shown that, under some regularity conditions, there exist weights that make it possible to maximise the F1 score when optimising the weighted log-likelihood (Field and Smith, 1994).
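The computation of the temperatures in equation (4) can be sketched as follows; the array names and the value of γ are ours:

import numpy as np

def compute_temperatures(recall, neighbours, gamma=0.5):
    """recall[j, l, k]: recall of labelling function j against function l for
    label k; neighbours[j]: indices of the functions correlated with j."""
    J, _, L = recall.shape
    w = np.ones((J, L))
    for j in range(J):
        if len(neighbours[j]) > 0:
            w[j] = np.exp(-gamma * recall[j, neighbours[j], :].sum(axis=0))
    return w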
Example
With skweak, one can apply and aggregate labelling functions with a few lines of code:

import spacy, re
from skweak import heuristics, gazetteers, aggregation, utils

# First heuristic (see Section 3)
lf1 = heuristics.FunctionAnnotator("money", money_detector)

# Detection of years
lf2 = heuristics.TokenConstraintAnnotator("years",
    lambda tok: re.match("(19|20)\d{2}$", tok.text), "DATE")

# Gazetteer with a few names
NAMES = [("Barack", "Obama"), ("Donald", "Trump"), ("Joe", "Biden")]
trie = gazetteers.Trie(NAMES)
lf3 = gazetteers.GazetteerAnnotator("presidents", trie, "PERSON")

# We create a simple text
nlp = spacy.load("en_core_web_md")
doc = nlp("Donald Trump paid $750 in federal income taxes in 2016")

# apply the labelling functions
doc = lf3(lf2(lf1(doc)))

# aggregate them
hmm = aggregation.HMM("hmm", ["PERSON", "DATE", "MONEY"])
hmm.fit_and_aggregate([doc])

# and visualise the result (in Jupyter)
utils.display_entities(doc, "hmm")

skweak's repository provides Jupyter Notebooks with additional examples and explanations.
Experimental Results
We describe below two experiments demonstrating how skweak can be applied to sequence labelling and text classification. We refer the reader to Lison et al. (2020) for more results on NER. 4 It should be stressed that the results below are all obtained without using any gold labels.
Named Entity Recognition
We seek to recognise named entities from the MUC-6 corpus (Grishman and Sundheim, 1996), which contains 318 Wall Street Journal articles annotated with 7 entity types: LOCATION, ORGANIZATION, PERSON, MONEY, DATE, TIME, PERCENT.
Labelling functions
We apply the following functions to the corpus:
• Heuristics based on handcrafted patterns for detecting dates, times, money amounts, percents, and cardinal/ordinal values
• Heuristics for detecting person names, based on honorifics (such as Mr. or Dr.) along with a dictionary of common first names
• One heuristic for detecting company names with legal suffixes (such as Inc.)
• Heuristics for detecting named entities based on casing, NNP part-of-speech tags or compound phrases. Those heuristics produced entities of the underspecified type ENT
• One probabilistic parser (Braun et al., 2017)
• Gazetteers for detecting persons, organisations and locations based on Wikipedia, Geonames (Wick, 2015) and Crunchbase
• Neural models trained on CoNLL 2003 and the Broad Twitter Corpus (Tjong Kim Sang and De Meulder, 2003; Derczynski et al., 2016)
• Document-level labelling functions based on (1) majority labels for a given entity or (2) the label of each entity's first mention

Altogether (including multiple variants of the functions above, such as gazetteers in both case-sensitive and case-insensitive modes), this amounts to a total of 52 labelling functions.
Results
The token and entity-level F1 scores are shown in Table 1. As baselines, we provide the results obtained by aggregating all labelling functions using a majority voter, along with results using the HMM on various subsets of labelling functions. The final line indicates the results using a neural NER model trained on the HMM-aggregated labels (with all labelling functions). The neural model employed in this particular experiment is a transformer architecture based on a large pretrained neural model, RoBERTa (Liu et al., 2019).
See Lison et al. (2020) for experimental details and results for other aggregation methods.
Sentiment Analysis
We consider the task of three-class (positive, negative, neutral) sentiment analysis in Norwegian as a second case study. We use sentence-level annotations 5 from the NoReC_fine dataset (Øvrelid et al., 2020). These are created by aggregating the fine-grained annotations for sentiment expressions such that any sentence with a majority of positive sentiment expressions is assumed to be positive, and likewise with negative expressions. Sentences with no sentiment expressions are labelled neutral.
Labelling functions

Sentiment lexicons: NorSent is the only available lexicon in Norwegian and contains tokens with their associated polarity. We also use MT-translated English lexicons: SoCal (Taboada et al., 2011), the IBM Debater lexicon (Toledo-Ronen et al., 2018) and the NRC word emotion lexicon (NRC emo.) (Mohammad and Turney, 2010). Automatic translation introduces some noise but has been shown to preserve most sentiment information (Mohammad et al., 2016).
Heuristics: For sentences with two clauses connected by 'but', the second clause is typically more relevant to the sentiment, as for instance in "the food was nice, but I wouldn't go back there". We include a heuristic to reflect this pattern.
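A minimal sketch of this heuristic is shown below, assuming a polarity lexicon mapping lemmas to labels; the function name and the English connective "but" are ours (the Norwegian counterpart would use "men"):

def but_heuristic(doc, lexicon):
    """Assigns a sentence-level label based only on the clause that follows
    the last 'but', using a lexicon mapping lemmas to 'positive'/'negative'."""
    but_positions = [tok.i for tok in doc if tok.lower_ == "but"]
    if not but_positions:
        return
    second_clause = doc[but_positions[-1] + 1:]
    polarities = [lexicon[tok.lemma_] for tok in second_clause
                  if tok.lemma_ in lexicon]
    if polarities:
        majority = max(set(polarities), key=polarities.count)
        yield 0, len(doc), majority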
Machine learning models: We create a document-level classifier (Doc-level) by training a bag-of-words SVM on the NoReC dataset (Velldal et al., 2018), which contains 'dice labels' ranging from 1 (very negative) to 6 (very positive). We map predictions to positive (>4), negative (<3), and neutral (3 and 4). We also include two multilingual BERT models: mBERT-review (trained on reviews from 6 languages; https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) and mBERT-SST (trained on the Stanford Sentiment Treebank). The predictions of both models are again mapped to 3 classes (positive, negative, neutral).
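The mapping from dice labels to the three polarity classes is a simple thresholding, sketched below:

def dice_to_polarity(dice):
    """Maps a 1-6 'dice' rating to positive / negative / neutral."""
    if dice > 4:
        return "positive"
    elif dice < 3:
        return "negative"
    return "neutral"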
Results
Table 2 provides results on the NoReC sentence test split. As a baseline, we include a Majority class model which always predicts the neutral class. As upper bounds, we include a linear SVM trained on TF-IDF weighted (1-3)-grams (Ngram SVM), along with Norwegian BERT (NorBERT) models (Kutuzov et al., 2021) fine-tuned on the gold training data. Those two models are upper bounds as they have access to in-domain labelled data, which is not the case for the other models.

Again, we observe that the HMM-aggregated labels outperform all individual labelling functions as well as a majority voter that aggregates those functions. The best performance is achieved by a neural model (in this case NorBERT) fine-tuned on those aggregated labels.
Conclusion
The skweak toolkit provides a practical solution to a problem encountered by virtually every NLP practitioner: how can I obtain labelled data for my NLP task? Using weak supervision, skweak makes it possible to create training data programmatically instead of labelling data by hand. The toolkit provides a Python API to apply labelling functions and aggregate their results in a few lines of code. The aggregation relies on a generative model that expresses the relative accuracy (and redundancies) of each labelling function. The toolkit can be applied to both sequence labelling and text classification, and comes along with a range of novel functionalities such as the integration of underspecified labels and the creation of document-level labelling functions.
Figure 2: Aggregation model using a hidden Markov model with multiple multinomial emissions.
MONEY"]) hmm.fit_and_aggregate([doc]) # and visualise the result (in Jupyter) utils.display_entities(doc, "hmm") skweak's repository provides Jupyter Notebooks with additional examples and explanations.
Table 1: Micro-averaged F1 scores on MUC-6.
Table 2: Macro F1 on sentence-level NoReC data.
For languages not yet supported in SpaCy, the multilanguage model from SpaCy can be applied.
https://hmmlearn.readthedocs.io/
3 Note that the set of observed labels {l_1, ..., l_L} produced by the labelling functions may be larger than the set of latent labels {l_1, ..., l_S}, since those observed labels may also include underspecified labels such as ENT.
See also for specific results on applying weak supervision to biomedical NER.
Data: https://github.com/ltgoslo/norec_sentence
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
Jeremy Barnes and Roman Klinger. 2019. Embedding projection for targeted cross-lingual sentiment: Model comparisons and a real-world study. Journal of Artificial Intelligence Research, 66:691-742.
Jeremy Barnes, Samia Touileb, Lilja Øvrelid, and Erik Velldal. 2019. Lexicon information in neural sentiment analysis: a multi-task learning approach. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 175-186, Turku, Finland. Linköping University Electronic Press.
Daniel Braun, Adrian Hernandez Mendez, Florian Matthes, and Manfred Langen. 2017. Evaluating natural language understanding services for conversational question answering systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 174-185, Saarbrücken, Germany. Association for Computational Linguistics.
Leon Derczynski, Kalina Bontcheva, and Ian Roberts. 2016. Broad Twitter corpus: A diverse named entity recognition resource. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1169-1179, Osaka, Japan. The COLING 2016 Organizing Committee.
Georgi Dimitroff, Laura Toloşi, Borislav Popov, and Georgi Georgiev. 2013. Weighted maximum likelihood loss as a convenient shortcut to optimizing the F-measure of maximum entropy classifiers. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 207-214, Hissar, Bulgaria. INCOMA Ltd. Shoumen, Bulgaria.
C. Field and B. Smith. 1994. Robust estimation: A weighted maximum likelihood approach. International Statistical Review/Revue Internationale de Statistique, pages 405-424.
Jason Fries, Sen Wu, Alex Ratner, and Christopher Ré. 2017. Swellshark: A generative model for biomedical named entity recognition without labeled data.
Daniel Y. Fu, Mayee F. Chen, Frederic Sala, Sarah M. Hooper, Kayvon Fatahalian, and Christopher Ré. 2020. Fast and three-rious: Speeding up weak supervision with triplet methods. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020).
E. George. 1999. Discussion of "Model averaging and model search strategies" by M. Clyde. In Bayesian Statistics 6 - Proceedings of the Sixth Valencia International Meeting.
Edward I. George et al. 2010. Dilution priors: Compensating for model space redundancy. In Borrowing Strength: Theory Powering Applications - A Festschrift for Lawrence D. Brown, pages 158-165. Institute of Mathematical Statistics.
Ralph Grishman and Beth Sundheim. 1996. Message understanding conference-6: A brief history. In Proceedings of the 16th Conference on Computational Linguistics - Volume 1, COLING '96, pages 466-471, USA. Association for Computational Linguistics.
Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.
Hyun-Chul Kim and Zoubin Ghahramani. 2012. Bayesian classifier combination. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of Proceedings of Machine Learning Research, pages 619-627, La Palma, Canary Islands. PMLR.
Vijay Krishnan and Christopher D. Manning. 2006. An effective two-stage model for exploiting non-local dependencies in named entity recognition. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 1121-1128, Sydney, Australia. Association for Computational Linguistics.
Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, and Stephan Oepen. 2021. Large-scale contextualised language modelling for Norwegian. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), pages 30-40, Reykjavik, Iceland (Online). Linköping University Electronic Press, Sweden.
Anne Lauscher, Vinit Ravishankar, Ivan Vulić, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483-4499, Online. Association for Computational Linguistics.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. DBpedia - a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195.
Pierre Lison, Jeremy Barnes, Aliaksandr Hubin, and Samia Touileb. 2020. Named entity recognition without labelled data: A weak supervision approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1518-1533, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics.
Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26-34, Los Angeles, CA. Association for Computational Linguistics.
Saif M. Mohammad, Mohammad Salameh, and Svetlana Kiritchenko. 2016. How translation alters sentiment. Journal of Artificial Intelligence Research, 55(1):95-130.
An Thanh Nguyen, Byron Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 299-309, Vancouver, Canada. Association for Computational Linguistics.
Lilja Øvrelid, Petter Maehlum, Jeremy Barnes, and Erik Velldal. 2020. A fine-grained sentiment dataset for Norwegian. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5025-5033, Marseille, France. European Language Resources Association.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
Lawrence R. Rabiner. 1990. A tutorial on hidden Markov models and selected applications in speech recognition. In Alex Waibel and Kai-Fu Lee, editors, Readings in Speech Recognition, pages 267-296. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. 2017. Snorkel: Rapid training data creation with weak supervision. Proc. VLDB Endow., 11(3):269-282.
Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. 2019. Snorkel: Rapid training data creation with weak supervision. The VLDB Journal.
Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. Transactions of the Association for Computational Linguistics, 1:367-378.
Esteban Safranchik, Shiying Luo, and Stephen Bach. 2020a. Weakly supervised sequence tagging from noisy rules. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):5570-5578.
Esteban Safranchik, Shiying Luo, and Stephen Bach. 2020b. Weakly supervised sequence tagging from noisy rules. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):5570-5578.
Omer Sagi and Lior Rokach. 2018. Ensemble learning: A survey. WIREs Data Mining and Knowledge Discovery, 8(4):e1249.
Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named entity tagger using domain-specific dictionary. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2054-2064, Brussels, Belgium. Association for Computational Linguistics.
Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
Orith Toledo-Ronen, Roy Bar-Haim, Alon Halfon, Charles Jochim, Amir Menczel, Ranit Aharonov, and Noam Slonim. 2018. Learning sentiment composition from sentiment lexicons. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2230-2241, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Erik Velldal, Lilja Øvrelid, Eivind Alexander Bergem, Cathrine Stadsnes, Samia Touileb, and Fredrik Jørgensen. 2018. NoReC: The Norwegian review corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
R. Weischedel, E. Hovy, M. Marcus, M. Palmer, R. Belvin, S. Pradhan, L. Ramshaw, and N. Xue. 2011. OntoNotes: A large training corpus for enhanced processing. In Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Springer.
. Marc Wick, Geonames Ontology. Marc Wick. 2015. Geonames Ontology.
| [
"https://github.com/NorskRegnesentral/skweak",
"https://github.com/ltgoslo/norec"
] |
[
"Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction",
"Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction"
] | [
"Haiyang Yu \nZhejiang University\n\n\nAZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n\n",
"Ningyu Zhang \nZhejiang University\n\n\nAZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n\n",
"Shumin Deng \nZhejiang University\n\n\nAZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n\n",
"Hongbin Ye \nZhejiang University\n\n\nAZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n\n",
"Wei Zhang ",
"Huajun Chen huajunsir@zju.edu.cnlantu.zw@alibaba-inc.com \nZhejiang University\n\n\nAZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n\n"
] | [
"Zhejiang University\n",
"AZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n",
"Zhejiang University\n",
"AZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n",
"Zhejiang University\n",
"AZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n",
"Zhejiang University\n",
"AZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n",
"Zhejiang University\n",
"AZFT Joint Lab for Knowledge Engine 3 Alibaba Group\n"
] | [
"Proceedings of the 28th International Conference on Computational Linguistics"
] | Current supervised relational triple extraction approaches require huge amounts of labeled data and thus suffer from poor performance in few-shot settings. However, people can grasp new knowledge by learning a few instances. To this end, we take the first step to study the few-shot relational triple extraction, which has not been well understood. Unlike previous single-task few-shot problems, relational triple extraction is more challenging as the entities and relations have implicit correlations. In this paper, We propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples, namely, entity pairs and corresponding relations. To be specific, we design a hybrid prototypical learning mechanism that bridges text and knowledge concerning both entities and relations. Thus, implicit correlations between entities and relations are injected. Additionally, we propose a prototype-aware regularization to learn more representative prototypes. Experimental results demonstrate that the proposed method can improve the performance of the few-shot triple extraction. * Equal contribution and shared co-first authorship. † Corresponding author This work is licensed under a Creative Commons Attribution 4.0 International Licence.Licence details: | 10.18653/v1/2020.coling-main.563 | [
"https://www.aclweb.org/anthology/2020.coling-main.563.pdf"
] | 226,222,116 | 2010.16059 | ba6ea470bc4523d894a41ba1d942c6e2243a6d9b |
Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction
Online, December 8-13, 2020
Haiyang Yu
Zhejiang University
AZFT Joint Lab for Knowledge Engine 3 Alibaba Group
Ningyu Zhang
Zhejiang University
AZFT Joint Lab for Knowledge Engine 3 Alibaba Group
Shumin Deng
Zhejiang University
AZFT Joint Lab for Knowledge Engine 3 Alibaba Group
Hongbin Ye
Zhejiang University
AZFT Joint Lab for Knowledge Engine 3 Alibaba Group
Wei Zhang
Huajun Chen huajunsir@zju.edu.cn lantu.zw@alibaba-inc.com
Zhejiang University
AZFT Joint Lab for Knowledge Engine 3 Alibaba Group
Bridging Text and Knowledge with Multi-Prototype Embedding for Few-Shot Relational Triple Extraction
Proceedings of the 28th International Conference on Computational Linguistics
the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020, page 6399
Current supervised relational triple extraction approaches require huge amounts of labeled data and thus suffer from poor performance in few-shot settings. However, people can grasp new knowledge by learning a few instances. To this end, we take the first step to study few-shot relational triple extraction, which has not been well understood. Unlike previous single-task few-shot problems, relational triple extraction is more challenging as the entities and relations have implicit correlations. In this paper, we propose a novel multi-prototype embedding network model to jointly extract the composition of relational triples, namely, entity pairs and corresponding relations. To be specific, we design a hybrid prototypical learning mechanism that bridges text and knowledge concerning both entities and relations. Thus, implicit correlations between entities and relations are injected. Additionally, we propose a prototype-aware regularization to learn more representative prototypes. Experimental results demonstrate that the proposed method can improve the performance of few-shot triple extraction. (* Equal contribution and shared co-first authorship. † Corresponding author. This work is licensed under a Creative Commons Attribution 4.0 International Licence.)
Introduction
Relational Triple Extraction is an essential task in Information Extraction for Natural Language Processing (NLP) and Knowledge Graphs (KG), aimed at detecting a pair of entities along with their relation from unstructured text. For instance, given the sentence "Paris is known as the romantic capital of France.", an ideal relational triple extraction system should extract the relational triple ⟨Paris, Capital of, France⟩, in which Capital of is the relation between Paris and France.
Current works in relational triple extraction typically employ traditional supervised learning based on feature engineering (Kambhatla, 2004; Reichartz et al., 2010) or neural networks (Zeng et al., 2014; Bekoulis et al., 2018a). The main problem with supervised learning models is that they cannot perform well on unseen entity types or relation categories (e.g., a model trained to extract knowledge triples from economic text will struggle when run on scientific articles). As a result, supervised relational triple extraction cannot be extended to unseen entity or relation types. A trivial solution is to annotate more data for the unseen triple types and then retrain the model with the newly annotated data (Zhou et al., 2019). However, this method is usually impractical because of the extremely high cost of annotation.
Intuitively, humans can learn a new concept with limited supervision, e.g., one can detect and classify new entities with 3-5 examples (Grishman et al., 2005). This motivates the setting that we aim at for relational triple extraction: Few-Shot Learning (FSL). In few-shot learning, a trained model rapidly learns a new concept from a few examples while keeping good generalization on observed examples (Vinyals et al., 2016). Hence, if we need to extend relational triple extraction to a new domain, only a few examples are needed to activate the system in the new domain without retraining the model. By formulating this FSL relational triple extraction, we can significantly reduce the annotation and training cost while maintaining highly accurate results.
Figure 1: Illustration of our proposed model for relational triple extraction in the few-shot setting. The texts marked in red are head entities while those in blue are tail entities. Head and tail entity prototypes are connected with the relation prototype.
Though few-shot learning methods have developed fast in recent years, most of these works concentrate on single tasks such as relation extraction and text classification (Geng et al., 2019; Ye and Ling, 2019). However, the effect of jointly extracting entities and relations, the two subtasks of relational triple extraction, in low-resource scenarios is still not well understood. Unlike extraction for each single task, joint entity and relation extraction is more challenging, as entities and relations have implicit correlations which cannot be ignored.
To address this issue, we propose a Multi-Prototype Embedding network (MPE) model to extract few-shot relational triples, inspired by the prototypical network (Snell et al., 2017). To be specific, we utilize two kinds of prototypes regarding both entities and relations. Note that entity pairs and relations have explicit knowledge constraints (Bordes et al., 2013); for example, the Born in relation suggests that the type of the head entity must be PERSON, and vice versa. Based on those observations and motivated by knowledge graph embedding (Xie et al., 2016), we introduce hybrid prototypical learning to explicitly inject knowledge constraints. We first learn entity and relation prototypes and then leverage a translation constraint in hyperspace to regularize the prototype embeddings. Note that such knowledge-aware regularization not only injects prior knowledge from the external knowledge graph, but also leads to smoother and more representative prototypes for few-shot extraction. Moreover, we introduce a prototype-aware regularization considering both the intra-class and inter-class similarities between different prototypes. Experimental results on the FewRel dataset (Han et al., 2018) demonstrate that our approach outperforms baseline models in the few-shot setting.
To summarize, our main contributions include:
• We study the few-shot relational triple extraction problem and provide a baseline for this new research direction. To the best of our knowledge, this is a new branch of research that has not been explored.
• We propose a novel Multi-Prototype Embedding approach with hybrid prototype learning and prototype-aware regularization, which bridges text and knowledge for few-shot relational extraction.
• Extensive experimental results on the FewRel dataset demonstrate the effectiveness of our method.
Related Work
Two main directions have been proposed for relational triple extraction, which comprises two subtasks, entity extraction and relation extraction: pipeline methods (Lin et al., 2016; Trisedya et al., 2019; Wang et al., 2020; Nan et al., 2020) and joint learning methods (Bekoulis et al., 2018b; Nayak and Ng, 2020; Ye et al., 2020). The pipeline approach can be more flexible because it extracts entity pairs and relations sequentially, but this design leads to error propagation. Meanwhile, joint relational triple extraction models solve this problem by extracting triples end-to-end, and the interaction between entities and relations can be realized within the model, making the two mutually enhancing. However, due to the "data-hungry" nature of conventional neural networks, these relational triple extraction models need a large amount of training data. Thus, lots of effort (Yu et al., 2020) has been devoted to few-shot learning. (Han et al., 2018) presents a few-shot relation extraction dataset to promote research on information extraction in few-shot scenarios and adapts several few-shot learning methods (Munkhdalai and Yu, 2017; Satorras and Estrach, 2018; Mishra et al., 2017) to this task. Among these models, the prototypical network (Snell et al., 2017) achieves comparable results on several few-shot learning benchmarks while being simple and effective. This model assumes that each class has a prototype; it derives the prototype of each class from the supporting instances and compares the distance between the query instance and the prototypes under a particular distance metric. In natural language processing, (Gao et al., 2019) first proposes a hybrid attention-based prototypical network for few-shot relation extraction. (Fritzler et al., 2019) proposes to utilize the prototypical network to tackle few-shot named entity recognition. (Hou et al., 2020) proposes a collapsed dependency transfer mechanism and a Label-enhanced Task-Adaptive Projection Network (L-TapNet) for few-shot slot filling. However, all previous few-shot works mainly consider single tasks, while relational triple extraction should take both entities and relations into consideration. To the best of our knowledge, ours is the first approach for few-shot relational triple extraction that addresses both entities and relations.
Our work is motivated by knowledge graph embedding (Xie et al., 2016) methods such as TransE (Bordes et al., 2013) from the Knowledge Graph (KG) literature, where a KG is composed of many relational triples of the form ⟨head, relation, tail⟩. TransE was first proposed by (Bordes et al., 2013) to encode triples into a continuous low-dimensional space based on the translation law h + r ≈ t. Many follow-up works, such as TransH (Wang et al., 2014), DistMult (Yang et al., 2014), and TransR (Lin et al., 2015), propose advanced translation methods by introducing different embedding spaces. In few-shot settings, it is extremely challenging to inject implicit knowledge constraints into the vector space, and such simple yet effective knowledge constraints provide an intuitive solution.
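To make the translation law concrete, here is a minimal, generic sketch of a TransE-style plausibility score (an illustration of the idea, not code from this paper; the choice of norm is an assumption):

```python
import torch

def transe_score(h, r, t, p=2):
    """Score a triple under the translation law h + r ≈ t:
    a smaller distance between (h + r) and t means a more plausible triple."""
    return torch.norm(h + r - t, p=p)

# toy usage with random 50-dimensional embeddings
h, r, t = torch.randn(50), torch.randn(50), torch.randn(50)
print(transe_score(h, r, t))
```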
Methodologies
Problem Definition
In the few-shot relational triple extraction task, we are given two datasets, D_meta-train and D_meta-test. Each dataset consists of a set of samples (x, t), where x is a sentence composed of N words and t indicates the relational triple extracted from x. The form of t is ⟨head, relation, tail⟩, where head and tail are the entity pair associated with the relation. The two datasets have their own relation domain spaces, which are disjoint with each other. In few-shot settings, D_meta-test is split into two parts: D_test-support and D_test-query. Since entity pair types can be determined by the relation categories (e.g., the Born in relation suggests that the type of head might be PERSON and that of tail might be LOCATION), we are able to determine the classification of triples only by specifying the relation categories. Therefore, if D_test-support contains K labeled samples for each of N relation classes, this target few-shot problem is named N-way-K-shot. D_test-query contains the test samples; each should be labeled with one of the N relation classes, and the associated entity pairs also need to be extracted correctly.
It is non-trivial to train a good model from scratch using D_test-support and evaluate its performance on D_test-query, given the limited number of test-support samples (i.e., N × K). Inspired by the important machine learning principle that test and train conditions must match, we also split D_meta-train into two parts, D_train-support and D_train-query, and mimic the few-shot settings at the training stage. In each training iteration, N triple categories are randomly selected from D_train-support, and K support instances are randomly selected from each of the N triple categories. In this way, we construct the train-support set $S = \{s_k^i;\ i = 1, \ldots, N,\ k = 1, \ldots, K\}$, where $s_k^i$ is the k-th instance of triple category i. Meanwhile, we randomly select R samples from the remaining samples of those N triple categories and construct the train-query set $Q = \{(q_j, t_j);\ j = 1, \ldots, R\}$, where $t_j$ is the triple extracted from instance $q_j$. Our goal is to optimize the following function:
$\mathcal{L} = -\frac{1}{R} \sum_{(q,t) \in Q} P(t \mid S, q)$   (1)
where $P(t \mid S, q)$ is the probability of the gold-standard relational triple.
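To illustrate the episodic setup, the sketch below shows one way to sample N-way-K-shot episodes with R query instances; the data layout (a mapping from relation labels to (sentence, triple) examples) and the function name are illustrative assumptions, not the authors' code.

```python
import random

def sample_episode(data_by_relation, n_way=5, k_shot=5, r_query=5, rng=random):
    """Sample one N-way-K-shot episode plus R query instances.

    `data_by_relation` is assumed to map each relation label to a list of
    (sentence, triple) examples, e.g. built from FewRel; this layout is an
    illustrative assumption and each relation must have enough examples.
    """
    relations = rng.sample(sorted(data_by_relation), n_way)
    support, remaining = [], []
    for rel in relations:
        shuffled = rng.sample(data_by_relation[rel], len(data_by_relation[rel]))
        support.extend(shuffled[:k_shot])      # K support instances per class
        remaining.extend(shuffled[k_shot:])    # leftovers become query candidates
    query = rng.sample(remaining, r_query)     # R query instances overall
    return support, query
```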
Framework Overview
In this section, we introduce our proposed Multi-Prototype Embedding (MPE) model for few-shot relational triple extraction. For brevity, we temporarily study a sentence with one relation and its associated entity pair. The framework of our proposed model is shown in Fig. 2 and has three main modules.
• Instance Encoder. We utilize the pre-trained language model BERT (Devlin et al., 2018) to encode sentences, which adopts multi-head attention to learn contextual representations. Note that other encoders such as RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019) can also be applied.
• Hybrid Prototype Learning. After obtaining the entity pair representations of each sentence via sequence labeling, we compute entity prototypes from the support set and then construct the relation prototype based on a knowledge graph constraint, which takes the interaction between entity pairs and relations into account.
• Prototype-Aware Regularization. To further enhance prototype learning, we optimize the positions of the prototypes in the representation space: we pull each prototype closer to its related instances and push prototypes of different types apart.
Instance Encoder
For each sentence x = {w_1, w_2, ..., w_n} in the support or query dataset, where w_i ∈ x is a word token in sentence x, we first construct the input sequence in the form {[CLS], w_1, w_2, ..., w_n, [SEP]} to match the input format of BERT (Devlin et al., 2018), a pre-trained language model that has been shown to be effective in many NLP tasks.
The [CLS] token is used to represent the entire sentence, and [SEP] is the end-of-sentence token. After the multi-head attention (Vaswani et al., 2017) computation, we obtain the contextual sentence embeddings $B = \{h_0, h_1, h_2, \ldots, h_n, h_{n+1}\}$, where $B \in \mathbb{R}^{(n+2) \times d_b}$, $d_b$ is the pre-defined BERT hidden size, $h_0$ is the [CLS] token embedding, $h_{n+1}$ is the [SEP] token embedding, and $h_i, i \in [1, n]$ is the embedding of each token in the sentence. Note that n can differ from the input sentence length because the tokenizer (e.g., byte-pair encoding) might split words into sub-tokens.
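As an illustration (not the authors' code), the instance encoding step can be sketched with the Hugging Face transformers library; the specific BERT checkpoint below is an assumption, since the paper does not name one.

```python
import torch
from transformers import BertTokenizerFast, BertModel

# Checkpoint name is an assumption for illustration only.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

sentence = "Paris is known as the romantic capital of France."
inputs = tokenizer(sentence, return_tensors="pt")   # adds [CLS] ... [SEP]
with torch.no_grad():
    outputs = encoder(**inputs)

B = outputs.last_hidden_state      # shape (1, n + 2, d_b), d_b = 768 for BERT-base
cls_embedding = B[:, 0]            # h_0, later used as the sentence representation
token_embeddings = B[:, 1:-1]      # h_1 ... h_n, used for sequence labeling
```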
Hybrid Prototypical Learning
Entity Prototype Learning. During the training stage, the sentence representations in the support set are first used to construct the entity pair prototypes. We build an entity labeling set {B-Head, I-Head, B-Tail, I-Tail, O, X} to label each token in the sentence, where B-Head and I-Head indicate head entity positions, B-Tail and I-Tail indicate tail entity positions, O marks other tokens, and X marks any remaining fragments of tokens split by the tokenizer. We utilize a Conditional Random Field (CRF) (Lafferty et al., 2001) for sequence labeling, as it models the constraints between labels, which is convenient in few-shot learning scenarios. Let y = {y_0, y_1, y_2, ..., y_n, y_{n+1}}, where y_0 is the [CLS] token label marking the start of the sentence, y_{n+1} is the [SEP] token label marking the end of the sentence, and y_i, i ∈ [1, n] is the label of each sentence token drawn from the entity labeling set. The CRF uses emission and transition scores to combine local and global information; in our model, the score of a sequence is evaluated as:
$\mathrm{Score}(x, y) = \sum_{i=0}^{N+1} E_{y_i, i} + \sum_{j=0}^{N} T_{y_j, y_{j+1}}$   (2)
Let $Y_X$ indicate the exponential space of all possible labelings of sequence x. The probability of a specific labeling $y \in Y_X$ is evaluated as:
$p(y \mid x) = \frac{e^{\mathrm{Score}(x, y)}}{\sum_{\tilde{y} \in Y_X} e^{\mathrm{Score}(x, \tilde{y})}}$   (3)
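A minimal, toy-scale sketch of Eqs. (2)-(3) is shown below; it enumerates the label space explicitly for clarity, whereas a real CRF layer would use the forward algorithm for the normaliser. The tensor shapes and the example label indices are assumptions.

```python
import torch

def crf_sequence_score(E, T, y):
    """Eq. (2): E is (seq_len, n_labels) emission scores, T is (n_labels, n_labels)
    transition scores, y is a list of label indices of length seq_len."""
    emission = sum(E[i, y[i]] for i in range(len(y)))
    transition = sum(T[y[j], y[j + 1]] for j in range(len(y) - 1))
    return emission + transition

def crf_log_prob(E, T, y, all_labelings):
    """Log of Eq. (3), normalising over an explicitly enumerated label space
    (only feasible for toy examples)."""
    scores = torch.stack([crf_sequence_score(E, T, cand) for cand in all_labelings])
    return crf_sequence_score(E, T, y) - torch.logsumexp(scores, dim=0)

# toy usage: 4 tokens over the 6 labels {B-Head, I-Head, B-Tail, I-Tail, O, X}
E, T = torch.randn(4, 6), torch.randn(6, 6)
print(crf_sequence_score(E, T, [4, 0, 4, 2]))
```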
We denote the CRF-based sequence labeling loss as loss_crf and minimize it during the training stage. After the instance encoding and sequence labeling above, we obtain the head and tail representations to match the entities between the query and support sets. Due to the variable length of entity words, we only use the first-token representation of each entity word as the head/tail embedding, as is also done in (Soares et al., 2019). To measure the distance between samples in the query set and the support set, we need to compute a representative vector, called a prototype, for each class t ∈ T in the support set S from its instances' vectors. The original Prototypical Network (Snell et al., 2017) hypothesizes that all instance vectors are equally important, so it aggregates all the representation vectors of the instances of class t_i and then averages over all vectors as follows:
$head_{proto} = \frac{1}{|S_k|} \sum_{head_i \in S_k} head_i, \qquad tail_{proto} = \frac{1}{|S_k|} \sum_{tail_i \in S_k} tail_i$   (4)
where $head_i$ and $tail_i$ are the entity pair representations of each sentence. Intuitively, the instances of a given relation may be quite different. Thus, we propose to adopt a weighted-sum prototype, named the Proto+Att network, inspired by (Gao et al., 2019). The weights are obtained by an attention mechanism according to the representational vector of the query Q as follows:
$head_{proto} = \frac{1}{|S_k|} \sum_{head_i \in S_k} \alpha_h\, head_i, \qquad tail_{proto} = \frac{1}{|S_k|} \sum_{tail_i \in S_k} \alpha_t\, tail_i$   (5)
where
$\alpha_h = \frac{\exp(e_{h_i})}{\sum_{m=1}^{k} \exp(e_{h_m})}, \quad e_{h_i} = head_{proto}^{T} Q, \qquad \alpha_t = \frac{\exp(e_{t_j})}{\sum_{n=1}^{k} \exp(e_{t_n})}, \quad e_{t_j} = tail_{proto}^{T} Q$   (6)
Specifically, we use the Euclidean distance $d(z, z') = \|z - z'\|^2$ to calculate the distance between the entity prototypes and the instances in the query set, and we minimize this distance as loss_entity.
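The following sketch illustrates Eqs. (4)-(6) for the head prototype (the tail prototype is analogous). The attention logits here are computed between each support instance embedding and the query vector, which is one common way to realise the attention in Eq. (6); tensor shapes are assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mean_prototype(entity_embs):
    """Eq. (4): plain average of the support-set head (or tail) embeddings.
    entity_embs: (K, d) tensor of per-instance entity representations."""
    return entity_embs.mean(dim=0)

def attentive_prototype(entity_embs, query_vec):
    """Eqs. (5)-(6): query-aware weighted prototype (Proto+Att variant)."""
    logits = entity_embs @ query_vec            # (K,) attention scores
    weights = F.softmax(logits, dim=0)          # alpha weights
    return (weights.unsqueeze(1) * entity_embs).sum(dim=0)

def squared_euclidean(a, b):
    """Distance used to match query entities against prototypes."""
    return torch.norm(a - b, p=2) ** 2

# toy usage: K = 5 support instances with 768-dimensional embeddings
support_heads, query = torch.randn(5, 768), torch.randn(768)
print(squared_euclidean(attentive_prototype(support_heads, query), query))
```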
Relation Prototype Learning. This module computes the relation prototype associated with each entity pair. On the one hand, the first token [CLS] of the sentence representation can represent the whole sentence, so, analogously to the entity prototype calculation above, we obtain the sentence prototype sent_proto from the sentence representations in the support set.
On the other hand, knowledge graph representation learning inspires us to exploit the translation law h + r ≈ t (Bordes et al., 2013) in a continuous low-dimensional space, where h, r, and t denote the head entity, the relation, and the tail entity, respectively. We therefore use head_proto and tail_proto to construct the knowledge graph prototype kg_proto, which takes the interaction between entities and relations into consideration as follows:
$kg_{proto} = |head_{proto} - tail_{proto}|\, W_r$   (7)
Finally, we combine the sentence representation prototype sent_proto and the prototype derived from the knowledge constraints between entity pairs, kg_proto, to form the relation prototype as follows:
$relation_{proto} = [sent_{proto}; kg_{proto}]$   (8)
where [;] refers to feature vector concatenation. Similar to the entity prototype, we use the Euclidean distance to calculate the distance between the relation prototype relation_proto and the sentences in the query set Q, and we minimize this distance as loss_relation.
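A minimal sketch of Eqs. (7)-(8), with illustrative dimensions and with a linear layer standing in for $W_r$, is given below; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class RelationPrototype(nn.Module):
    """Combine the sentence prototype with a TransE-style knowledge prototype
    built from the head and tail prototypes (Eqs. (7)-(8))."""

    def __init__(self, hidden_dim=768):
        super().__init__()
        # plays the role of W_r in Eq. (7); dimensions are assumptions
        self.W_r = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, sent_proto, head_proto, tail_proto):
        kg_proto = self.W_r(torch.abs(head_proto - tail_proto))   # Eq. (7)
        return torch.cat([sent_proto, kg_proto], dim=-1)          # Eq. (8)

# toy usage
module = RelationPrototype()
rel_proto = module(torch.randn(768), torch.randn(768), torch.randn(768))
print(rel_proto.shape)  # torch.Size([1536])
```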
Prototype-Aware Regularization
Previous few-shot learning approaches (Ye and Ling, 2019) have shown that if the representations of the support instances in a class are far away from each other, it becomes difficult for the derived class prototype to capture the common characteristics of all support instances. Therefore, we propose a prototype-aware regularization to optimize prototype learning. Intuitively, we argue that the representational vectors (e.g., sentence representations/prototypes) of the same class should be close to each other, while the prototypes of different types should be located far from each other in the prototype space. Specifically, we use Euclidean and cosine distances to measure these similarities and optimize the prototype representations as follows:
$loss_{intra} = \frac{1}{NK} \sum_{i=1}^{N} \sum_{k=1}^{K} \left\| x_i^k - p_i^k \right\|_2^2, \qquad loss_{inter} = 1 - \frac{1}{N} \sum_{i=1}^{N} \sum_{j=i+1}^{N} \mathrm{cosine}(p_i, p_j)$   (9)
where $x_i$ is each sentence representation, $p_i$ is the associated prototype, and loss_intra and loss_inter are the two prototype-aware regularization terms. The overall regularization loss is loss_regular = loss_intra + α loss_inter, where α is a hyperparameter.
The overall objective of the optimization is as follows:
$L = loss_{crf} + \beta\, loss_{entity} + \gamma\, loss_{relation} + \delta\, loss_{regular}$
where β, γ and δ are the trade-off parameters.
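The regularization terms of Eq. (9) and the overall objective can be sketched as follows; the normalisation of the inter-class term follows the formula as printed, and the default trade-off weights are placeholders rather than the tuned values.

```python
import torch
import torch.nn.functional as F

def prototype_regularization(instance_embs, prototypes, labels, alpha=1.0):
    """Eq. (9): pull instances towards their class prototype (intra) and push
    different prototypes apart via cosine similarity (inter).
    instance_embs: (M, d); prototypes: (N, d) with N >= 2; labels: (M,) class indices."""
    intra = ((instance_embs - prototypes[labels]) ** 2).sum(dim=1).mean()
    n = prototypes.size(0)
    sims = [F.cosine_similarity(prototypes[i], prototypes[j], dim=0)
            for i in range(n) for j in range(i + 1, n)]
    inter = 1.0 - torch.stack(sims).sum() / n   # 1/N normalisation as printed
    return intra + alpha * inter

def overall_loss(loss_crf, loss_entity, loss_relation, loss_regular,
                 beta=1.0, gamma=1.0, delta=1.0):
    """Overall objective; beta, gamma, delta are the trade-off parameters."""
    return loss_crf + beta * loss_entity + gamma * loss_relation + delta * loss_regular
```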
Experiments
Datasets
We conduct experiments on the public FewRel dataset (Han et al., 2018), which is derived from Wikipedia and annotated by crowd workers. FewRel releases 80 relation categories, and each relation has 700 samples. We reconstruct the FewRel dataset to suit the few-shot relational triple extraction task: the input is a single sentence, and the required output is the relation and the related entity pair, i.e., a complete knowledge triple in the form ⟨head, relation, tail⟩. In our experiments, we randomly select 50 relations for training, 15 for validation, and the remaining 15 relation types for testing. Note that there are no overlapping types between these three splits. We implement our approach with PyTorch (Paszke et al., 2019). We employ minibatch stochastic gradient descent (SGD) (Bottou, 2010) with an initial learning rate of 1e-1. The learning rate is decayed to one third every 2,000 steps, and we train for 30,000 iterations. A dropout rate of 0.2 is used to avoid overfitting. A previous study (Snell et al., 2017) found that models trained on more laborious tasks may achieve better performance than using the same configuration at both training and test stages. Therefore, we set N = 20 to construct the train-support sets for the 5-way and 10-way tasks. Furthermore, in each step we sample 5 instances for the query set. We tune hyperparameters by grid search on the validation set; all of the hyperparameters used in our experiments are listed in Table 1. We consider two types of few-shot relational triple extraction tasks in our experiments: 5-way 5-shot and 10-way 10-shot. We evaluate the entity, relation, and triple performance with the micro F1 score. To be specific, entity performance means that the entity's span and span type are correctly predicted, relation performance means that the relation of the entity pair is correctly classified, and triple performance means that the entity pair and the associated relation are all matched correctly.
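For concreteness, the optimisation schedule described above (SGD with learning rate 1e-1, decayed to one third every 2,000 steps, for 30,000 iterations) could be set up in PyTorch as sketched below; the model and loss are dummies standing in for the MPE network and its episode loss.

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 6)  # placeholder for the MPE network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
# "decayed to one third every 2,000 steps"
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=1.0 / 3.0)

for step in range(30_000):
    dummy_loss = model(torch.randn(4, 768)).pow(2).mean()  # stands in for the episode loss
    optimizer.zero_grad()
    dummy_loss.backward()
    optimizer.step()
    scheduler.step()
```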
Settings
Baselines
We compare our model with supervised and few-shot learning baselines. Supervised Learning. We utilize BERT (Devlin et al., 2018) with fine-tuning (Finetune) as the supervised learning baseline. We fine-tune BERT with a batch size of 16 for 100 iterations.
Few-shot Learning. We apply the Matching Network (MatchingNet) (Vinyals et al., 2016), Relation Network (RelationNet) (Sung et al., 2018), vanilla Prototypical Network (Proto) (Snell et al., 2017), and Prototypical Network with attention (Proto+Att) (Ye and Ling, 2019) as few-shot baselines. We only utilize the sentence prototype sent_proto in the few-shot baselines, which do not take the implicit knowledge into consideration.
Overall Evaluation Results
The first line of Table 2 shows the performance of our model on the FewRel test set. From the results, we observe that: 1) Our MPE approach achieves the best performance in the few-shot setting compared with all baselines (about an absolute 5% improvement over Proto+Att in the 5-way-5-shot setting), which demonstrates that the multi-prototype design leveraging both text and knowledge is effective.
2) Entity recognition performs much worse than relation extraction in few-shot settings, as sequence labeling is more challenging than classification; similar empirical results are also observed by (Hou et al., 2020). More studies are needed to handle the challenging few-shot entity recognition task.
3) Proto+Att achieves better performance than Proto, which reveals that different instances make different contributions to prototype learning.
4) The overall performance is still far from satisfactory, which calls for more future work.
Ablation Study
We further analyze the different modules of our approach through ablation studies, as shown in Table 3. w/o CRF means removing the CRF decoder; w/o Att means removing the attention in prototypical learning; w/o intra means removing the intra-class constraint between instances and prototypes; w/o inter means removing the inter-class constraint between prototypes. From Table 3, we observe that: 1) All ablated variants suffer performance decays, and w/o CRF has a more significant performance decay than w/o Att, w/o intra, and w/o inter, which demonstrates that the CRF is more critical in few-shot relational triple extraction.
2) w/o intra and w/o inter show larger performance drops compared with w/o Att, which also illustrates that the prototype-aware regularization benefits prototype learning.
From Figure 3, we observe that multi_proto achieves better performance than sent_proto and kg_proto, and that kg_proto is more advantageous than sent_proto for entity extraction, which further indicates that such knowledge constraints are beneficial.
In summary, we observe that entity recognition is more difficult than relation extraction in few-shot settings and that the implicit correlation between them contributes to the performance.
Error Analysis
To further analyze the drawbacks of our approach and promote future work on few-shot relational extraction, we randomly select instances and conduct an error analysis, as shown in Table 4. Distracting Context. As instance #1 shows, our approach may fail on ambiguous contexts that are expressed similarly but differ only in the fine-grained type of the entities. We argue that this may be caused by unbalanced learning: models tend to classify sentences with similar contexts into high-frequency relations. Wrong Boundaries. As instance #2 shows, many extracted triples have incorrect boundaries, which further demonstrates the difficulty of entity recognition in the few-shot setting. More future work should focus on few-shot sequence labeling.
Wrong Triples. As instance #3 shows, many extracted triples contain entities that do not exist in the gold standard set. Generally, this mostly happens in sentences with multiple triples. Note that the FewRel dataset does not label those additional triples, so part of those cases are actually correct.
Conclusion and Future Work
In this paper, we study the few-shot relational triple extraction problem and propose a novel multi-prototype embedding network that bridges text representation learning and knowledge constraints. Extensive experimental results show that our model is effective, but challenges remain. These empirical findings shed light on promising future directions, including 1) enhancing entity recognition with effective sequence decoders; 2) studying few-shot relational triple extraction with multiple triples in a single sentence; 3) injecting logic rules to enable robust extraction; and 4) developing few-shot relational triple extraction benchmarks.
Figure 3: Evaluation results of models with sent_proto, kg_proto and multi_proto.
Table 4: Error analysis.
Instance #1: "Delias mandaya is a species of pierine butterfly endemic to Mindanao, in the Philippines." Extracted triple: ⟨Mindanao, contains administrative territorial entities, Philippines⟩; ground truth: ⟨Mindanao, country, Philippines⟩.
Instance #2: "Hamilton Hyde Kellogg was the Fifth Bishop of Minnesota in The Episcopal Church." Extracted triple: ⟨Hamilton Hyde, religion, Church⟩; ground truth: ⟨Hamilton Hyde Kellogg, religion, The Episcopal Church⟩.
Instance #3: "His family has roots in the earliest Catholic presence in the United States west of the Appalachian Mountains; among his relatives are Martin John Spalding and John Lancaster Spalding." Extracted triple: ⟨John Lancaster Spalding, religion, Catholic⟩; ground truth: ⟨Martin John Spalding, religion, Catholic⟩.
Figure 2: Overview of our proposed Multi-Prototype Embedding (MPE) model (instance encoder; head, tail, sentence and KG prototypes; relation prototype matched against the query). Best viewed in color.
Table 2: F1 score on the FewRel test set.
Table 3: Ablation study.
https://www.zhuhao.me/fewrel/
Acknowledgments
We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC U19B2027/91846204/61473260, the national key research program SQ2018YFC000004/2018YFB1402800, and the Alibaba CangJingGe (Knowledge Engine) Research Plan.
References
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018a. Adversarial training for multi-context joint entity and relation extraction. ArXiv, abs/1808.06876.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018b. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Syst. Appl., 114:34-45.
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS.
Léon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In COMPSTAT.
Shumin Deng, Ningyu Zhang, Jiaojian Kang, Yichi Zhang, Wei Zhang, and Huajun Chen. 2020a. Meta-learning with dynamic-memory-based prototypical network for few-shot event detection. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 151-159.
Shumin Deng, Ningyu Zhang, Zhanlin Sun, Jiaoyan Chen, and Huajun Chen. 2020b. When low resource nlp meets unsupervised language model: Meta-pretraining then meta-learning for few-shot text classification (student abstract). In AAAI, pages 13773-13774.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Alexander Fritzler, Varvara Logacheva, and Maksim Kretov. 2019. Few-shot classification in named entity recognition task. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, pages 993-1000.
Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6407-6414.
Ruiying Geng, Binhua Li, Yongbin Li, Xiao-Dan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In EMNLP/IJCNLP.
Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyu's english ace 2005 system description.
Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. arXiv preprint arXiv:1810.10147.
Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. arXiv preprint arXiv:2006.05702.
Luyang Huang, Lingfei Wu, and Lu Wang. 2020. Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward. ArXiv, abs/2005.01159.
Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In ACL.
John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI Conference on Artificial Intelligence.
Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124-2133.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. 2017. A simple neural attentive meta-learner. arXiv preprint arXiv:1707.03141.
Tsendsuren Munkhdalai and Hong Yu. 2017. Meta networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 2554-2563. JMLR.org.
Guoshun Nan, Zhijiang Guo, Ivan Sekulić, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In ACL.
Tapas Nayak and Hwee Tou Ng. 2020. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. ArXiv, abs/1911.09886.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8026-8037.
Frank Reichartz, Hannes Korte, and Gerhard Paass. 2010. Semantic relation extraction with kernels over typed dependency trees. In KDD '10.
Victor Garcia Satorras and Joan Bruna Estrach. 2018. Few-shot learning with graph neural networks.
Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087.
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. arXiv preprint arXiv:1906.03158.
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208.
Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. 2019. Neural relation extraction for knowledge base enrichment. In ACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In NIPS.
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
Zifeng Wang, Rui Wen, Xi Chen, Shao-Lun Huang, Ningyu Zhang, and Yefeng Zheng. 2020. Finding influential instances for distantly supervised relation extraction. arXiv preprint arXiv:2009.09841.
Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Thirtieth AAAI Conference on Artificial Intelligence.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5753-5763.
Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Multi-level matching and aggregation network for few-shot relation classification. ArXiv, abs/1906.06678.
Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei Huang, and Huajun Chen. 2020. Contrastive triple extraction with generative transformer. arXiv preprint arXiv:2009.06207.
Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cícero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. ArXiv, abs/1704.06194.
Haiyang Yu, Ningyu Zhang, Shumin Deng, Zonggang Yuan, Yantao Jia, and Huajun Chen. 2020. The devil is the classifier: Investigating long tail relation classification with decoupling analysis. arXiv preprint arXiv:2009.07022.
Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In COLING.
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Xi Chen, Wei Zhang, and Huajun Chen. 2018. Attention-based capsule networks with dynamic routing for relation extraction. arXiv preprint arXiv:1812.11321.
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen. 2019. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. In Proceedings of the NAACL, pages 3016-3025.
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Jiaoyan Chen, Wei Zhang, and Huajun Chen. 2020a. Relation adversarial network for low resource knowledge graph completion. In Proceedings of The Web Conference 2020, pages 1-12.
Ningyu Zhang, Luoqiu Li, Shumin Deng, Haiyang Yu, Xu Cheng, Wei Zhang, and Huajun Chen. 2020b. Can fine-tuning pre-trained models lead to perfect nlp? a study of the generalizability of relation extraction. arXiv preprint arXiv:2009.06206.
Xin Zhou, Luping Liu, Xiaodong Luo, Haiqiang Chen, Linbo Qing, and Xiaohai He. 2019. Joint entity and relation extraction based on reinforcement learning. IEEE Access, 7:125688-125699.
| [] |
[
"ANALYSING SIMILARITIES BETWEEN LEGAL COURT DOCUMENTS USING NATURAL LANGUAGE PROCESSING APPROACHES BASED ON TRANSFORMERS",
"ANALYSING SIMILARITIES BETWEEN LEGAL COURT DOCUMENTS USING NATURAL LANGUAGE PROCESSING APPROACHES BASED ON TRANSFORMERS"
] | [
"Raphael Souza De Oliveira ",
"Erick Giovani ",
"Sperandio Nascimento ",
"\nStricto Sensu Department\nSurrey Institute for People-Centred AI\nSchool of Computer Science and Electronic Engineering\nFaculty of Engineering and Physical Sciences\nTRT5 -Regional Labour Court of the 5th Region\nSENAI CIMATEC University Center\nSalvador, SalvadorBA, BABrazil, Brazil\n",
"\nStricto Sensu Department\nUniversity of Surrey\nGuildfordUK\n",
"\nSENAI CIMATEC University Center\nSalvadorBABrazil\n"
] | [
"Stricto Sensu Department\nSurrey Institute for People-Centred AI\nSchool of Computer Science and Electronic Engineering\nFaculty of Engineering and Physical Sciences\nTRT5 -Regional Labour Court of the 5th Region\nSENAI CIMATEC University Center\nSalvador, SalvadorBA, BABrazil, Brazil",
"Stricto Sensu Department\nUniversity of Surrey\nGuildfordUK",
"SENAI CIMATEC University Center\nSalvadorBABrazil"
] | [] | Recent advances in Artificial Intelligence (AI) have leveraged promising results in solving complex problems in the area of Natural Language Processing (NLP), being an important tool to help in the expeditious resolution of judicial proceedings in the legal area. In this context, this work targets the problem of detecting the degree of similarity between judicial documents that can be achieved in the inference group, by applying six NLP techniques based on the transformers architecture to a case study of legal proceedings in the Brazilian judicial system. The NLP transformer-based models, namely BERT, GPT-2 and RoBERTa, were pre-trained using a general purpose corpora of the Brazilian Portuguese language, and then were fine-tuned and specialised for the legal sector using 210,000 legal proceedings. Vector representations of each legal document were calculated based on their embeddings, which were used to cluster the lawsuits, calculating the quality of each model based on the cosine of the distance between the elements of the group to its centroid. We noticed that models based on transformers presented better performance when compared to previous traditional NLP techniques, with the RoBERTa model specialised for the Brazilian Portuguese language presenting the best results. This methodology can be also applied to other case studies for different languages, making it possible to advance in the current state of the art in the area of NLP applied to the legal sector.Analysing similarities between legal court documents using transformer-based modelsThe recent history of the Brazilian Justice shows relevant transformations regarding having all its procedural documents in digital format. In 2012, the Brazilian Labour Court implemented the Electronic Judicial Process (acronym in Portuguese for "Processo Judicial Eletrônico" -PJe), and since then, all new lawsuits have become completely digital, reaching 99.9% of cases in progress on this platform in 2020[9].Knowing the limitation of human beings analysing, in an acceptable time, a large amount of data, especially when such data appear not to be correlated, it is possible to help them in the patterns' recognition context through data analysis, computational ans statistical methods. Assuming that textual data has been exponentially increasing, patterns' examination in court documents is becoming pronouncedly challenging.To optimise the procedural progress the Brazilian legal system provides, for ways, such as the procedural economy, the principle of speed, due process in order, and the principle of the reasonable duration of a case to ensure the swift handling of judicial proceedings[10]. Hence, one of the major challenges of the Brazilian Justice is swiftly meeting the growing judicial demand. At present, a specialist triages the documents and distributes the lawsuits to be judged among the team members, configuring a deviation from the main activity of the specialist, which is the production of the draft decisions. This occurrence reinforce a further increase in the congestion rate (an indicator that measures the percentage of cases that remain pending solution by the end of the base-year) and to the decrease in the supply of demand index (acronym in Portuguese for "Índice de Atendimento à Demanda" -IAD -an indicator that measures the percentage of downtime of processes compared to the number of new cases)[9]. | null | [
"https://export.arxiv.org/pdf/2204.07182v3.pdf"
] | 258,615,134 | 2204.07182 | e7f86fcea0851451eb1fee3763072a00581ea3f9 |
ANALYSING SIMILARITIES BETWEEN LEGAL COURT DOCUMENTS USING NATURAL LANGUAGE PROCESSING APPROACHES BASED ON TRANSFORMERS
Raphael Souza De Oliveira
Erick Giovani
Sperandio Nascimento
Stricto Sensu Department
Surrey Institute for People-Centred AI
School of Computer Science and Electronic Engineering
Faculty of Engineering and Physical Sciences
TRT5 -Regional Labour Court of the 5th Region
SENAI CIMATEC University Center
Salvador, BA, Brazil
Stricto Sensu Department
University of Surrey
GuildfordUK
SENAI CIMATEC University Center
SalvadorBABrazil
ANALYSING SIMILARITIES BETWEEN LEGAL COURT DOCUMENTS USING NATURAL LANGUAGE PROCESSING APPROACHES BASED ON TRANSFORMERS
10.5281/zenodo.7686233
legal · natural language processing · clustering · transformers
Recent advances in Artificial Intelligence (AI) have leveraged promising results in solving complex problems in the area of Natural Language Processing (NLP), being an important tool to help in the expeditious resolution of judicial proceedings in the legal area. In this context, this work targets the problem of detecting the degree of similarity between judicial documents that can be achieved in the inferred groups, by applying six NLP techniques based on the transformers architecture to a case study of legal proceedings in the Brazilian judicial system. The NLP transformer-based models, namely BERT, GPT-2 and RoBERTa, were pre-trained using general purpose corpora of the Brazilian Portuguese language, and then were fine-tuned and specialised for the legal sector using 210,000 legal proceedings. Vector representations of each legal document were calculated based on their embeddings, which were used to cluster the lawsuits, calculating the quality of each model based on the cosine similarity between the elements of the group and its centroid. We noticed that models based on transformers presented better performance when compared to previous traditional NLP techniques, with the RoBERTa model specialised for the Brazilian Portuguese language presenting the best results. This methodology can also be applied to other case studies for different languages, making it possible to advance the current state of the art in the area of NLP applied to the legal sector.

Analysing similarities between legal court documents using transformer-based models

The recent history of the Brazilian Justice shows relevant transformations regarding having all its procedural documents in digital format. In 2012, the Brazilian Labour Court implemented the Electronic Judicial Process (acronym in Portuguese for "Processo Judicial Eletrônico" -PJe), and since then, all new lawsuits have become completely digital, reaching 99.9% of cases in progress on this platform in 2020 [9]. Knowing the limitation of human beings in analysing, in an acceptable time, a large amount of data, especially when such data appear not to be correlated, it is possible to help them in the pattern recognition context through data analysis and computational and statistical methods. Assuming that textual data has been increasing exponentially, pattern examination in court documents is becoming pronouncedly challenging. To optimise procedural progress, the Brazilian legal system provides for ways such as procedural economy, the principle of speed, due process of law, and the principle of the reasonable duration of a case, to ensure the swift handling of judicial proceedings [10]. Hence, one of the major challenges of the Brazilian Justice is swiftly meeting the growing judicial demand. At present, a specialist triages the documents and distributes the lawsuits to be judged among the team members, configuring a deviation from the main activity of the specialist, which is the production of the draft decisions. This occurrence reinforces a further increase in the congestion rate (an indicator that measures the percentage of cases that remain pending solution by the end of the base-year) and the decrease in the supply of demand index (acronym in Portuguese for "Índice de Atendimento à Demanda" -IAD -an indicator that measures the percentage of downtime of processes compared to the number of new cases) [9].
Introduction
Recent advances in the area of natural language processing (NLP) have encouraged researchers to carry out scientific research that presents advances in the use of specific NLP techniques to transform short texts into vector representations in which the context and semantics of the words in the document are considered. In this way, recent studies have shown that machine learning algorithms are critical tools capable of solving high-complexity problems using NLP [1]. To this end, it is possible to highlight the works of [2,3,4,5,6,7,8], which, taking into account the context of words, apply techniques of word-embedding generation, a form of vector representation of words, and consequently of documents.
However, no study has been found so far that consolidates a methodology that details the use of various NLP techniques, from the most traditional to the most current ones, using robust texts and that is tested and applied in a real case. In this way, a fertile and unexplored field was found in the legal sector to validate this methodology proposed by this present work. Thus, the use of word embeddings is essential to analyse a large set of unstructured data presented in court.
Thus, using a process grouping mechanism, it is possible to assist with the allocation of work among the advisers of the office to which the process was drawn, with a good rate of similarity between the documents analysed. Furthermore, it contributes to the search for case-law 1 for the judgement of the cases in point, guarding the principle of legal certainty. According to Gomes Canotilho [11], the general principle of legal certainty aims to ensure the individual the right to trust that the legal rulings made on their issues are based upon current and valid legal norms.
In this way, it is possible to develop, test and deploy this methodology based on deep learning for grouping judicial processes, consolidating it for the Brazilian Labour Court from the tests and validations applied.
This work aims, therefore, to use as a baseline the results discussed in the research of Oliveira and Nascimento [12], comparing them with the degree of similarity between the judicial documents achieved in the groups inferred through unsupervised learning, through the application of six Natural Language Processing techniques, which are: (i) BERT (Bidirectional Encoder Representations from Transformers) trained for general purposes for Portuguese (BERT pt-BR); (ii) BERT specialised with the corpus of the Brazilian labour judiciary (BERT Jud); (iii) GPT-2 (Generative Pre-trained Transformer 2) trained for general purposes for Portuguese (GPT-2 pt-BR); (iv) GPT-2 specialised with the corpus of the Brazilian labour judiciary (GPT-2 Jud); (v) RoBERTa (Robustly optimised BERT approach) trained for general purposes for Portuguese (RoBERTa pt-BR); and (vi) RoBERTa specialised with the corpus of the Brazilian labour judiciary (RoBERTa Jud). This consolidates a methodology that was tested on Brazilian labour legal documents, making it possible to use it in other fields of justice, Brazilian or international, and potentially to apply it to documents from other areas of knowledge.
Therefore, as proposed in [12], the degree of similarity indicates the performance of the model and results from the average similarity rate of the document groups, which was based on the cosine similarity between the elements of the group and its centroid and, comparatively, on the average cosine similarity among all the documents of the group.
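For clarity, these two group-quality measures can be written as follows (the notation is ours, introduced only for illustration: sim denotes cosine similarity, G_k is the set of document vectors in group k and c_k its centroid):

```latex
\mathrm{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert\,\lVert v \rVert},
\qquad
S_k^{\text{centroid}} = \frac{1}{|G_k|} \sum_{d \in G_k} \mathrm{sim}(d, c_k),
\qquad
S_k^{\text{pairwise}} = \frac{1}{|G_k|\,(|G_k|-1)} \sum_{\substack{d, d' \in G_k \\ d \neq d'}} \mathrm{sim}(d, d').
```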
To delimit the scope of this research and make a coherent comparison, the same data as in [12] was applied. Thus, the data set extracted contained information from the Ordinary Appeal Brought (acronym in Portuguese for "Recurso Ordinário Interposto" -ROI) of approximately 210,000 legal proceedings 2 . The Ordinary Appeal Brought was used as a reference, as it is regularly the type of document responsible for sending the case to trial in a higher court (2nd degree), hence creating the Ordinary Appeal (acronym in Portuguese for "Recurso Ordinário" -RO). It serves as a free plea, an appropriate appeal against final and terminative judgements proclaimed at first instance, which seeks a review of the court decision drawn up by a hierarchically superior body [13].
For the present work, a literature review was performed on unsupervised machine learning algorithms applied to the legal area using NLP, together with an overview of recent techniques that use Artificial Intelligence (AI) algorithms for word-embedding generation. We then applied selected methods, obtained and compared their results, and, finally, proposed future challenges.
State-of-the-Art Review
More recent research maintains that machine learning algorithms have great potential for high-complexity problem-solving. These machine learning algorithms can be categorised as: (i) supervised; (ii) unsupervised; (iii) semi-supervised; and (iv) via reinforcement [14]. In this research context, we reviewed the literature in search of the most recent productions for the period from 2017 to 2022, through the databases (i) Google Scholar; (ii) Science Direct; and (iii) IEEE Xplore, on unsupervised machine learning algorithms or clustering applied to the legal area using NLP.
The review revealed that, so far, few works deal with the subject, which attests to its complexity. We highlight the research conducted by Oliveira and Nascimento [12], which sought to detect the degree of similarity between the judicial documents of the Brazilian Labour Court through unsupervised learning, using NLP techniques such as (i) term frequency-inverse document frequency (TF-IDF); (ii) Word2Vec with CBoW (Continuous Bag of Words) trained for general purposes for the Portuguese language in Brazil; and (iii) Word2Vec with Skip-gram trained for general purposes for the Portuguese language in Brazil. The study in [15] made an empirical evaluation of pre-trained language models (PLMs) for legal natural language processing in order to verify the effectiveness of the models in this domain, using up to 57 thousand documents.
Expanding the research for the use of Natural Language Processing applied to the judicial area, a systematic review of the literature of the challenges faced by the system of trial prediction was found, which can assist lawyers, judges and civil servants to predict the rate of profit or loss, time of punishment and articles of law applicable to new cases, using the deep learning model. The researchers describe in detail the Empirical Literature on Methods of Prediction of Legal Judgment, the Conceptual Literature on Text Classification Methods and details of the transformers model [16].
Therefore, we then sought to expand the research by removing the restriction to the legal area, which revealed some publications. The work in [17] discusses a content recommendation system based on grouping similar articles with k-means, through the vector transformation of the content of documents with TF-IDF [18]. In [19], the authors performed automatic summarisation of texts using TF-IDF and k-means to determine the sentence groups of the documents used in creating the summary. From these studies, we conclude that TF-IDF is the primary technique used to vectorise textual content and that k-means is the most commonly used algorithm for unsupervised machine learning. We also highlight the research carried out by Santana, Oliveira and Nascimento [20], which proposed the use of a Transformer-based model for Portuguese in the generation of word embeddings of texts published in a Brazilian newspaper, limited to 510 words, for the classification of news.
We assume that choosing the best technique of generating word embeddings requires research, experimentation and comparison of models. Many recent studies prove the feasibility of using word embeddings to improve the quality of the results of AI algorithms for pattern detection and classification, among others. However, most of the searches found use a reduced number of documents and, in addition, limit the content of these documents to a maximum of 510 words.
In 2013, Mikolov et al. proposed Word2Vec Skip-gram and CBoW, two new architectures to calculate vector representations of words, considered, at the time, a reference on the subject [3]. Then, Embeddings from Language Models (ELMo) [21], Flair [22] and context2vec [23], libraries based on the Long Short-Term Memory (LSTM) network [24], created distinct, context-aware word embeddings for each occurrence of a word, which allowed capturing the meaning of the word. LSTM models were widely used for speech recognition, language modelling, sentiment analysis and text prediction and, unlike Recurrent Neural Networks (RNNs), have the ability to forget, remember and update information, thus taking a step ahead of RNNs [25].
As of 2018, new techniques for generating word embeddings emerged, with emphasis on (i) Bidirectional Encoder Representations from Transformers (BERT) [6], a context-sensitive model with architecture based on a Transformers model [26]; (ii) Sentence BERT (SBERT) [27], a "Siamese" BERT model proposed to improve BERT's performance when seeking to obtain the similarity of sentences; (iii) Text-to-Text Transfer Transformer (T5) [28], a framework for treating NLP issues as a text-to-text problem, i.e. template input as text and template output as text; (iv) Generative Pre-Training Transformer 2 (GPT-2), a Transformers-based model with 1.5 billion parameters [7]; and (v) Robustly optimised BERT approach (RoBERTa), a model based on the BERT model, which was trained longer and used a higher amount of data [8].
With this analysis, it was possible to advance the current state of the art of NLP applied to the legal sector, by conducting a comparative study and implementation of Transformer techniques (BERT, GPT-2 and RoBERTa), using general-purpose models in Brazilian Portuguese (pt-BR) and models specialised for the labour judiciary, to carry out the grouping of labour legal processes in Brazil using the k-means algorithm and cosine similarity. It also advances the consolidation of a methodology, validated for the Brazilian labour legal sector, which can be used in every field of justice, Brazilian and international, and in other areas of knowledge.
Methodology
In this section, the protocol necessary to reproduce the results achieved and to analyse them comparatively is presented. For the implementation of the routines used in this study, we used the Python programming language (version 3.6.9) and the same libraries used in the study by Oliveira and Nascimento [12].
The processing flow (pipeline) was composed of the following phases: (i) data extraction; (ii) data cleaning; (iii) generation of word embeddings templates; (iv) calculation of the vector representation of the document; (v) unsupervised learning; and (vi) calculation of the similarity measure. Phases (iii) and (iv) are detailed in the following sections; the other phases are summarised below, and for more details please refer to the work of Oliveira and Nascimento [12].
• data extraction: a dataset containing information from documents of the Ordinary Appeal Interposed (acronym in Portuguese for "Recurso Ordinário Interposto" -ROI) type was extracted from approximately 210,000 legal proceedings;
• data cleaning: two forms of preprocessing were performed: (i) detection of the subjects of the Unified Procedural Table 3 (acronym in Portuguese for "Tabela Processual Unificada" -TPU) contained in the extracted documents; and (ii) cleaning of the contents of the documents using regular expressions, for example, removing HTML tags, replacing the names of the individuals linked to the legal cases by the tag "parteprocesso" (party in the process), replacing the judging bodies (e.g., "Tribunal Regional do Trabalho" [Regional Labour Court]) by the tag "orgaojulgador" (judging body), etc. (a minimal sketch of this cleaning is given after this list);
• unsupervised learning: the technique adopted was the k-means algorithm [18];
• calculation of the similarity measure: the cosine similarity measure was adopted as the tool for measuring the quality of the inferred groups.
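The paper does not list the exact regular expressions used; the sketch below only illustrates the kind of cleaning described in the data-cleaning step above (the tags "parteprocesso" and "orgaojulgador" come from that description, while the patterns themselves are assumptions).

```python
import re

def clean_document(html_text, party_names):
    """Illustrative cleaning of one legal document, following the steps described above."""
    # Remove HTML tags
    text = re.sub(r"<[^>]+>", " ", html_text)
    # Replace the names of individuals linked to the case by the tag "parteprocesso"
    for name in party_names:
        text = re.sub(re.escape(name), "parteprocesso", text, flags=re.IGNORECASE)
    # Replace judging bodies (e.g., "Tribunal Regional do Trabalho ...") by the tag "orgaojulgador"
    text = re.sub(r"tribunal regional do trabalho[^,.;]*", "orgaojulgador", text, flags=re.IGNORECASE)
    # Normalise whitespace
    return re.sub(r"\s+", " ", text).strip()
```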
Generation of word embeddings templates
The usage of vector representation of words, whose numerical values indicate some relationship between words in the text, is an essential technique in the machine learning problem-solving process when the data used by the model is textual.
Thus, in this research, word embeddings generated and shared for the Portuguese language were used, such as (i) the BERT (large) model trained on the brWaC corpus [29], composed of approximately 2.7 billion tokens, and published in the article BERTimbau: Pretrained BERT Models for Brazilian Portuguese [30]; (ii) the GPT-2 (small) model trained on texts extracted from Wikipedia in Portuguese, and published in the article GPorTuguese-2 (Portuguese GPT-2 small): a Language Model for Portuguese text generation (and more NLP tasks...) [31]; and (iii) the RoBERTa (base) model trained on texts extracted from Wikipedia in Portuguese, entitled roberta-pt-br and published on Hugging Face [32].
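These checkpoints can be loaded with the Hugging Face transformers library roughly as sketched below; the repository identifiers are our best guesses for the checkpoints cited above and may differ from the exact versions used.

```python
from transformers import AutoTokenizer, AutoModel

# Assumed Hugging Face identifiers for the three Portuguese checkpoints cited above
checkpoints = {
    "BERT pt-BR (BERTimbau large)": "neuralmind/bert-large-portuguese-cased",
    "GPT-2 pt-BR (GPorTuguese-2 small)": "pierreguillou/gpt2-small-portuguese",
    "RoBERTa pt-BR (roberta-pt-br)": "josu/roberta-pt-br",
}

models = {}
for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModel.from_pretrained(ckpt)  # hidden states are later used as word embeddings
    models[name] = (tokenizer, model)
```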
In addition to these models pre-trained for the Portuguese language, the most recent literature suggests that using embeddings adherent to the context of the problem to be solved may bring better results. Thus, using the 210,000 extracted documents, three specialisation procedures were applied, namely, (i) specialisation of the BERTimbau model; (ii) specialisation of the GPorTuguese-2 model; and (iii) specialisation of the roberta-pt-br model, which are detailed below.
Specialisation of Transformers models
Recent studies show the benefits of applying for learning transfer on generalist models, which, in recent years, has significantly improved the results, reaching the state-of-the-art in NLP [33]. For the specialisation of Transformers models, in addition to cleaning the data, it is also necessary to adjust the data to make the most of its benefits. Of the adjustments made, two deserve highlights: (i) definition of the sentence slot; and (ii) definition of the strategy of "disguising" or masking (MASK) of the sentences' tokens, which are detailed below.
Defining the sentence slot is a fundamental step to enable the usage of specialised data in the learning transfer from a pre-trained model. We were therefore inspired by the strategy proposed in the article Transformers: State-of-the-Art Natural Language Processing [34] in which, for each batch of 1,000 documents, as presented in Figure 1, all content is concatenated and sentences of 128 tokens are created; if the last "sentence" of this batch has fewer than 128 tokens, it is disregarded. Other, more detailed approaches were tested later. In order to reduce the loss of context of words at the edges of the sentence, the proposed approach, entitled Slot N/K, generated "sentences" with N tokens from the concatenation of 1,000 documents, as detailed below and illustrated in Figure 2.
• Initial Slot: "sentence" formed by the first N tokens;
• Intermediate slots: "sentence" formed by N tokens counted from the N-K token of the previous "sentence", where K is the number of return tokens;
• Final Slot: "sentence" formed by the last N tokens. For learning transfer, depending on the Transformers model used, a token masking strategy for each sentence is applied, using Masked Language Models (MLM) for BERT and Causal Language Models (CLM) models for GPT-2 models. While the CLM is trained unidirectionally in order to predict the next word based on the preceding words [35], the MLM has a two-way approach to predict the masked words of the sentence.
Hence, for the transfer of learning of the BERT models, inspired by the article Transformers: State-of-the-Art Natural Language Processing [34], which used a masking rate of 15%, simulations were performed with masking rates of 15% and 25%; the 15% rate achieved the best result in the specialisation of the BERT model in Portuguese with the corpus of the judiciary.
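A minimal sketch of the Slot 128/32 chunking and of the 15% masking configuration is given below; it assumes the Hugging Face tokenizer/data-collator interface and simplifies batching details.

```python
from transformers import DataCollatorForLanguageModeling

def slot_nk(token_ids, n=128, k=32):
    """Split a long token sequence into overlapping 'sentences' of n tokens with k return tokens."""
    if len(token_ids) <= n:
        return [token_ids]
    slots, start, step = [], 0, n - k
    while start + n < len(token_ids):
        slots.append(token_ids[start:start + n])
        start += step
    slots.append(token_ids[-n:])  # final slot: the last n tokens
    return slots

# Each batch of 1,000 cleaned documents is concatenated before slotting, e.g.:
# token_ids = tokenizer(" ".join(batch_of_documents))["input_ids"]
# sentences = slot_nk(token_ids, n=128, k=32)

# 15% token masking for the MLM (BERT/RoBERTa) specialisation;
# for the CLM (GPT-2) specialisation, mlm=False would be used instead.
# collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```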
Calculation of the vector representation of the document
Vector representation techniques of words (word embeddings) such as (i) BERT; (ii) GPT-2; and (iii) RoBERTa need to undergo a transformation in order to, from the word embeddings, calculate the vector representation of the document (document embeddings).
It is initially necessary to detail how to obtain word embeddings for the Transformer techniques. One of the advantages of Transformer techniques over previous word embedding techniques, such as Word2Vec, is the ability to capture the vector representation of a word according to the global context, meaning that the same word can have more than one vector representation. This becomes more evident when highlighting the word "bank" ("banco" in pt-BR) in the following two sentences: (i) I go to the bank ("banco" in pt-BR) to withdraw money; and (ii) I will sit on the bench ("banco" in pt-BR) of the square. With Word2Vec, the vector representation of the word "bank" is unique regardless of the phrase, whereas with BERT, GPT-2 and RoBERTa the word embeddings are different.
Therefore, for Transformers templates, it is necessary to "divide" the entire document into "slots" of sentences.
Considering that, unlike the GPT-2 model, the BERT and RoBERTa models have a limitation of up to 512 tokens per sentence and require that the first and last tokens be special, respectively [CLS] and [SEP], the slot size has been set at 510 tokens per sentence.
As indicated in Section 2 (State-of-the-Art Review), unlike the present work, which has very large documents, current research limits the texts used in the vector transformation to up to 510 words. Thus, we developed strategies to obtain all the word embeddings of the document, in which the words of the generated sentences keep their context according to the complete file. These approaches consist, similarly to what is presented in Figure 2, in producing sentences with 510 tokens, as detailed below:
• Initial Sentence: "sentence" formed by the first 510 tokens;
• Intermediate sentences: "sentence" consisting of 510 tokens counted from token N -K of the previous "sentence", where K was set empirically to value 64;
• Final Sentence: "sentence" formed by the last 510 tokens;
Therefore, the sentences generated from each document have coincident tokens chosen to ensure greater adherence to the token context in the file. To this end, we tested two different approaches: (i) averages of word embeddings of coincident tokens; and (ii) use of the first 32 coincident tokens of the previous sentence and the last 32 coincident tokens of the current sentence, which showed better results in the simulations performed.
Hence, as shown in Figure 4, the return tokens that are coincident between the current and previous sentences are used as follows: (i) the first 32 coincident tokens of the previous sentence (for example, tokens 446 to 477 from Slot 1 exemplified in Figure 4); and (ii) the last 32 coincident tokens of the sentence in question (for example, tokens 478 to 510 from Slot 2 exemplified in Figure 4). It is worth noting that the last sentence slot must contain 510 tokens, as well as the others, with coincident tokens taken as follows: (i) the first half of the coincident tokens of the previous sentence (for example, tokens 590 to 773 from Slot 2 exemplified in Figure 4); and (ii) the second half of the coincident tokens of the sentence in question (for example, tokens 774 to 956 from Slot 3 exemplified in Figure 4). After obtaining the word embeddings, the same technique used in the research by Oliveira and Nascimento was chosen to generate the document embeddings, that is, the average of the word embeddings of the words in the document, weighting them with the TF-IDF.
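A simplified sketch of this stitching and of the TF-IDF-weighted document embedding is given below; it assumes fixed 510-token slots with 64 return tokens and does not reproduce the special half/half handling of the final slot described above.

```python
import numpy as np

def stitch_slot_embeddings(slot_embeddings, k=64):
    """Merge overlapping per-slot token embeddings (stride 510 - k) into one sequence.

    For each k-token overlap, the first k//2 vectors come from the earlier slot and the
    last k//2 from the later slot.
    """
    half = k // 2
    pieces = []
    for i, slot in enumerate(slot_embeddings):
        start = 0 if i == 0 else half                            # drop the first 32 overlap vectors
        end = None if i == len(slot_embeddings) - 1 else -half   # drop the last 32 overlap vectors
        pieces.append(slot[start:end])
    return np.concatenate(pieces, axis=0)

def document_embedding(token_embeddings, tokens, tfidf_weights):
    """TF-IDF-weighted average of the word embeddings of one document."""
    weights = np.array([tfidf_weights.get(tok, 0.0) for tok in tokens])
    if weights.sum() == 0:
        return token_embeddings.mean(axis=0)
    return (weights[:, None] * token_embeddings).sum(axis=0) / weights.sum()
```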
Consequently, to enable an overview, Table 1 summarises the parameters used for training the six models used in this research.
Moreover, after going through the stages of generating the unsupervised machine learning model and calculating the similarity measure, as defined by Oliveira and Nascimento [12] a two-dimensional graphic representation of the vector representation of the documents was generated, using the T-Distributed Stochastic Neighbor Embedding (t-SNE) reduction technique, which minimizes the divergence between two distributions by measuring the similarities between pairs of input objects and the similarities between pairs of corresponding low-dimensional points in the embedding [36].
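The projection itself can be obtained with scikit-learn's t-SNE implementation, roughly as below (the hyperparameters and placeholder inputs are illustrative only).

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

doc_vectors = np.random.rand(500, 768)        # placeholder for the real document embeddings
labels = np.random.randint(0, 40, size=500)   # placeholder for the k-means cluster assignments

points_2d = TSNE(n_components=2, metric="cosine", random_state=42).fit_transform(doc_vectors)
plt.scatter(points_2d[:, 0], points_2d[:, 1], c=labels, s=4, cmap="tab20")
plt.title("Two-dimensional t-SNE projection of the document groups")
plt.show()
```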
Results and Discussions
Applying the methodology as previously detailed, this research shows how natural language processing techniques, in conjunction with machine learning algorithms, are paramount in optimising the operational costs of the judicial process, such as aiding document screening and procedural distribution. It grants working-time optimisation, since it allows the experts' time to be devoted to their core activity.
In order to use the unsupervised learning algorithm k-means, it was necessary to define the ideal K to offer to the clustering algorithm. For this, we used the inertia calculation, which measures how well the data set was grouped by k-means. The inertia is the sum of the squared Euclidean distance from each point to its centroid, and we seek the K with the lowest inertia. Since the tendency is that the higher the K, the lower the inertia, we used the elbow method to find the K where the reduction of inertia begins to decrease. We then evaluated 31 values of K within the range from 30 to 61, with unit intervals, selecting according to the elbow technique the K that generated the best grouping, as shown in Figure 5.
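A sketch of this inertia search with scikit-learn is shown below (doc_vectors is a placeholder for the document embeddings described earlier).

```python
import numpy as np
from sklearn.cluster import KMeans

doc_vectors = np.random.rand(1000, 768)   # placeholder for the real document embeddings

inertias = {}
for k in range(30, 61):                    # 31 candidate values of K
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(doc_vectors)
    inertias[k] = km.inertia_

# The "elbow" is the K after which the decrease in inertia starts to flatten;
# plotting list(inertias.keys()) against list(inertias.values()) makes it visible.
```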
From the best K, the k-means model was trained and, with the grouping performed by the model, we calculated (i) the average similarity between the documents of each group, thus allowing an overview of the distribution of documents in the groups generated by each NLP technique; and (ii) the mean similarity between the group's documents and their centroid, making it possible to indicate which technique achieved the best performance.
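These two measures can be computed from the fitted k-means model roughly as follows (a sketch; it assumes the document vectors, cluster labels and centroids from the previous steps).

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def group_similarities(doc_vectors, labels, centroids):
    """Per-group average cosine similarity to the centroid and average pairwise similarity."""
    per_group = {}
    for g in np.unique(labels):
        members = doc_vectors[labels == g]
        to_centroid = cosine_similarity(members, centroids[g][None, :]).mean()
        pairwise = cosine_similarity(members)
        n = len(members)
        # exclude the diagonal (self-similarity) from the pairwise average
        avg_pairwise = (pairwise.sum() - n) / (n * (n - 1)) if n > 1 else 1.0
        per_group[g] = (to_centroid, avg_pairwise)
    return per_group

# Example: sims = group_similarities(doc_vectors, km.labels_, km.cluster_centers_)
```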
To demonstrate the progress brought by this research, Table 2 presents the results extracted from the study [12], which established a baseline for research on the use of NLP techniques applied to the legal environment for the same purpose. We highlight the Word2Vec Skip-gram pt-BR technique, which presented itself, in that research, as the best option for generating word embeddings aiming to group judicial documents of the Ordinary Appeal Brought type. Consequently, the statistical data of the average similarity between the documents of each group and the average similarity of the group documents for their centroid presented respectively in Table 3 and Table 4, highlighted in bold for the best result value of each metric and projected in the comparative distribution chart (Figure 6 and Figure 7), show that the generalist word embeddings in Portuguese (pt-BR) achieved superior results when compared to the specialised legal corpus word embeddings. The proximity of the results among the generalist models is also noteworthy. However, for the expert model, this proximity was observed only between the BERT Jud models and GPT-2 Jud. When comparing the values presented in Table 3 and Table 4, it is noteworthy that the results in Table 3 are slightly lower in all cases. From this, it is noticeable that the measurement of similarity as in Table 3 might reduce the similarity rate since there may be elements positioned altogether opposite in the group. From Figure 6 and Figure 7, it is also possible to verify that the groupings generated by all techniques are very cohesive, especially in the generalist techniques cases, which created fewer groupings in the range of outliers than the expert techniques.
Since most of the techniques achieved results close to each other, we considered it important to present the time spent on the processing of each Transformer technique, using a computer with 40 physical cores and 196 GB of memory, in the generation of the numerical representation of approximately 210,000 judicial documents of the Ordinary Appeal Brought type. As presented in Table 5, GPT-2 reached an average vectorisation of documents per minute much higher than BERT. However, as expected, RoBERTa further outperformed both BERT and GPT-2, as reported in [8]: performance can be improved when models are trained for more extended periods, with larger batches, over more data, without using the next-sentence prediction strategy, and on longer sequences with dynamically changing masking. In this way, the performance evaluation is fundamental when it comes to documents in large quantity and whose content has many words, bringing more prominence to this research because of this particularity. It is important to stress that the results of this research (Table 4) showed relevant advances in contrast to the results presented in the previous research (Table 2), in which the best average cosine similarity of the elements of the group to the centroid was, respectively, 0.98 and 0.94. So, as a consolidation of the methodology, the advance of approximately 4 points with the use of the Transformers architecture, when comparing the results presented by the research of Oliveira and Nascimento [12] and the present work, may indicate that the sequence of techniques presented by this methodology provides subsidies for its use in other areas of Brazilian and international justice, and it could also be used in other areas of knowledge.
A fact to be analysed in the presented results is that the specialised word embedding techniques showed slightly worse results. This occurs because the general techniques in Portuguese were trained with a much larger corpus than the one used to refine the generalist model. This fact is also reported by Ruder et al. [33], featuring behaviour similar to that found in the present study, in which the corpus of the base model is much larger than the specialised corpus used.
The results achieved by each approach of the entire methodology developed can be visualised in a two-dimensional projection of the groups formed by the nine techniques: (i) TF-IDF; (ii) Word2Vec CBoW ptBR; (iii) Word2Vec Skip-gram ptBR; (iv) BERT ptBR; (v) BERT Jud.; (vi) GPT-2 ptBR; (vii) GPT-2 Jud.; (viii) RoBERTa ptBR; and (ix) RoBERTa Jud., which are made available as supplemental material for comparison purposes. Thus, after a qualitative analysis, it is evident in the images that the groups formed from RoBERTa pt-BR (Figure 8) are much better defined, which corroborates the findings previously explained in this study. Furthermore, it is important to highlight that visualising the two-dimensional projections of the best technique presented by Oliveira and Nascimento [12] (Figure 9) and of the present work (Figure 8) solidifies the importance of consolidating a methodology for pattern detection using natural language processing applied to thousands of documents with very large contents.
Conclusions and Future Works
Applying AI techniques as a tool for pattern detection in legal documents has been proven as a viable and effective solution, showing satisfactory results that can underpin the practice of legal work, even more in certain circumstances where technicians and analysts are overwhelmed by huge volumes of work. In this way, it was possible to develop, test and deploy this methodology based on deep learning for grouping judicial processes, applying it, as a case study, to the Brazilian Labour Court, enabling the use of this methodology in other languages for the legal sector, and potentially for other areas of research as well.

Figure 9: Groups of documents formed by the Word2Vec Skip-gram technique projected in two dimensions using the test dataset [12].
Results showed this methodology to be very promising, due to the noticeable improvement in the Average Similarity Rate of the groups formed from all the NLP techniques applied in this work for clustering legal documents through unsupervised machine learning. In addition, this methodology dealt with documents composed of very large contents, which differs from what has been seen in the scientific literature so far. Of all the techniques evaluated, the RoBERTa pt-BR technique stands out as the best option for the generation of vector representations of documents based on the embeddings for the task of clustering legal documents of the Ordinary Appeal Brought type, highlighting that the use of the best NLP technique to obtain word embeddings for finally generating the document embedding ensured a considerable improvement in the groupings. The BERT pt-BR technique also presented interesting results, since its quantitative rates were slightly better than those of RoBERTa pt-BR, even though it did not reach an execution time as satisfactory as RoBERTa pt-BR. Hence, as detailed in the methodology, this performance characteristic of the RoBERTa models' architecture is key when dealing with thousands of documents with very large contents, which is the case when processing legal documents in courts.
On the other hand, the models specialised with the corpus of the judiciary, in general, did not achieve better results than the generalist ones. Despite this, we believe that the specialisation of BERT, GPT-2 and RoBERTa with a more robust legal corpus could achieve even better results. In such a scenario, the creation of generalist models for a given language's legal area, that is, second-level foundation NLP models for the target language with a focus on the legal sector, would allow the creation and specialisation of new and more robust NLP models focused on the diverse legal areas, leveraging the results achieved, since the language used in the legal environment has its own characteristics.
Furthermore, based on the methodology developed and evaluated in this work, a tool called GEMINI was developed for the Brazilian Labour Court, which assists in the search for jurisprudence, in the distribution of work among advisers and in the detection of possible opportunities to standardise the interpretation of the legal understanding within the courts, that is, to establish Cases of Uniformity of Jurisprudence. This tool was made available for implementation in all of the twenty-four Brazilian Labour Courts and, based on the suggested groupings, helped to speed up the resolution of the processes, as reported by official websites, in Portuguese, of the Brazilian Labour Justice [37,38,39,40].
Therefore, for future work, we suggest deepening the specialisation of BERT, GPT-2 and RoBERTa for the judiciary and evaluating whether the new embeddings generated would improve the overall performance of clustering. In addition, new possibilities arise, such as validating the generated word embeddings for other types of legal documents and areas, and using them in other applications, such as the generation of decision drafts and the classification of documents and processes. It is also worth delving into techniques for transforming texts into their vector representations (word embeddings) more quickly.
Figure 1: Slot N - generation of "sentences" with 128 tokens.
Figure 2: N/K slot.

From the above-detailed approach, simulations were performed with the settings (i) Slot 128/16; (ii) Slot 128/32; (iii) Slot 128/64; (iv) Slot 256/64; (v) Slot 512/64; and (vi) Slot 64/16, comparing them with each other and with the approach proposed by [34]. The Slot 128/32 approach was selected for achieving the best performance in the specialisation of the Transformers model in Portuguese with the corpus of the judiciary (Figure 3).
Figure 3: Slot 128/32.
Figure 4: Word embeddings generation strategy.
Figure 5: Inertia charts constructed by using the elbow method for determining the best number of clusters for each approach.
Table 3: Statistics of the cosine similarity between all elements of the group, where the pt-BR models are generalist and the Jud. models are specialised. The best results are highlighted in bold.

Figure 6: Comparison chart of the distribution of the average similarity between the group documents. The more cohesive the boxes and the fewer outliers, the better.
Figure 7: Comparison chart of the distribution of the average similarity of the group documents to their centroid. The more cohesive the boxes and the fewer outliers, the better.
Figure 8: Groups of documents formed by the RoBERTa pt-BR technique projected in two dimensions using the test dataset.
Table 1: Parameters used for training the six models.

|                    | BERTimbau / BERT Jud.                            | GPorTuguese-2 / GPT-2 Jud.                      | roberta-pt-br / RoBERTa Jud.                    |
| Data for training  | brWaC corpus / 210 K ROIs                        | Wikipedia in Portuguese / 210 K ROIs            | Wikipedia in Portuguese / 210 K ROIs            |
| Tokenization type  | word-piece                                       | byte-level BPE                                  | byte-level BPE                                  |
| Model details      | 24-layer, 1024-hidden, 16-heads, 340M parameters | 12-layer, 768-hidden, 12-heads, 117M parameters | 12-layer, 768-hidden, 12-heads, 125M parameters |
| Token mask type    | Masked                                           | Causal                                          | Masked                                          |
Table 2: Statistical data extracted from the work "Clustering by Similarity of Brazilian Legal Documents Using Natural Language Processing Approaches" [12].

| Type                    | Groups | Mean  | Std.  | Min.  | 25%   | 50%   | 75%   | Max.  |
| TF-IDF                  | 49     | 0.624 | 0.172 | 0.247 | 0.502 | 0.586 | 0.164 | 0.964 |
| Word2Vec CBoW ptBR      | 59     | 0.947 | 0.063 | 0.764 | 0.935 | 0.979 | 0.991 | 0.999 |
| Word2Vec Skip-gram ptBR | 34     | 0.948 | 0.061 | 0.796 | 0.925 | 0.976 | 0.992 | 0.999 |
Table 4: Statistics of the cosine similarity of the group elements to the centroids, where the pt-BR models are generalist and the Jud. models are specialised. The best results are highlighted in bold.

| Transformer Model | Groups | Mean | Std. | Min. | 25% | 50% | 75% | Max. |
Table 5: Average processed documents per minute for each model, with the best result value highlighted in bold.

| Transformer Model | Average number of documents processed per minute |
| BERT ptBR         | 6.45  |
| BERT Jud          | 9.62  |
| GPT-2 ptBR        | 29.40 |
| GPT-2 Jud         | 29.03 |
| RoBERTa ptBR      | 55.31 |
| RoBERTa Jud       | 53.73 |
Given the above, among all the techniques evaluated, the RoBERTa pt-BR technique was the best option for generating word embeddings for judicial document clustering of the Ordinary Appeal Brought type. Although the BERT pt-BR technique achieved a slightly better result (a difference of less than 0.01), it was computationally inefficient in document processing when compared to the GPT-2 pt-BR and RoBERTa pt-BR techniques.
1 A legal term meaning a set of previous judicial decisions following the same line of understanding.
2 https://www.doi.org/10.5281/zenodo.7686233
3 Labour Justice Unified Procedural Table. Available at: https://www.tst.jus.br/web/corregedoria/tabelas-processuais
Acknowledgements
The authors thank the Reference Centre on Artificial Intelligence and the Supercomputing Centre for Industrial Innovation, both from SENAI CIMATEC, as well as the SENAI CIMATEC/NVIDIA AI Joint Lab, and the Surrey Institute for People-Centred AI at the University of Surrey, for all the scientific and technical support. The authors also thank the Regional Labour Court of the 5th Region for making datasets available to the scientific community and contributing to research and technological development.
A survey on machine learning models for natural language processing (nlp). Wahab Khan, Ali Daud, Jamal Nasir, Tehmina Amjad, 432016Wahab Khan, Ali Daud, Jamal Nasir, and Tehmina Amjad. A survey on machine learning models for natural language processing (nlp). 43:95-113, 10 2016.
Using dynamic embeddings to improve static embeddings. Yile Wang, Leyang Cui, Yue Zhang, 11Yile Wang, Leyang Cui, and Yue Zhang. Using dynamic embeddings to improve static embeddings. 11 2019.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, G Corrado, Jeffrey Dean, Proceedings of Workshop at ICLR. Workshop at ICLRTomas Mikolov, Kai Chen, G.s Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. Proceedings of Workshop at ICLR, 2013, 01 2013.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, 14Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. volume 14, pages 1532-1543, 01 2014.
Enriching word vectors with subword information. Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, Transactions of the Association for Computational Linguistics. 5Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 07 2016.
Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. 10 2018.
Language models are unsupervised multitask learners. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Roberta: A robustly optimized bert pretraining approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 07 2019.
Relatório analítico anual da justiça em números 2021. CNJ. 112021CNJ. Relatório analítico anual da justiça em números 2021, 11 2021.
A duração dos processos no judiciário: aplicação dos princípios inerentes e sua eficácia no processo judicial. Gabriela Costa Salum, Direito Processual Civil. Rio Grande do Sul/Brazil145Âmbito JurídicoGabriela Costa Salum. A duração dos processos no judiciário: aplicação dos princípios inerentes e sua eficácia no processo judicial. In Direito Processual Civil, volume 145, Rio Grande do Sul/Brazil, 2016. Âmbito Jurídico.
Direito constitucional e teoria da constituição. José Joaquim Gomes Canotilho, Coimbra, AlmedinaJosé Joaquim Gomes Canotilho. Direito constitucional e teoria da constituição. Coimbra, Almedina, 2003.
Clustering by Similarity of Brazilian Legal Documents Using Natural Language Processing Approaches. Raphael Oliveira, Erick Giovani Sperandio Nascimento, Raphael Oliveira and Erick Giovani Sperandio Nascimento. Clustering by Similarity of Brazilian Legal Documents Using Natural Language Processing Approaches. 09 2021.
Os recursos na justiça do trabalho. Fernando José Vianna Oliveira, Fernando José Vianna Oliveira. Os recursos na justiça do trabalho, 6 2011.
Artificial intelligence and machine learning based legal application: The state-of-the-art and future research trends. Riya Sil, Bharat Bhushan, Arun Majumdar, 102019Riya Sil, Bharat Bhushan, and Arun Majumdar. Artificial intelligence and machine learning based legal application: The state-of-the-art and future research trends. pages 57-62, 10 2019.
On the effectiveness of pre-trained language models for legal natural language processing: An empirical study. Dezhao Song, Sally Gao, IEEE Access. 10Baosheng He, and Frank SchilderDezhao Song, Sally Gao, Baosheng He, and Frank Schilder. On the effectiveness of pre-trained language models for legal natural language processing: An empirical study. IEEE Access, 10:75835-75858, 2022.
A meta analysis of attention models on legal judgment prediction system. G Sukanya, J Priyadarshini, International Journal of Advanced Computer Science and Applications. 1222021G. Sukanya and J. Priyadarshini. A meta analysis of attention models on legal judgment prediction system. International Journal of Advanced Computer Science and Applications, 12(2), 2021.
An Unsupervised Content-Based Article Recommendation System Using Natural Language Processing. S Renuka, G S S Raj Kiran, Palakodeti Rohit, 012021S. Renuka, G. S. S. Raj Kiran, and Palakodeti Rohit. An Unsupervised Content-Based Article Recommendation System Using Natural Language Processing, pages 165-180. 01 2021.
Some methods for classification and analysis of multivariate observations. J B Macqueen, Proc. of the fifth Berkeley Symposium on Mathematical Statistics and Probability. L. M. Le Cam and J. Neymanof the fifth Berkeley Symposium on Mathematical Statistics and ProbabilityLos AngelesUniversity of California Press1J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In L. M. Le Cam and J. Neyman, editors, Proc. of the fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281-297, Los Angeles, 1967. University of California Press.
Unsupervised automatic text summarization of konkani texts using k-means with elbow method. D' Jovi, Uzzal Silva, Sharma, International Journal of Engineering Research and Technology. 132380Jovi D'Silva and Uzzal Sharma. Unsupervised automatic text summarization of konkani texts using k-means with elbow method. International Journal of Engineering Research and Technology, 13:2380, 09 2020.
Text classification of news using transformerbased models for portuguese. Isabel N Santana, Raphael S Oliveira, Erick G Nascimento, Journal of Systemics, Cybernetics and Informatics. 205Isabel N. Santana, Raphael S. Oliveira, and Erick G. Nascimento. Text classification of news using transformer- based models for portuguese. Journal of Systemics, Cybernetics and Informatics, 20(5):33-59, Oct 2022.
Deep contextualized word representations. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, 01Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. pages 2227-2237, 01 2018.
Contextual string embeddings for sequence labeling. Alan Akbik, Duncan Blythe, Roland Vollgraf, 08Alan Akbik, Duncan Blythe, and Roland Vollgraf. Contextual string embeddings for sequence labeling. 08 2018.
Learning generic context embedding with bidirectional lstm. Oren Melamud, Jacob Goldberger, Ido Dagan, 2Oren Melamud, Jacob Goldberger, and Ido Dagan. context2vec: Learning generic context embedding with bidirectional lstm. pages 51-61, 01 2016.
Long Short-Term Memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural Computation. 98Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 11 1997.
Fundamentals of recurrent neural network (rnn) and long short-term memory (lstm) network. Alex Sherstinsky, Physica D: Nonlinear Phenomena. 404132306Alex Sherstinsky. Fundamentals of recurrent neural network (rnn) and long short-term memory (lstm) network. Physica D: Nonlinear Phenomena, 404:132306, 03 2020.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin, 06Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. 06 2017.
Sentence-bert: Sentence embeddings using siamese bert-networks. Nils Reimers, Iryna Gurevych, Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. pages 3973-3983, 01 2019.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter Liu, 102019Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 10 2019.
The brwac corpus: A new open resource for brazilian portuguese. Jorge Wagner, Rodrigo Wilkens, Marco Idiart, Aline Villavicencio, Jorge Wagner, Rodrigo Wilkens, Marco Idiart, and Aline Villavicencio. The brwac corpus: A new open resource for brazilian portuguese. 05 2019.
BERTimbau: Pretrained BERT Models for Brazilian Portuguese. Fábio Souza, Rodrigo Nogueira, Roberto Lotufo, 10Fábio Souza, Rodrigo Nogueira, and Roberto Lotufo. BERTimbau: Pretrained BERT Models for Brazilian Portuguese, pages 403-417. 10 2020.
Gportuguese-2 (portuguese gpt-2 small): a language model for portuguese text generation (and more nlp tasks. Pierre Guillou, 2020Pierre Guillou. Gportuguese-2 (portuguese gpt-2 small): a language model for portuguese text generation (and more nlp tasks...). 2020.
. José Nascimento Da Silva, roberta-pt-brJosé Nascimento da Silva. roberta-pt-br, 2021. huggingface https://huggingface.co/josu/ roberta-pt-br.
Transfer learning in natural language processing. Sebastian Ruder, Matthew Peters, Swabha Swayamdipta, Thomas Wolf, 012019Sebastian Ruder, Matthew Peters, Swabha Swayamdipta, and Thomas Wolf. Transfer learning in natural language processing. pages 15-18, 01 2019.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Canwen Xu, Teven Scao, Sylvain Gugger, and Alexander Rush01Sam Shleifer, Patrick Platen, Clara Ma, Yacine Jernite, Julien PluThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Scao, Sylvain Gugger, and Alexander Rush. Transformers: State-of-the-art natural language processing. pages 38-45, 01 2020.
Gradient-based adversarial attacks against text transformers. Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela, 042021Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. Gradient-based adversarial attacks against text transformers, 04 2021.
Visualizing data using t-SNE. Laurens Van Der Maaten, Geoffrey Hinton, Journal of Machine Learning Research. 9Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
Gabinetes do trt5 participam de projeto-piloto que utiliza inteligência artificial. Trt5, Gemini, 03TRT5. Gemini: Gabinetes do trt5 participam de projeto-piloto que utiliza inteligência artificial, 03 2020.
. Csjt, Gemini, 10CSJT. Gemini, 10 2020.
pje trará nova versão com um módulo do projeto gemini, que tem participação do trt5. Trt5, 11TRT5. O pje trará nova versão com um módulo do projeto gemini, que tem participação do trt5, 11 2020.
Projetos desenvolvidos no trt-15 são incorporados ao pje 2. TRT15. 112021TRT15. Projetos desenvolvidos no trt-15 são incorporados ao pje 2.7, 11 2021.
| [] |
[
"Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation",
"Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation"
] | [
"Lingzhi Wang \nThe Chinese University of Hong Kong\nHong KongChina\n\nMoE Key Laboratory of High Confidence Software Technologies\nChina\n",
"Mrinmaya Sachan \nDepartment of Computer Science\nETH Zurich\n\n",
"Xingshan Zeng ",
"Kam-Fai Wong kfwong@se.cuhk.edu.hk3msachan@ethz.ch \nThe Chinese University of Hong Kong\nHong KongChina\n\nMoE Key Laboratory of High Confidence Software Technologies\nChina\n"
] | [
"The Chinese University of Hong Kong\nHong KongChina",
"MoE Key Laboratory of High Confidence Software Technologies\nChina",
"Department of Computer Science\nETH Zurich\n",
"The Chinese University of Hong Kong\nHong KongChina",
"MoE Key Laboratory of High Confidence Software Technologies\nChina"
] | [
"Association for Computational Linguistics: EACL 2023"
] | Conversational tutoring systems (CTSs) aim to help students master educational material with natural language interaction in the form of a dialog. CTSs have become a key pillar in educational data mining research. A key challenge in CTSs is to engage the student in the conversation while exposing them to a diverse set of teaching strategies, akin to a human teacher, thereby, helping them learn in the process. Different from previous work that generates responses given the strategies as input, we propose to jointly predict teaching strategies and generate tutor responses accordingly, which fits a more realistic application scenario. We benchmark several competitive models on three dialog tutoring datasets and propose a unified framework that combines teaching response generation and pedagogical strategy prediction, where a self-distillation mechanism is adopted to guide the teaching strategy learning and facilitate tutor response generation. Our experiments and analyses shed light on how teaching strategies affect dialog tutoring. | 10.48550/arxiv.2302.13496 | [
"https://www.aclanthology.org/2023.findings-eacl.170.pdf"
] | 257,219,356 | 2302.13496 | f084a9278770bbf4a3b927848b5e9a45ffc1d981 |
Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation
May 2-6, 2023
Lingzhi Wang
The Chinese University of Hong Kong
Hong KongChina
MoE Key Laboratory of High Confidence Software Technologies
China
Mrinmaya Sachan
Department of Computer Science
ETH Zurich
Xingshan Zeng
Kam-Fai Wong kfwong@se.cuhk.edu.hk3msachan@ethz.ch
The Chinese University of Hong Kong
Hong KongChina
MoE Key Laboratory of High Confidence Software Technologies
China
Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation
Association for Computational Linguistics: EACL 2023
May 2-6, 2023
Conversational tutoring systems (CTSs) aim to help students master educational material with natural language interaction in the form of a dialog. CTSs have become a key pillar in educational data mining research. A key challenge in CTSs is to engage the student in the conversation while exposing them to a diverse set of teaching strategies, akin to a human teacher, thereby, helping them learn in the process. Different from previous work that generates responses given the strategies as input, we propose to jointly predict teaching strategies and generate tutor responses accordingly, which fits a more realistic application scenario. We benchmark several competitive models on three dialog tutoring datasets and propose a unified framework that combines teaching response generation and pedagogical strategy prediction, where a self-distillation mechanism is adopted to guide the teaching strategy learning and facilitate tutor response generation. Our experiments and analyses shed light on how teaching strategies affect dialog tutoring.
Introduction
Decades of research effort (Carbonell, 1970; Richardson, 1988; Brown, 2009) has been put into building intelligent tutoring systems (ITSs). An important feature of these systems is the ability to customize the instructional activities and strategies based on the learner's characteristics and needs (Keleş et al., 2009). Conversational tutoring systems (CTSs), which aim to offer automated tutoring through natural language dialog, are a key pillar of ITS research. Earlier work in CTSs was based on conventional techniques such as Bayesian techniques with rule engines (Jeon and Su, 2010; Weragama and Reye, 2014) and hybrid neural networks (Kose and Arslan, 2017; Stasaski et al., 2020). While various advanced neural approaches have been applied to open-domain (Sordoni et al., 2015; Serban et al., 2016; Xing et al., 2017) and task-oriented dialogue systems (Zhao et al., 2017; Lei et al., 2018; Peng et al., 2020), conversational tutoring systems have not benefited from the development of these technologies (Macina et al., 2023). Human teachers use a number of nuanced teaching strategies in the classroom during interactions with students; these strategies are tailored to keep the students engaged in the conversation and learning knowledge efficiently. We show some examples of teaching strategies and interactions between the tutor and the student in Fig. 1. Previous work has attempted to model these teaching strategies in different ways -- e.g., Suresh et al. (2019) contributed a teaching strategy classification model and Stasaski et al. (2020) proposed a response generation model based on given teaching strategies for the next response.
In this work, we benchmark several neural dialog models on three conversational tutoring datasets, CIMA (Stasaski et al., 2020), TSCC (Caines et al., 2020) and TalkMoves (Suresh et al., 2019(Suresh et al., , 2022, and contribute a unified framework based on pretrained language models, where teaching strat-egy prediction and response generation are jointly trained. As predicting a teaching strategy merely by the historical context is more difficult than when we are also given the target tutor response, we also propose a pedagogy distillation mechanism that allows teaching strategy prediction to learn from the soft labels which are produced by the prediction with target response. The soft labels learned from the target response provides the model knowledge about various interrelationships between teaching strategies that hard labels lack. This approach is believed to be able to alleviate the learning difficulty (Hinton et al., 2015), which is particularly important, especially when the data imbalance and scarcity issues are severe -often the case in conversational tutoring data.
In summary, we are the first to benchmark 1 several competitive models for conversational tutoring systems on all three datasets that are currently available. In addition, we propose a unified framework that can predict teaching strategies and generate tutoring responses accordingly, enhanced by a self-distillation mechanism. Our experiments validate the positive effect of teaching strategies in guiding generation and the importance of first predicting a strategy and then generating the response accordingly.
Related Work
A classical Intelligent Tutoring System generally has three modules (Brown, 2009; Polson and Richardson, 2013): (i) an expert module that includes the knowledge that the student wants to learn (Carter, 2014); (ii) a student module that can adjust to the level of the student (e.g., primary/middle school, non-native/native speaker), the student's knowledge deficiencies, etc.; and (iii) a pedagogical module that focuses on the strategies of teaching. In the expert module, the knowledge is usually domain specific, such as computer programming (Costello, 2012), mathematics (Grawemeyer et al., 2016; Suresh et al., 2022), Italian (Stasaski et al., 2020), or English (Caines et al., 2020). Many technologies have been used in the expert module, such as Bayesian techniques with rule engines (Jeon and Su, 2010; Weragama and Reye, 2014) and hybrid neural networks (Kose and Arslan, 2017; Stasaski et al., 2020). For the pedagogical module, to the best of our knowledge, there are only three publicly available datasets that provide pedagogy information: CIMA (Stasaski et al., 2020), TSCC (Caines et al., 2020) and TalkMoves (Suresh et al., 2022), all of which are based on a single pedagogy. There has been very little work on neural dialog tutoring. Two exceptions to this are Suresh et al. (2022), who propose a simple BiLSTM-based module to predict the pedagogy of the next sentence that teachers are meant to say, and Stasaski et al. (2020), who use various generative models to generate responses given the pedagogical strategies. In contrast, in this work, we propose a joint approach for modelling the pedagogy and response generation that outperforms the previous approaches using a novel pedagogy distillation mechanism.

1 The code can be found in https://github.com/Lingzhi-WANG/TutorSystem
Our Model
Problem Formulation
Our conversational tutoring system takes a conversation context C and a teaching strategy list D as input. C is formalized as a sequence of turns $\{t_1, t_2, ..., t_{n_c}\}$, where $n_c$ is the number of turns. $t_i$ ($1 \le i \le n_c$) denotes the i-th turn of the conversation, and we use $w_i$ to denote the word tokens it contains. The teaching strategy list D covers all possible strategies and contains $n_d$ teaching strategies. Our model first outputs one or several strategy labels, each $y^d \in \{1, 2, ..., n_d\}$, to indicate which teaching strategy to use. Then the generation module generates a target response $y^t = (y^t_1, \ldots, y^t_{n_t})$ based on the predicted strategy.
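To make this formulation concrete, the following minimal sketch (not taken from the paper; all field and strategy names are illustrative, with the strategy names borrowed from the paper's running example) shows the inputs and outputs such a system operates on:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TutoringExample:
    """One instance of the conversational tutoring task."""
    context_turns: List[str]    # t_1 ... t_{n_c}: the dialog history C
    strategy_labels: List[int]  # indices into the strategy list D (ground truth)
    target_response: str        # the tutor's next utterance y^t

# The strategy list D enumerates all n_d teaching strategies (names illustrative).
STRATEGIES = ["Hint", "Question", "Correction", "Confirmation"]

example = TutoringExample(
    context_turns=["Student: how to say under in Italian?"],
    strategy_labels=[STRATEGIES.index("Hint")],
    target_response='"Is under the" is "e sotto il".',
)
```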
Conversational Tutoring System (CTS)
PLM-based Generation Module. The generation module follows a Transformer (Vaswani et al., 2017) sequence-to-sequence framework. As the currently available tutoring datasets are quite small (containing about 3k conversations), we choose to finetune pretrained language models (PLMs) to alleviate data scarcity and enhance context modeling. We finetune BART (Lewis et al., 2020) and multilingual BART (mBART) (Liu et al., 2020) models for our generation module. During finetuning, we concatenate the utterances $t_i$ ($1 \le i \le n_c$) in context C, separated by appended $\langle eos\rangle$ tokens, in their chronological order as input, and maximize the probability of the ground-truth target sequence. The whole process is summarized as follows:
$$H_c = \mathrm{Transformer\_Encoder}(w_c) \qquad (1)$$
$$y^t_k = \mathrm{Transformer\_Decoder}(y^t_{<k}, H_c) \qquad (2)$$
$$\mathcal{L}^{gen}_{target} = \sum_{k=1}^{n_t} -\log\big(p(y^t_k \mid y^t_{<k}, H_c)\big) \qquad (3)$$
where $w_c = [w_1; \langle eos\rangle; w_2; \ldots; w_{n_c}; \langle mask\rangle]$, and $y^t_{<k}$ represents the target tokens before $y^t_k$. We add $\langle mask\rangle$ at the end of the context to simulate the operation in pre-training (Schick and Schütze, 2021).
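As a concrete illustration, the encoder input $w_c$ could be assembled as in the following sketch; the exact special-token strings depend on the tokenizer and are assumptions here rather than the authors' exact implementation:

```python
def build_encoder_input(context_turns, eos_token="</s>", mask_token="<mask>"):
    """Concatenate dialog turns in chronological order, separated by <eos>,
    and append a <mask> token to mimic the BART pre-training format."""
    body = f" {eos_token} ".join(context_turns)
    return f"{body} {mask_token}"

# build_encoder_input(["how to say under in Italian?", '"Is under the" is "e sotto il".'])
# -> 'how to say under in Italian? </s> "Is under the" is "e sotto il". </s> <mask>'
```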
In addition, to summarize the representation of the conversation context, we employ an additional source sequence decoder as follows:
$$y^s_k = \mathrm{Transformer\_Decoder}(y^s_{<k}, H_c) \qquad (4)$$
$$\mathcal{L}^{gen}_{source} = \sum_{k=1}^{n_s} -\log\big(p(y^s_k \mid y^s_{<k}, H_c)\big) \qquad (5)$$
where $y^s_{<k}$ represents the source tokens before $y^s_k$.

Teaching Strategy Prediction Module. We use the representation of the $\langle eos\rangle$ token (i.e., the final token) produced by the decoder as the representation for teaching strategy prediction, denoted as $h_{\langle eos\rangle}$. This is fed into a two-layer MLP for prediction:
$$r^d = W_2 \, \alpha(W_1 h_{\langle eos\rangle} + b_1) + b_2 \qquad (6)$$
where $W_1$, $W_2$, $b_1$ and $b_2$ are learnable parameters, and $\alpha$ is a non-linear activation function. The output representation $r^d$ is an $n_d$-dimensional vector, and the probability of each teaching strategy in list D is computed based on $r^d$:
$$p(y^d = j) = \mathrm{softmax}(r^d)_j \qquad (7)$$
where $y^d$ denotes the predicted strategy and $j \in \{1, 2, ..., n_d\}$.
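A possible PyTorch realization of this prediction head (Eqs. 6-7) is sketched below; the hidden size of the intermediate layer and the choice of GELU as the activation $\alpha$ are assumptions, since the paper only specifies a two-layer MLP with a non-linear activation:

```python
import torch
import torch.nn as nn

class StrategyPredictionHead(nn.Module):
    """Two-layer MLP over the decoder's <eos> representation (Eqs. 6-7)."""
    def __init__(self, hidden_dim: int, n_strategies: int):
        super().__init__()
        self.fc1 = nn.Linear(hidden_dim, hidden_dim)    # W_1, b_1
        self.fc2 = nn.Linear(hidden_dim, n_strategies)  # W_2, b_2
        self.act = nn.GELU()                            # activation alpha (assumed)

    def forward(self, h_eos: torch.Tensor) -> torch.Tensor:
        r_d = self.fc2(self.act(self.fc1(h_eos)))       # Eq. 6
        return torch.softmax(r_d, dim=-1)               # Eq. 7: p(y^d = j)
```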
We denote the $h_{\langle eos\rangle}$ produced by source and target generation as $h^s_{\langle eos\rangle}$ and $h^t_{\langle eos\rangle}$, respectively. With $h^s_{\langle eos\rangle}$, we predict the teaching strategy without knowing the corresponding content, while with $h^t_{\langle eos\rangle}$, we summarize the teaching strategy based on the target content. Obviously, predicting with $h^s_{\langle eos\rangle}$ is what we need, but this is quite challenging. Thus we design a self-distillation mechanism which uses the prediction based on $h^t_{\langle eos\rangle}$ for enhancing the generation model.
Teaching Strategy Enhancement with Distillation. We denote the predicted probability for each strategy (derived with Eq. 7) using $h^s_{\langle eos\rangle}$ and $h^t_{\langle eos\rangle}$ as $p_s(\cdot)$ and $p_t(\cdot)$, respectively. Our self-distillation is defined as guidance from $p_t(\cdot)$ to $p_s(\cdot)$:
$$\mathcal{L}_{sd} = -\sum_{j=1}^{n_d} p_s(y^d = j) \log p_t(y^d = j) \qquad (8)$$
where we define $p_t(\cdot)$ as the teacher distribution and $p_s(\cdot)$ as the student distribution, and Eq. 8 makes the student distribution similar to the teacher distribution. In this way, our teaching strategy prediction model can also learn from the soft labels produced by the target sequence.
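The distillation term of Eq. 8 could be implemented as follows; detaching the teacher distribution from the computation graph is an implementation assumption not stated in the paper:

```python
import torch

def self_distillation_loss(p_student: torch.Tensor,
                           p_teacher: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    """L_sd = -sum_j p_s(j) * log p_t(j), as written in Eq. 8.
    p_student / p_teacher: (batch, n_d) strategy distributions."""
    p_teacher = p_teacher.detach()  # assumed: no gradient through the teacher
    return -(p_student * torch.log(p_teacher + eps)).sum(dim=-1).mean()
```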
Multiple Teaching Strategies Guided Generation. To guide the response generation with teaching strategies, we regard the teaching strategies as prompt tokens and place them at the beginning of the generation. In this way, the target tokens are generated autoregressively according to the given teaching strategy. Specifically, during training, we use the ground-truth strategy (denoted as $d_c$; it is masked in distillation to avoid information leakage) for teacher forcing (i.e., $y^t_0 = d_c$ in Eq. 3); during inference, we use the predicted strategies produced by the prediction module as prompt tokens.
To enable guidance from multiple teaching strategies, we define a threshold $\theta$: all strategies satisfying $p_s(y^d = j) \ge \theta$ ($1 \le j \le n_d$) are used to guide the response generation. To that end, we compute a weighted sum over the embeddings of those strategies, with weights given by their predicted probabilities from Eq. 7, and use the result as the prompt to guide the generation.
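A sketch of this thresholding and weighting step is shown below; the fallback to the single most probable strategy when none clears the threshold, and the embedding-lookup interface, are assumptions rather than details given in the paper:

```python
import torch

def strategy_prompt_embedding(p_s: torch.Tensor,
                              strategy_embeddings: torch.Tensor,
                              theta: float = 0.3) -> torch.Tensor:
    """p_s: (n_d,) predicted strategy probabilities (Eq. 7).
    strategy_embeddings: (n_d, d) one embedding per teaching strategy.
    Returns a (d,) prompt vector used to start the decoder."""
    mask = (p_s >= theta).float()
    if mask.sum() == 0:                 # fall back to the single best strategy (assumed)
        mask = torch.zeros_like(p_s)
        mask[p_s.argmax()] = 1.0
    weights = p_s * mask
    weights = weights / weights.sum()   # renormalize over the selected strategies
    return weights @ strategy_embeddings
```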
Learning Objectives
The learning objective for teaching strategy prediction is defined as follows:
$$\mathcal{L}_{pred} = -\big(\log p_s(y^d = d_c) + \log p_t(y^d = d_c)\big) + \lambda \cdot \mathcal{L}_{sd} \qquad (9)$$
where $d_c$ is the ground-truth strategy for context C and $\lambda$ is a hyper-parameter that controls the weight of the self-distillation loss. Our model is jointly trained on both generation and prediction, with the overall objective summarized as:
$$\mathcal{L} = \mathcal{L}^{gen} + \gamma \cdot \mathcal{L}_{pred} = \mathcal{L}^{gen}_{target} + \delta \cdot \mathcal{L}^{gen}_{source} + \gamma \cdot \mathcal{L}_{pred} \qquad (10)$$
where δ and γ are tradeoff hyper-parameters.
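Putting the pieces together, the joint objective could be assembled roughly as below, with the default tradeoff values reported in the experimental setup; the individual loss terms are assumed to be computed elsewhere (e.g., as in the sketches above):

```python
def total_loss(loss_gen_target, loss_gen_source,
               loss_pred_source, loss_pred_target, loss_sd,
               lam=1.0, gamma=1.0, delta=0.2):
    """Overall objective of Eq. 10, with the prediction loss of Eq. 9."""
    loss_pred = loss_pred_source + loss_pred_target + lam * loss_sd  # Eq. 9
    loss_gen = loss_gen_target + delta * loss_gen_source             # generation part of Eq. 10
    return loss_gen + gamma * loss_pred
```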
Experimental Setup
Datasets. We use three datasets for our experiments: CIMA (Stasaski et al., 2020), TSCC (Caines et al., 2020) and TalkMoves (Suresh et al., 2019, 2022). CIMA contains one-to-one conversations that focus on teaching students to translate a phrase from English to Italian. TSCC focuses on teaching English to eight non-native English-speaking students. TalkMoves is constructed from transcripts of math classrooms.
Parameter Setting. Our implementation is based on Fairseq (Ott et al., 2019). We split the data 8:1:1 for training, validation and test. All hyperparameters are chosen by grid search based on validation performance. We use the BART-Base 2 and mBART-Large 3 models to initialize our model, respectively. BART-Base has 6 encoder and 6 decoder layers with a hidden dimension of 768, while mBART-Large has 12 encoder and 12 decoder layers with a hidden dimension of 1024. The parameter sizes of the two models initialized with BART and mBART are 199M and 816M, respectively.
We use one NVIDIA RTX 3090 GPU to train our model. During training, we set the max tokens of each batch to 1024 (for BART, or 512 for mBART) with an update frequency of 4. We adopt the Adam optimizer (Kingma and Ba, 2015) with the learning rate selected from {1e-4, 5e-5, 2e-5, 1e-5} and warm-up updates selected from {200, 500, 1000}, followed by a polynomial decay scheduler. Dropout (Srivastava et al., 2014) with a rate selected from {0.2, 0.4} and $L_2$ regularization with a weight of 0.01, as well as early stopping based on validation performance, are used to alleviate overfitting. We set the tradeoff values among the losses to λ = 1.0, γ = 1.0 and δ = 0.2. During inference, the prediction threshold is θ = 0.3 and the beam size is set to 5.
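For reference, the training configuration described above can be summarized as the following dictionary; the values are those reported in this section, while the exact Fairseq flags used to realize them are not reproduced here:

```python
TRAIN_CONFIG = {
    "max_tokens_per_batch": 1024,     # 512 when fine-tuning mBART-Large
    "update_freq": 4,
    "optimizer": "adam",
    "lr_grid": [1e-4, 5e-5, 2e-5, 1e-5],
    "warmup_updates_grid": [200, 500, 1000],
    "lr_scheduler": "polynomial_decay",
    "dropout_grid": [0.2, 0.4],
    "weight_decay": 0.01,             # L2 regularization
    "loss_weights": {"lambda": 1.0, "gamma": 1.0, "delta": 0.2},
    "inference": {"strategy_threshold": 0.3, "beam_size": 5},
}
```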
Experimental Results
Teaching Strategy Prediction Results
We report accuracy and macro F1 scores for the teaching strategy prediction task in Table 1. We find that prediction based on the target tutor response performs much better than prediction based only on the source context (comparing BART † and BART), which indicates that prediction with the target content is much easier and also validates the motivation for our self-distillation mechanism. With the help of the proposed distillation mechanism, our models with pretrained BART or mBART achieve the best performance for prediction based on the source context.
Tutor Response Generation Results
We then report case-sensitive detokenized sacreBLEU (Post, 2018) and BERTScore (Zhang et al., 2019) for tutor response generation in Table 2.
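A minimal sketch of how these two metrics could be computed with the commonly used sacrebleu and bert-score packages is given below; whether the authors used these exact entry points is an assumption:

```python
import sacrebleu
from bert_score import score as bert_score

def evaluate_generation(hypotheses, references):
    """hypotheses: list of generated tutor responses.
    references: list of gold tutor responses (one per hypothesis)."""
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])  # case-sensitive, detokenized
    _, _, f1 = bert_score(hypotheses, references, lang="en")
    return {"sacreBLEU": bleu.score, "BERTScore_F1": f1.mean().item()}
```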
Three Evaluation Settings. We show results in three settings in Table 2. "W/O TS" means we do not include teaching strategy (TS) labels in training and testing. "With Golden TS" means providing ground-truth TS labels for training and testing. "Need TS Prediction" means models have to predict TS labels at test time and generate the follow-up tutor responses based on the predicted TS labels.
Analysis of Generation Results. From Table 2, we can draw the following main observations.
• Teaching strategy shows positive effects on generation. Comparing the results in the "W/O TS" and "With Golden TS" settings, we observe that guidance from golden teaching strategies generally improves generation performance, which validates the effect of teaching strategies in guiding generation. Besides, our models further improve over their corresponding baselines (e.g., Our Model(BART) vs. BART), which should result from the joint learning of generation and strategy prediction.
• Successful guidance requires accurate teaching strategies. Comparing results in "With Golden TS" and "Need TS Predict", we find that most models perform worse when they need to predict strategies first, especially the baselines with poor strategy prediction performance (see the results for BiLSTM and Transformer in Table 1). This shows that guidance from inappropriate strategies can even hurt performance, which raises the need for accurate prediction in real-world applications; our proposed method alleviates this gap significantly.
Figure 3: Our CTS generates different responses when given different teaching strategies (in red). Example exchange:
Student: how to say under in Italian?
CTS: [Hint] "Is under the" is "e sotto il". / [Question] Do you know how to say box?
Student: La pianta e accanto al congilio giallo.
CTS: [Correction] You're very close, but remember that the adjective follows the noun. / [Confirmation] Yes, that's right!
Effects of Teaching Strategy
We explore how the teaching strategy affects generation in Fig. 3. We feed our conversational tutoring system (CTS) with different teaching strategies and find that the CTS generates entirely different responses for the same context input. This also validates that the teaching strategy is important for a CTS and that strategizing before teaching is essential.
Conclusion
In this work, we benchmarked neural models on various conversational tutoring datasets and proposed a self-distillation based model that jointly trains a teaching strategy prediction model and a response generation model. Experiments on three conversational tutoring datasets show that our model outperforms various standard baselines by a significant margin. Finally, we presented a case study that demonstrates the importance of strategizing before teaching.
Limitations
There are only three publicly available datasets (CIMA, TSCC and TalkMoves) for the conversational tutoring task, and they are quite small (fewer than 10K instances). There are also significant data imbalance problems in these datasets - some teaching strategies occur much more frequently than others. These small and imbalanced datasets bring many challenges to this task, but we did not discuss these issues in our paper due to the space limit. Besides, there is no standard teaching strategy annotation scheme, which prevents us from combining these three datasets for more interesting experimental analyses. Another limitation of our work is that we only evaluate our approaches with automatic generation metrics. In the future, it would be interesting to also evaluate the model with learning-related evaluations.
Figure 1: Examples of teaching strategies and interactions between tutor and student. Teaching strategies in Figure 1(b) are in red.
Figure 2: Our overall framework. The self-distillation leverages predictions based on the target to improve predictions based on the source. The enhanced strategy prediction is further utilized to facilitate the generation.
Table 1: Teaching strategy prediction results (in %). † indicates the prediction is based on the target tutor response. The best and second-best results in each column are in bold and underlined respectively.

                      CIMA          TSCC          TalkMoves
Models                Acc    F1     Acc    F1     Acc    F1
BART                  64.3   31.5   59.1   11.6   55.2   31.1
BART †                82.3   57.1   64.4   18.9   75.9   50.5
Frequency             62.7   15.4   58.4    4.1   52.5   11.5
BiLSTM                57.3   30.1   56.5   11.2   50.1   25.6
Transformer           63.3   33.9   57.2   16.2   53.6   30.7
Our Model(BART)       69.7   39.2   60.6   17.4   57.8   35.5
Our Model(mBART)      70.4   39.8   60.4   17.0   59.6   37.6
                      CIMA          TSCC          TalkMoves
Models                BLEU   BERT   BLEU   BERT   BLEU   BERT
W/O TS
BiLSTM                9.08   72.6   1.04   69.0   0.43   73.2
Transformer           10.1   72.2   1.53   70.4   0.74   74.9
BART                  6.77   71.9   1.27   71.2   0.85   78.0
mBART                 10.6   70.9   1.96   68.6   2.95   78.1
With Golden TS
BiLSTM                8.61   71.8   1.32   69.1   1.42   75.8
Transformer           11.2   72.8   1.99   69.9   2.35   77.4
BART                  9.17   70.8   1.47   68.6   2.93   78.0
mBART                 11.1   72.3   1.57   69.5   3.38   75.7
Our Model(BART)       10.8   71.4   2.02   70.6   3.18   78.0
Our Model(mBART)      12.1   73.8   2.93   72.6   5.47   79.7
Need TS Predict
BiLSTM                7.65   69.8   0.68   68.2   0.48   74.7
Transformer           8.04   68.6   0.79   69.3   2.05   76.8
BART                  7.64   69.5   1.13   69.4   1.49   73.8
mBART                 7.77   70.2   1.57   69.7   2.44   77.1
Our Model(BART)       8.67   70.8   2.83   70.0   2.22   77.5
Our Model(mBART)      11.9   73.0   2.98   71.9   4.51   78.6

Table 2: Generation results (in %). The best results in each setting are in bold. Our full model achieves significantly better performance than the baselines with the same architecture in the same settings (paired t-test p < 0.05).
2 https://github.com/facebookresearch/fairseq/tree/main/examples/bart
3 https://github.com/facebookresearch/fairseq/tree/main/examples/mbart
Quincy Brown. 2009. Mobile intelligent tutoring system: moving intelligent tutoring systems off the desktop.
Andrew Caines, Helen Yannakoudakis, Helena Edmondson, Helen Allen, Pascual Pérez-Paredes, Bill Byrne, and Paula Buttery. 2020. The teacher-student chatroom corpus. In Proceedings of the 9th Workshop on NLP for Computer Assisted Language Learning, pages 10-20.
Jaime R Carbonell. 1970. AI in CAI: An artificial-intelligence approach to computer-assisted instruction. IEEE Transactions on Man-Machine Systems, 11(4):190-202.
Elizabeth Emily Carter. 2014. An intelligent debugging tutor for novice computer science students.
Robert Costello. 2012. Adaptive intelligent personalised learning (AIPL) environment. Ph.D. thesis, University of Hull.
Beate Grawemeyer, Manolis Mavrikis, Wayne Holmes, Sergio Gutierrez-Santos, Michael Wiedmann, and Nikol Rummel. 2016. Affecting off-task behaviour: how affect-aware feedback can improve student learning. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, pages 104-113.
Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7).
Sanghyun S Jeon and Stanley YW Su. 2010. Adaptive e-learning using ecpaa rules, Bayesian models, and group profile and performance data. International Journal of Learning Technology, 5(4):415-434.
Aytürk Keleş, Rahim Ocak, Ali Keleş, and Aslan Gülcü. 2009. Zosmat: Web-based intelligent tutoring system for teaching-learning process. Expert Systems with Applications, 36(2):1229-1239.
Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
Utku Kose and Ahmet Arslan. 2017. Optimization of self-learning in computer engineering courses: An intelligent software system supported by artificial neural network and vortex optimization algorithm. Computer Applications in Engineering Education, 25(1):142-156.
Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437-1447, Melbourne, Australia. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
Jakub Macina, Nico Daheim, Lingzhi Wang, Tanmay Sinha, Manu Kapur, Iryna Gurevych, and Mrinmaya Sachan. 2023. Opportunities and challenges in neural dialog tutoring. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53.
Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. Soloist: Few-shot task-oriented dialog with a single pretrained auto-regressive model. arXiv preprint arXiv:2005.05298.
Martha C Polson and J Jeffrey Richardson. 2013. Foundations of intelligent tutoring systems. Psychology Press.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels. Association for Computational Linguistics.
Jeffrey Ralph James Richardson. 1988. Foundations of intelligent tutoring systems. Psychology Press.
Timo Schick and Hinrich Schütze. 2021. Few-shot text generation with natural language instructions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 390-402.
Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3776-3784. AAAI Press.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196-205, Denver, Colorado. Association for Computational Linguistics.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.
Katherine Stasaski, Kimberly Kao, and Marti A. Hearst. 2020. CIMA: A large open access dialogue dataset for tutoring. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-64, Seattle, WA, USA → Online. Association for Computational Linguistics.
Abhijit Suresh, Jennifer Jacobs, Charis Harty, Margaret Perkoff, James H. Martin, and Tamara Sumner. 2022. The TalkMoves dataset: K-12 mathematics lesson transcripts annotated for teacher and student discursive moves. Pages 4654-4662.
Abhijit Suresh, Tamara Sumner, Jennifer Jacobs, Bill Foland, and Wayne Ward. 2019. Automating analysis and feedback to improve mathematics teachers' classroom discourse. In Proceedings of the Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, volume 33, pages 9721-9728.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
Dinesha Weragama and Jim Reye. 2014. Analysing student programs in the PHP intelligent tutoring system. International Journal of Artificial Intelligence in Education, 24(2):162-188.
Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3351-3357. AAAI Press.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
Tiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 27-36, Saarbrücken, Germany. Association for Computational Linguistics.
| [
"https://github.com/facebookresearch/fairseq/",
"https://github.com/facebookresearch/fairseq/"
] |
[
"Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support",
"Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support",
"Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support",
"Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support"
] | [
"Charlotte Schluger \nCornell University\nUSA\n",
"Jonathan P Chang \nCornell University\nUSA\n",
"Karen Levy \nCRISTIAN DANESCU-NICULESCU-MIZIL\nCornell University\nUSA\n",
"Charlotte Schluger \nCornell University\nUSA\n",
"Jonathan P Chang \nCornell University\nUSA\n",
"Cristian Danescu-Niculescu-Mizil \nCornell University\nUSA\n",
"Karen 2022 Levy \nCornell University\nUSA\n",
"Proactive \nCornell University\nUSA\n",
"Charlotte Schluger \nCornell University\nUSA\n",
"Jonathan P Chang \nCornell University\nUSA\n",
"Karen Levy \nCRISTIAN DANESCU-NICULESCU-MIZIL\nCornell University\nUSA\n",
"Charlotte Schluger \nCornell University\nUSA\n",
"Jonathan P Chang \nCornell University\nUSA\n",
"Cristian Danescu-Niculescu-Mizil \nCornell University\nUSA\n",
"Karen 2022 Levy \nCornell University\nUSA\n",
"Proactive \nCornell University\nUSA\n"
] | [
"Cornell University\nUSA",
"Cornell University\nUSA",
"CRISTIAN DANESCU-NICULESCU-MIZIL\nCornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"CRISTIAN DANESCU-NICULESCU-MIZIL\nCornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA",
"Cornell University\nUSA"
] | [] | To address the widespread problem of uncivil behavior, many online discussion platforms employ human moderators to take action against objectionable content, such as removing it or placing sanctions on its authors. This reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation, and has accordingly underpinned many recent efforts at introducing automation into the moderation process. Comparatively less work has been done to understand other moderation paradigms-such as proactively discouraging the emergence of antisocial behavior rather than reacting to it-and the role algorithmic support can play in these paradigms. In this work, we investigate such a proactive framework for moderation in a case study of a collaborative setting: Wikipedia Talk Pages. We employ a mixed methods approach, combining qualitative and design components for a holistic analysis. Through interviews with moderators, we find that despite a lack of technical and social support, moderators already engage in a number of proactive moderation behaviors, such as preemptively intervening in conversations to keep them on track. Further, we explore how automation could assist with this existing proactive moderation workflow by building a prototype tool, presenting it to moderators, and examining how the assistance it provides might fit into their workflow. The resulting feedback uncovers both strengths and drawbacks of the prototype tool and suggests concrete steps towards further developing such assisting technology so it can most effectively support moderators in their existing proactive moderation workflow. | 10.1145/3555095 | [
"https://export.arxiv.org/pdf/2211.16525v1.pdf"
] | 253,460,203 | 2211.16525 | 7a3a58630daa55d87433e351885272719c713ec6 |
Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support
Charlotte Schluger
Cornell University
USA
Jonathan P Chang
Cornell University
USA
Karen Levy
CRISTIAN DANESCU-NICULESCU-MIZIL
Cornell University
USA
Charlotte Schluger
Cornell University
USA
Jonathan P Chang
Cornell University
USA
Cristian Danescu-Niculescu-Mizil
Cornell University
USA
Karen 2022 Levy
Cornell University
USA
Proactive
Cornell University
USA
Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support
10.1145/3555095370 Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 370 (November 2022), 27 pages. https://CCS Concepts: • Human-centered computing → Interactive systems and toolsCollaborative and social computing systems and tools• Computing methodologies → Natural language processing Additional Key Words and Phrases: antisocial behavior, content moderation, algorithmic assistance, hybrid systems ACM Reference Format:
To address the widespread problem of uncivil behavior, many online discussion platforms employ human moderators to take action against objectionable content, such as removing it or placing sanctions on its authors. This reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation, and has accordingly underpinned many recent efforts at introducing automation into the moderation process. Comparatively less work has been done to understand other moderation paradigms-such as proactively discouraging the emergence of antisocial behavior rather than reacting to it-and the role algorithmic support can play in these paradigms. In this work, we investigate such a proactive framework for moderation in a case study of a collaborative setting: Wikipedia Talk Pages. We employ a mixed methods approach, combining qualitative and design components for a holistic analysis. Through interviews with moderators, we find that despite a lack of technical and social support, moderators already engage in a number of proactive moderation behaviors, such as preemptively intervening in conversations to keep them on track. Further, we explore how automation could assist with this existing proactive moderation workflow by building a prototype tool, presenting it to moderators, and examining how the assistance it provides might fit into their workflow. The resulting feedback uncovers both strengths and drawbacks of the prototype tool and suggests concrete steps towards further developing such assisting technology so it can most effectively support moderators in their existing proactive moderation workflow.
INTRODUCTION
Online discussion platforms enable new forms of interaction and knowledge sharing and have become key to supporting online collaboration. However, incivility threatens these benefits, harming the mental and emotional health of individuals who are exposed to antisocial behavior or targeted by personal attacks [1,53], reducing engagement [14], or distracting from the underlying goals of the discussion [2]. As a result, most online platforms use moderation to keep discussions within their community guidelines.
Moderation of public discussion platforms traditionally consists of a process in which human moderators respond to instances of antisocial behavior they discover-either by manually searching themselves or through reports from community members. This approach is labor-intensive, making it difficult to scale to the amount of content generated by modern-day platforms, and can be practically and emotionally challenging for the human moderators [23,25]. These challenges have led to increasing interest in the use of algorithmic tools to (partially) automate the process of content moderation. Much of the recent work in this direction has focused on applying machine learning to automatically detect antisocial comments [51,65], a technique which has gone on to see use in industry through tools like the Perspective API. 1 While such tools have the potential to alleviate the labor-intensiveness of content moderation, they do not address one crucial limitation: this entire process is fundamentally reactive, consisting of responding to antisocial content that has already been posted. Even the best reactive moderation can only come after damage has already been done-possibly exposing any number of users to incivility and distracting from or preventing productive discussions. Despite the technical attention that has been given to developing automatic classifiers for antisocial behavior, there has been surprisingly little work done to understand how well these tools actually fit into a human moderation workflow [35]: Is locating uncivil comments for the purpose of reactive moderation actually the sole task where human moderators need algorithmic support? Or are there other aspects of the moderation workflow where an algorithmic tool can be of help?
In this work, we start addressing these questions by studying the potential for algorithmic assistance in a different moderation paradigm-proactive moderation-through a case study of a collaborative setting: Wikipedia Talk Pages. We focus on a specific proactive moderation strategy that prior work has identified as needing algorithmic support: identifying and monitoring conversations that are at-risk of devolving into uncivil behavior, with the goal of intervening early to avoid derailment or to mitigate its effects in a timely manner [39,59]. We interview moderators to understand their proactive moderation practices and identify their needs for algorithmic support. To explore the feasibility of addressing these needs with the technology available today, we build a prototype tool for assisting proactive moderation and observe how moderators engage with it in the context of their existing workflow. In our analysis we pay particular attention to the ethical issues that come to the fore, and discuss their implications for system design and future work.
Concretely, our case study aims to address the following primary research questions:
(1) (How) do moderators act proactively to prevent the derailment of discussions into uncivil behavior?
(2) (How) can an algorithmic tool assist moderators in their proactive moderation workflow?
Building on the answers to these questions, we identify concrete next steps towards further developing such assisting technology so that it can support moderators in their existing proactive moderation workflow.
BACKGROUND: THE LANDSCAPE OF MODERATION
Our work contributes to an extensive line of research on online discussion moderation which has aimed to both understand existing practices and develop new ones. Thus, to motivate our focus on the proactive paradigm and put our contributions in context, we begin by briefly surveying how prior work has characterized moderation practices, with an emphasis on identifying where existing practices fall on the reactive-proactive spectrum and exploring the role of algorithmic tools.
[Figure: an example Wikipedia Talk Page exchange about a reverted edit and spoilers that derails into a personal attack, shown along a timeline marking where each moderation paradigm acts.]
Proactive moderation monitors a conversation that seems to be at risk of turning uncivil, with the goal of intervening to avoid derailment or to mitigate its effects in a timely manner.
Pre-screening moderation holds each comment in a queue, requiring a moderator to explicitly approve it before it can be publicly viewed.
Reactive moderation takes action after an uncivil comment has been posted, e.g., by removing the comment or imposing sanctions on its author.
Practices of Moderation
In the popular consciousness, content moderation is often associated with the removal of illegal, hateful, or otherwise objectionable content [23], especially high-profile cases such as Reddit's 2015 mass banning of hate communities [9]. However, removal of content is but one part of content moderation, which in reality involves a wide and diverse range of hidden labor [16,48,64] that, while less visible to the public eye than a high-profile ban, is no less important to maintaining the everyday functioning of an online community [29].
The need to look beyond content removal is particularly salient in collaborative platforms like Wikipedia, which have eschewed top-down, platform-driven governance in favor of a community model of moderation [55], wherein ordinary community members volunteer to take on moderator roles [3,29]. Unlike platform-employed professional moderators, who are often seen as distant and separate from the communities they moderate [55], volunteer moderators are by definition part of their communities, and many will continue to participate in conversations and other informal interactions [47,64]. Volunteer moderators must therefore balance their "dual identities" as both regular community members and authority figures. Different ways of managing this balance will result in different conceptions of one's role and purpose as a moderator; in interviews, volunteer moderators have described their work with metaphors that range from the formal ("police", "governor", "manager") to the informal ("team member", "facilitator", "adult in the room") [57]. This diversity in attitudes towards moderation naturally leads to a diversity in employed methodology. While many platforms that use volunteer moderation offer their moderators formal tools similar to those wielded by their platform-employed professional counterparts, such as the authority to remove comments [22,34] or suspend users [12,47], volunteer moderators often express a preference for softer, social approaches to keeping the community in line [59]. Examples of such approaches include publicly modeling good behavior [58], educating users about the rules [6], and mediating disputes [3].
Given this wide breadth of moderation practices, researchers have sought to group and categorize them as a first step towards studying and understanding moderation [26,57]. In particular, an emerging line of work has proposed to distinguish different practices based on when they happen relative to the objectionable action being moderated: some practices are meant to respond to such behavior after it happens, while others are meant to discourage or prevent such behavior in the first place [40]. The former group has been referred to as reactive [47,59] or ex-post [26] moderation, and the latter as proactive or ex-ante. Additionally, other work has identified a third set of practices in this space, known as pre-screening moderation [40,57], which occupy a middle ground by inserting an extra step of moderator approval in between the creation and public posting of user-created content. In this work, we adopt the proactive/reactive/pre-screening categorization of moderation practices in the context of online discussion moderation. In the following sections (2.2-2.4) we discuss each in the context of online discussion platforms and compare their relative merits. Then, we survey the existing state of algorithmic approaches to moderation (2.5) and identify a key gap in the literature, algorithmic approaches to proactive moderation, which we seek to fill.
Reactive Moderation of Online Discussions
Currently, discussion moderation in online platforms most commonly takes the form of reacting to instances of antisocial behavior or rule violations [23]. The exact response to such content may vary [49]. On most large-scale platforms, a standard response is to remove antisocial content that gets discovered by or reported to a centralized team of moderators [10,23]. Other platforms take a slightly softer approach and simply limit the visibility of such content rather than removing it outright [45]; some platforms extend this type of approach and limit content visibility on a user-specific basis, allowing the blocking of content from known bad actors [37]. A further user-level response is to temporarily or permanently ban the authors of antisocial content from the platform [12,36].
Regardless of the exact nature of the response, however, reactive moderation in general is characterized by the fact that it relies on locating and responding to antisocial content that has already been posted. Moderator responses in this paradigm thus serve as a form of post-hoc damage control, designed both to prevent the harm of the antisocial content from spreading too far, and to signal to the broader community that such behavior is unacceptable [40], thereby discouraging other users from reproducing it [60].
While reactive moderation is widely used and studied, it also comes with an inherent weakness: because it involves taking action against antisocial content that has already been posted, the offending content has an opportunity to be seen and to spread before moderators are able to take action, if they ever do at all. This can harm users exposed to the antisocial content, in addition to harming the platform by preventing or distracting from productive discussions. For this reason, it has been argued that truly limiting the damage of antisocial behavior requires taking action before antisocial content can be seen by general audiences [40].
In addition to potentially harming platforms and their users, reactive moderation strategies can further harm the moderators tasked with regularly viewing and taking action on disturbing content, ranging from the extreme stress of reacting to hateful comments to retribution from users in response to a moderation action, which can threaten moderators' personal safety [4,16,64]. We believe these harms compel us to investigate alternative approaches to moderation that may be able to avoid some of this human suffering.
Pre-screening Moderation of Online Discussion
One approach to taking action before antisocial content can be seen by general audiences is pre-screening: the paradigm by which all content must be reviewed and explicitly approved by moderators before it appears on the platform [40]. This approach, hearkening back to the days of traditional pen-and-paper media, is still employed by a handful of platforms such as the New York Times comment section. 2 Additionally, there has been work on hybrid automatic/human systems for comment pre-screening [52].
However, most platforms avoid this strategy because it raises a host of practical issues. Pre-screening is highly labor-intensive and scales poorly as a platform grows: for example, even with the help of algorithmic pre-screening, the New York Times currently only allows comments on top stories for 8 hours during weekdays. Moreover, pre-screening prevents real-time interaction between users on a platform by introducing a delay between users submitting content and that content appearing on the platform while moderators review it. Finally, pre-screening has been subjected to criticism on the grounds of suppressing free speech [23].
While pre-screening may prevent antisocial behavior from reaching general audiences, it still relies on moderators to react to and address all attempts at antisocial behavior users make on their platform-this reaction has just shifted from the public square to the moderators' private space. This shift limits the effectiveness of pre-screening by reproducing the problems described above.
Proactive Moderation of Online Discussion
In online discussion platforms, proactive approaches to moderation can aim to discourage undesirable actions or to encourage prosocial behavior and productive conversations [58,59]. Proactive strategies can range from static design decisions to dynamic interventions in which moderators play a more active role. They can also be broadly applied to the entire community or targeted towards specific situations, users, or conversations.
Static strategies primarily involve deliberate choices in platform design aimed at promoting pro-social behaviors. These have a long and established history in social computing; now-common design choices such as activity indicators [19] and explicitly listed rules [40] were initially developed as measures to encourage the development and adoption of social norms within online communities. More recent developments in this direction include limitations on community size or rate of participation [26] and systems that prompt users to reflect more deeply on comments they have read [33,43,44]. While such design elements have historically been broadly applied at the level of the entire platform, there has been work on UI-level interventions that are more targeted to specific situations [31,47]. In the space of online discussions, such work has included psychological priming strategies that are deployed to users when they are about to comment in a discussion thread [56,61].
As platforms have grown and evolved, they have developed more dynamic strategies for proactive community management, in which moderators take a more direct role. For instance, a natural development from static listing of rules involves moderators personally sending reminders of community rules in high-impact situations, such as when welcoming newcomers [59]. As a further step from this, recent work has looked at how moderators can model good behavior in their own interactions, as a way of implicitly signaling to the community what proper behavior looks like [33,58].
One dynamic strategy in particular has picked up increasing interest: moderators actively monitor ongoing conversations in order to proactively prevent them from derailing into uncivil behavior or, at least, to be in a position that allows them to mitigate the effects of derailment in a timely manner [39,47]. Unlike the more static strategies discussed above, this dynamic and highly-targeted moderation strategy requires substantial time and effort on behalf of the moderators and thus scales poorly. As such, prior work has advocated for offering algorithmic support for this type of strategy, in the form of "predictive suggestions for threads or discussions that might soon devolve" that could help "moderators to focus their attention where it is needed." [59]. This current work aims to develop a system that can provide such algorithmic support and to test its feasibility.
As with pre-screening moderation, proactively preventing conversations from derailing into incivility raises ethical concerns around censorship and free speech. We engage with these issues, assess the risk of potential misuses of a tool for assisting proactive moderation-such as shutting down conversations or blaming participants proactively based only on a prediction of future incivility-and discuss implications on its design. While these concerns are warranted and deserving of a meaningful inquiry, we argue that the well-established harms to users, platforms, and moderators in the popular reactive moderation framework motivate us to explore alternate approaches to moderation. Because the status quo is not a neutral ethical choice, we must investigate such alternative approaches even though it requires a careful ethical analysis. We return to these questions about the ethics of proactive moderation in Section 5.3.2.
Algorithmic Tools for Moderation
Multiple studies on content moderation have identified a problem of scale: even if antisocial behavior is a small fraction of all content that gets posted, the sheer size of modern online platforms, together with the relatively small number of moderators present on most platforms, makes it infeasible for human moderators to keep up with all the content in need of moderation [16,24,47,65]. This has led to mental strain and burnout among moderators [16] and has directly inspired calls for the development of technological assistance to reduce the burden on human moderators [64]. As Gillespie writes, "the strongest argument for the automation of content moderation may be that, given the human costs, there is simply no other ethical way to do it, even if it is done poorly" [24]. Technological responses to this call have ranged in complexity: basic tools include simple word-based filters [7,47,64] and blocklists [37], while more advanced systems attempt to use machine learning and natural language processing techniques to automatically identify antisocial content [20,51,65].
Regardless of the choice of technical backend, most algorithmic tools for moderation are optimized for the reactive and pre-screening paradigms. A common use case is to apply the filter or classifier to content that has already been submitted for public posting; while in rare cases this can be applied as a pre-screening approach where the filter automatically blocks certain submitted content from getting posted (usually involving high-precision filters that look for hand-chosen terms known to cause problems in a specific micro-community) [47,64], the more common application is to use the filter or classifier to flag content for review by human moderators (allowing the content to stay public in the meantime) [8,21,35,47], which results in a reactive moderation workflow.
In comparison, algorithmic assistance for proactive moderation has been understudied. As described in Section 2.4, research in the proactive space has largely focused on user-facing interventions rather than moderator-facing tools. Historically, this could be attributed to a technology gap: prior work has argued that an algorithmic tool for proactive moderation, with comparable scope to tools currently available for reactive and pre-screening moderation, would require technology that can predictively identify discussions that are about to devolve into antisocial behavior [59]. Such technology has only recently become available, with a series of systems now capable of making such ahead-of-time forecasts [13,46,66]. We therefore identify an opportunity to use this predictive technology to provide algorithmic assistance for proactive moderation, and our current work aims to explore this promising new direction.
CASE STUDY: WIKIPEDIA TALK PAGES
In order to begin understanding the landscape of proactive moderation and potential for algorithmic assistance within the framework of community moderation, we conduct a case study of moderation on Wikipedia Talk Pages. On Wikipedia, discussions are not the primary content the platform provides; rather, Wikipedia hosts conversations on 'Talk Pages' in order to facilitate discussion around the content of articles or broader policies governing the encyclopedia [41]. 3 In this collaborative, goal-driven discussion environment, antisocial behavior is particularly impactful, threatening the health of the editor community and disrupting productivity [32,42].
Moderation of Talk Page discussions is community driven [55]: the Wikipedia community elects administrators with broad technical powers on the platform such as deleting articles or blocking other users. 4 A subset of these administrators choose to engage in discussion moderation. We note that there is no formal designation distinguishing discussion moderators from the rest of the administrators, and that discussion moderation practices (e.g., when a personal attack is subject for removal) are left largely at the discretion of these administrators. 5 The use of automated tools to assist in moderation has a long history on Wikipedia. Fully automated systems known as "bots" have been used to identify vandalism since as early as 2006 [29], and to this day bots continue to play key roles on Wikipedia, not only in vandalism detection but also in more social aspects of community management such as welcoming and educating new users [50,67]. While bots may be capable of handling mechanical tasks, other aspects of moderation still require a human touch, and for such tasks moderators make use of a different class of tools: "assisted editing programs" which are designed to augment (rather than replace) human moderation work [21]. A common design pattern in this space is to organize moderators' workload into work queues that help moderators prioritize situations in need of attention; this approach is exemplified by the popular tool Huggle [21] which organizes edits based on their algorithmically-determined likelihood of being vandalism. Beyond anti-vandalism, similar algorithmic approaches are used in another widely-used tool, ORES, to detect more types of rule violations in article edits [28].
Taken together, the goal-driven nature of Wikipedia Talk Page discussions, the large degree of discretion given to moderators, and the familiarity Wikipedia moderators have with algorithmic tools in their workflow make this a convenient setting for an initial case study of proactive moderation practices. The fact that using automated tools is already a broadly accepted moderation practice on Wikipedia allows our study to focus specifically on the proactive aspect of our experimental tool rather than being confounded by moderators' thoughts on automation in general. The preponderance of existing tools on Wikipedia also gives us a good starting point from which to take design cues. Moreover, we believe the goal-driven nature of the discussions provides moderators with a strong motivation to improve their moderation practices, while the large degree of discretion granted to Wikipedia moderators gives them the freedom to consider and attempt alternative strategies.
It is important to note upfront that the empirical setting for our case study does impose some limitations on our work. Given the unique structure and culture of Wikipedia, our goal is not to report findings that generalize to any type of platform, but rather to begin understanding proactive moderation practices and the potential for algorithmic support in the specific setting of goal-driven online discussions. In the process, we provide a blueprint that other researchers can follow to begin understanding proactive moderation in other types of online communities, both for its own sake and for comparison with this setting.
METHODS
In order to begin to understand moderators' proactive moderation workflow, as well as to investigate the potential of integrating algorithmic assistance into this workflow, we used two methodological approaches: (1) we conducted interviews with moderators of public online discussions, and (2) we developed a prototype tool for assisting proactive moderation. The interviews explored moderators' experiences with proactive moderation in general, and also involved putting our prototype tool in their hands to observe how they might use a proactive moderation tool in practice and to inform its design. We pursued and iterated on these approaches simultaneously, and each analysis was informed by the other.
Interviews
Following a rich line of prior literature that uses interviews to pull back the curtain on moderation practices [8,16,27,59,64], we conducted semi-structured interviews with nine administrators on Wikipedia who engage in Talk Page moderation. Each interview was conducted over Zoom and lasted approximately one hour; we subsequently produced full de-identified transcripts and coded the data using thematic analysis.
The interviews had two primary goals. The first half of the interview focused on participants' current practices: the role of administrators in moderation on Wikipedia, the goals of moderation, the ways participants moderate proactively, and how they reason about the future of conversations to inform their proactive interventions. The second half of the interview focused on understanding the potential of an algorithmic tool for assisting proactive moderation. By having participants use our prototype tool on real discussions happening live on Wikipedia Talk Pages, we had the opportunity to observe how it fits into their proactive moderation workflow, to what extent it addresses their needs, and to get feedback on design and ideal usage of such a tool directly from its target audience. The generic script of the interviews is included in Appendix A.
We conducted these interviews with Institutional Review Board (IRB) approval and recruited participants through snowball sampling: by asking each participant to recommend any other individuals they know who do discussion moderation on Wikipedia. While our interviews provided invaluable direct access to moderators and their domain-specific knowledge, this recruitment procedure does impose a potential limitation on our work, as it may bias our findings toward the one branch of the Wikipedia moderator social graph that our sampling procedure reached.
Prototype Tool for Assisting Proactive Moderation
Our prototype tool is implemented as a password protected website that includes two main features: a ranked view of ongoing conversations ordered according to their likelihood of derailing into future antisocial behavior (Figure 2), and a conversation view giving a comment-by-comment breakdown of risk levels within the discussion (Figure 3). The tool currently works with discussions taking place on two different public discussion platforms (Wikipedia Talk Pages and Reddit, although for the purpose of this work, we focus on the former) and is demonstrated in the Video Figure. 6,7 We now delve into the technical details of this tool.
Backend: The CRAFT Architecture. Our prototype tool is powered by a recent Natural Language Processing paradigm-conversational forecasting-which can be used to predict the future occurrence of an event in a conversation based on its current state [46,66]. Prior work has applied this paradigm to the task of forecasting future antisocial behavior in online discussions, and found that algorithmic forecasting models are approaching the level of human intuition on this task, albeit with lower accuracy [66]. While algorithmic models are not yet at the level of human performance, they are at a level that is comparable to models already used in existing Wikipedia moderation tools: the current state-of-the-art model for forecasting future antisocial behavior, CRAFT, has a reported F1 score of 69.8% [13], which is close to that of some models used in the popular ORES moderation tool. 8 We therefore believe that such models, while imperfect, are mature enough to be potentially useful to moderators. In light of this, we use the publicly available CRAFT model trained for forecasting future antisocial behavior on Wikipedia 9 to power our prototype tool.
Formally, given a discussion C, modeled as a sequence of comments in reply-order, C = {c_1, c_2, ..., c_n}, CRAFT is trained to output the probability that a specified conversational event-in this case, antisocial behavior-will occur in the (not-yet-existent) next comment c_{n+1} in C. In other words, CRAFT(C) = P(isAntisocial(c_{n+1})). CRAFT is an online model, in that it updates its forecast for each new comment in the discussion. That is, once an actual comment c_{n+1} is posted, CRAFT can read the content of c_{n+1} to produce an updated forecast P(isAntisocial(c_{n+2})), and so on.
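To make this online formulation concrete, the following minimal sketch (in Python) shows how a per-comment forecast history could be computed. The craft_score function here is a hypothetical stand-in for the released CRAFT model rather than its actual API; the sketch only illustrates the prefix-by-prefix forecasting behavior described above.

```python
# Minimal sketch of online conversational forecasting, as described above.
# `craft_score` is a hypothetical stand-in for the released CRAFT model:
# given the comments posted so far, it returns P(next comment is antisocial).
from typing import Callable, List

def forecast_history(comments: List[str],
                     craft_score: Callable[[List[str]], float]) -> List[float]:
    """Return one forecast per prefix of the conversation.

    history[k] is the estimated probability, after seeing comments
    c_1, ..., c_{k+1}, that the not-yet-written next comment will be antisocial.
    """
    history: List[float] = []
    for k in range(1, len(comments) + 1):
        history.append(craft_score(comments[:k]))
    return history

def update_history(history: List[float], comments: List[str],
                   craft_score: Callable[[List[str]], float]) -> List[float]:
    """In the online setting, each new comment triggers exactly one new forecast."""
    history.append(craft_score(comments))
    return history
```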
Since Wikipedia does not provide any special functionality to track live Talk Page conversations-in fact, Talk Pages are treated equivalently to normal article pages, with no special support for conversations-we implemented a system that parses and monitors changes in the discussions happening on a selected set of Wikipedia Talk pages 10 in real time. To this end, we use the diff functionality from the Wikipedia API and use a set of heuristics to determine whether a change on the Talk Page represents a new comment in a discussion on that page, and which discussion it belongs to. At regular intervals, the backend pulls the latest updates to every talk page being tracked, parses the updates to extract the conversations happening on the page, and runs CRAFT on those conversations to get an updated forecast of the risk of future incivility for each conversation. The tool also keeps track of how the CRAFT forecast for a discussion has changed over time. That is, for a (possibly ongoing) discussion C = {c_1, c_2, c_3, . . . }, the tool creates and maintains a history of CRAFT forecasts {CRAFT({c_1}), CRAFT({c_1, c_2}), CRAFT({c_1, c_2, c_3}), . . . }.
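A rough sketch of such a polling loop is shown below. It fetches the latest wikitext of each tracked Talk page through the public MediaWiki API, and relies on hypothetical parse_discussions and craft_score helpers standing in for the parsing heuristics and forecasting model described above; it is an illustration of the general approach, not the prototype's actual implementation.

```python
# Illustrative polling loop (not the authors' implementation): fetch the latest
# revision of each tracked Talk page and re-run the forecasting model whenever
# the page content changes. `parse_discussions` and `craft_score` are
# hypothetical helpers for the heuristics and model described in the text.
import time
import requests

API_URL = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "proactive-moderation-sketch/0.1 (research example)"}

def latest_wikitext(title: str) -> str:
    """Fetch the current wikitext of a page via the MediaWiki API."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
        "formatversion": 2,
    }
    data = requests.get(API_URL, params=params, headers=HEADERS, timeout=30).json()
    return data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

def poll_talk_pages(titles, parse_discussions, craft_score, interval=60):
    """Keep a per-discussion history of CRAFT forecasts for the tracked pages."""
    page_cache = {}   # title -> last seen wikitext
    forecasts = {}    # discussion id -> [CRAFT({c_1}), CRAFT({c_1, c_2}), ...]
    while True:       # runs until interrupted; a real service would add error handling
        for title in titles:
            text = latest_wikitext(title)
            if page_cache.get(title) == text:
                continue                      # no new edits on this page
            page_cache[title] = text
            for disc_id, comments in parse_discussions(text).items():
                history = forecasts.setdefault(disc_id, [])
                # Only score prefixes that have not been scored yet.
                for k in range(len(history) + 1, len(comments) + 1):
                    history.append(craft_score(comments[:k]))
        time.sleep(interval)
```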
Frontend: The Moderator's Display. Our prototype frontend consists of two sections: a Ranking View and a Conversation View. The frontend adopts design metaphors used in existing Wikipedia moderation tools, and comes with a broad range of features and parameters in order to engage interview participants in a discussion that can inform future design.
On Wikipedia, assisted editing programs used by moderators often center around the organizational concept of the work queue, in which content that was algorithmically determined to warrant human review is presented to moderators in a centralized, prioritized list [21,29]. This is a proven design metaphor, having been used in empirically-studied moderator tools on Wikipedia [21] and elsewhere [8]. As such, our prototype tool similarly centers around a Ranking View (Figure 2) as its main feature. Based on a list of Talk Pages to include, our prototype tool provides a CRAFT score ranking of all the conversations taking place on any of these pages, sorted in the order of their risk of derailing in the future. Each conversation is represented as a row in the ranking and is color coded to indicate its forecasted risk of derailing (shades of red, with darker red corresponding to higher risk). This reflects the most up to date CRAFT forecast for that conversation, computed based on all the comments posted so far in the conversation, i.e., CRAFT({c_1, c_2, . . . , c_n}). Additionally, to make it easy to identify rapidly escalating situations, each conversation in the ranking is also decorated with an arrow whose direction and size reflect the direction and magnitude of the most recent change in CRAFT forecast, i.e., CRAFT({c_1, c_2, . . . , c_n}) − CRAFT({c_1, c_2, . . . , c_{n-1}}); for example, a large red upward-facing arrow would signal that tension is rapidly rising in the respective conversation.
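As a concrete illustration, both the ranking and the trend arrow can be derived directly from the stored forecast history. The sketch below assumes the per-discussion history maintained by the backend; the thresholds, color buckets, and arrow glyphs are illustrative choices, not the prototype's exact presentation.

```python
# Sketch of deriving the Ranking View's ordering and trend arrows from a
# per-discussion forecast history (one CRAFT score per comment posted so far).
# Thresholds and glyphs are illustrative, not those used in the prototype.

def trend_indicator(history):
    """Return (glyph, magnitude) for the most recent change in forecast."""
    if len(history) < 2:
        return "→", 0.0
    delta = history[-1] - history[-2]  # CRAFT({c_1..c_n}) - CRAFT({c_1..c_{n-1}})
    glyph = "↑" if delta > 0 else ("↓" if delta < 0 else "→")
    return glyph, abs(delta)

def risk_shade(score):
    """Map the latest forecast to a coarse color bucket (darker = higher risk)."""
    if score >= 0.8:
        return "dark red"
    if score >= 0.6:
        return "red"
    if score >= 0.4:
        return "light red"
    return "neutral"

def ranked_rows(forecasts):
    """Sort discussions by their latest forecasted risk, highest first."""
    rows = []
    for disc_id, history in forecasts.items():
        glyph, magnitude = trend_indicator(history)
        rows.append((disc_id, history[-1], risk_shade(history[-1]), glyph, magnitude))
    return sorted(rows, key=lambda row: row[1], reverse=True)
```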
In addition to displaying summary level information about a conversation, each row of the Ranking view is a clickable link that leads to the Conversation View (Figure 3), which displays the entire history of that conversation. The Conversation View presents the text of each comment in the conversation along with the time it was posted, its author, and the CRAFT score (color coded as before) at the time that comment was posted, i.e., taking into account the conversation up to and including that comment. This provides some level of transparency as to why the algorithm placed the conversation at a certain position in the Ranking View, allowing the moderator to observe how the predicted risk evolves as a conversation progresses. This design bears similarities to how algorithm decisions are presented to moderators in the experimental Reddit reactive moderation tool Crossmod [8].
FINDINGS
Moderator Goals: Content and Environment
To contextualize our discussion, we start by understanding the broad goals moderators have in our particular domain of Wikipedia Talk Page discussions. Following from the goal-oriented nature of these discussions, as discussed in Section 3, participants highlighted how maintaining civil and productive discussions is not the end goal of their moderation. Rather, keeping discussions civil and functional is a crucial intermediary goal towards their primary goals: maintaining high quality content on the platform-in this case, encyclopedia articles-and maintaining a good environment for editors. As P6 puts it:

P6: [When I find a conversation headed downhill] I would not really care about the threads as having the thing go on, I'd care about the article and the environment of Wikipedia. I think those are the two things that I care about.

Discussion moderation is crucial to maintaining these goals: antisocial behavior in discussions contributes directly to a hostile platform environment. Moreover, it can threaten the platform's content when it pushes editors to give up on editing an article or leave Wikipedia altogether [63], or when it prevents or distracts from the conversations necessary for content creation and refinement. This finding corroborates prior work showing how volunteer moderators are motivated by a variety of goals, including supporting a positive social environment and maintaining on-topic discussions relevant to their platform [18,55,64].
A further consideration for moderators is that Wikipedia relies heavily on experienced users to contribute to the articles [29]; when these important content creators act uncivilly in a discussion, moderators are hesitant to sanction them because of their perceived value in writing articles even though this incivility threatens the Wikipedia environment and alienates other users [14,30]. This exposes one way that the dual goals of moderation are in tension on Wikipedia. As P3 explains:
P3: I do believe that the English Wikipedia as a whole has a civility problem. [...] The community as a whole is far too willing to forgive incivility in the name of well-they're an experienced administrator or they're a really good content creator, so we'll just let them get by or say it wasn't that bad. And I think that that is not the path to a healthy community in the long term. I mean we have an editor retention problem, we know that. Everybody knows that. And I do think that the civility of the community is a significant part of that.

In their view, moderators' imbalanced approach to the dual goals of moderation threatens the platform overall, and contributes to the difficulty retaining users on Wikipedia. Any tool aimed at assisting moderators must consider both goals, rather than either in isolation.
Proactive Moderation Practices
Acting Proactively. Considering the broad goals of moderation on Wikipedia, we move to address one of our main research questions: Do moderators act proactively to prevent the derailment of discussions into antisocial behavior, and if so, what is their workflow?
First, we confirm that moderators on Wikipedia do in fact engage in a variety of proactive moderation strategies. The starting point in their workflow is their ability to foresee whether a conversation is at risk of derailing. If they consider that this risk is elevated, they can further start monitoring it, or even decide to intervene in the discussion to avoid future derailment. For example:
P6: Sometimes I can sit by and see things developing and I might drop by with a comment. I don't tend to get involved in very big issues and charge in but I will go in and say, 'This is becoming an inappropriate way of speaking. Let's talk collaboratively. Let's talk constructively. ' But do I monitor ongoing discussions for it? I suppose I look at some of the administrator notice boards, but I suppose I actually tend to sit more on the sidelines and watch other people engage in things, and only come in if I felt I had something to contribute or something to say like, 'Tone this down. ' And there is a good chance somebody else might too.

While moderators have access to formal administrative tools, called sanctions on Wikipedia 11 -such as blocking and interaction bans-proactively imposing any formal sanction is not permitted by Wikipedia's moderation guidelines and would raise ethical concerns; sanctions can only be used in response to a tangible offense. Therefore, the proactive interventions that moderators can employ are limited to informal moderation techniques.
Participants identify a variety of informal strategies they use to guide conversations which they assess to be at risk of derailment. For example, moderators will join a discussion as a level-headed participant in order to refocus the discussion on its original topic. P5 explains their strategy:
P5: In some of those cases I just engage as an additional participant rather than in discussion moderation just in order to just try and aid in those methods by bringing the discussion back on to context.

A similar strategy is to leave just one comment in a discussion to acknowledge a growing dispute and try to neutralize it before it gets out of hand and irreparably damages the conversation. Prior work has described this as a moderator acting as a "mediator", stepping into a conversation facing rising tensions in order to resolve conflicts between clashing discussion participants [57]. P8 explains their strategy:
P8: I'll just leave a comment being like, 'Hey guys, I think this might be going off topic, ' and then I'll give my version of events. So it will be my opinion on it, in a very neutral way where I address each of their concerns. If I do it in a very polite way I think that typically a third party-especially an admin-does put the conversation back on topic.
A different version of this strategy is to remind users of platform rules when moderators anticipate they will be violated. Prior work has described this mode as a moderator acting as a "referee", working to "resolve disputes by referencing rules or a body of accepted knowledge" [57]. This can be seen as a more targeted version of automatic reminders, such as those triggered when interacting with newcomers [31]. P4 explains this strategy as they apply it:
P4: When we have [discussions in which there seems to be a significant chance of undesirable behavior arising], periodically we'll put up notices like, 'Hey, remember to keep civil, keep your comments about the content of the discussion, not the other editors directly. '
These three interventions show the wide range in the depth of moderator involvement required for different proactive interventions. Joining a discussion as a participant to try to bring it back on track requires contextual knowledge of the conversation topic at hand and continued involvement in a discussion. Similarly, leaving one comment to address the concerns of discussion participants requires contextual knowledge of the conversation and topic at hand, but does not require ongoing engagement. Finally, reminding users of the platform's policies only requires a prediction of which policies may be violated, while the reminder itself can take the same form across discussions on different topics and does not require continued engagement.
While some participants discuss how their proactive interventions can often bring discussions back on topic and avoid severe derailment beyond hope of repair, other participants describe how proactive interventions can backfire. Even when moderators forgo their formal sanctioning powers in favor of a softer approach, some users may react negatively to what they perceive as a threat of future sanctions. This implication may alienate users and limit the effectiveness of any proactive intervention. P1 explains:
P1: I did [proactive interventions in discussions] much more when I was younger. It doesn't work very well, I think because the idea is if you're coming in as sort of like an uninvolved administrator, [...] the assumed context is that you're getting ready to sanction them, which is never as useful as a friendly reminder. If I personally know one of the parties to the dispute, which happens on occasion, I might send them a direct email or a direct message, [...] just to try to hear what's going on. I found it particularly ineffective to post on Wiki to cool down, or something.

This highlights one specific challenge moderators face when acting proactively: demonstrating to users that they genuinely want to help the conversation progress free of antisocial behavior rather than arriving early in preparation for future sanctions. This corroborates prior work showing how discussion moderators may shy away from joining discussions despite a desire to do so, because of their role as a moderator [27]. Thus, executing a successful proactive intervention requires a nuanced approach that considers the ways a moderator's actions will be perceived by users.
Benefits of Acting Proactively. In addition to the established drawbacks of reactive moderation-and the respective benefits of the proactive paradigm-discussed in prior work and elaborated in Section 2, our interviews reveal an additional issue: reactive interventions struggle to balance the two goals of moderation we identified above (Section 5.1)-high-quality content creation and a positive interactional environment.
Since, under the reactive paradigm, moderators act only after a clear violation of community norms, they can and do impose alienating formal sanctions in such cases. So, when experienced users who make otherwise valuable content behave antisocially, actions to sanction them-intended to maintain a healthy environment on the platform-alienate them and hence threaten the further development of the platform. On the other hand, protecting these antisocial users just because they create good content can cause disruption to the platform environment and alienate other users. P7 explains this conundrum:
P7: [When experienced editors clash,] that's where we, as administrators, sometimes have a very difficult task. We don't want to block experienced editors because they are very useful, very valuable. [...] By the same token, we don't want disruption. So, we've walked this very fine line where we try to hold experienced users who are sometimes misbehaving accountable without trying to block. It is a very difficult and fine line to walk and I think it would be nice if we had some way to better keep people civil, and better [...] get people to work together.
Thus, in the reactive paradigm, antisocial content can threaten moderators' goals regardless of whether or not it is addressed-disrupting the environment if it is not sanctioned, or alienating high value users if it is. Moreover, moderators face a significant challenge in realizing their dual moderation goals in the face of incivility from established users under the reactive paradigm, one that threatens their emotional health and consumes a lot of their time. P2 explains:
P2: [When] someone has been incredibly uncivil to lots and lots of people, but he's also an incredibly influential editor, it is an excruciating process to kind of get through the kind of pieces that I need to to try and rein in his incivility. I just have to be patient, [because] it's ongoing and long.

Therefore, addressing incivility from valuable content creators through the reactive paradigm threatens moderators themselves, in addition to their goals.
Where reactive moderation faces this dilemma, the proactive paradigm offers a solution. Because proactive interventions come before any tangible incivility in a conversation, they are better suited to take a softer and less alienating form, as discussed in Section 5.2.2. This allows moderators to support a healthy environment by preventing incivility in discussions while avoiding the drawbacks of reactive strategies. P2 explains their preference for using the proactive paradigm to address rising tensions in a conversation:
P2: I did not become an administrator in order to block people. There are definitely people that became administrators because that's what they want to do, they wanted to police behavior. I actually spend a fair amount of time policing behavior in terms of my overall workload, but like I said, I try to operate in the social sphere and really kind of have conversations rather than using that.
While not all moderators share this preference, proactive moderation offers those who do use it a more nuanced approach to moderation, better suited to balance their multiple moderation goals, rather than appeal directly to one or the other.
5.2.3 Foreseeing Future Derailment. One crucial prerequisite of proactive moderation is identifying which conversations are at risk of derailing into incivility. We find that moderators on Wikipedia use their own intuition about the future trajectory of conversations towards this end, considering a variety of factors to internally reason about the future of the conversations they see. For example:

Q: Given a civil conversation, do you think it is possible to predict if it will eventually derail into uncivil comments?

P7: Yes. Not always but yes. I would say, certainly with experience, you get a feel for it where if a discussion has started off on the wrong foot, maybe someone got [their edits] reverted and then they opened, you know, maybe not an uncivil but kind of a terse message like, "Hey, why did you undo my edit?, " that's not uncivil but...It started things off on a bit of a wrong foot. I could guess that some of those things might get uncivil.
Moderators use a variety of factors to make predictions about the future of conversations. Five participants report using direct observations from the conversation, like the conversation content or tone, to do forecasting. Using these direct features allows moderators to update their predictions over time as the conversation develops, whenever they check in on the conversation. On the other hand, the other four participants report forecasting solely based on metadata, including features of the conversation and of the interlocutors. Salient conversation properties identified by participants include the ostensible conversation topic (as indicated by the conversation header) and the number of participants in the conversation. Salient interlocutor metadata include level of experience on the platform, identity, and usernames. Drawing on their past experiences, participants consider such features to estimate the risk that a conversation is likely to derail in the future.
Evaluating the Feasibility of Algorithmic Assistance
Equipped with an understanding of moderators' goals and practices, we now proceed to explore concrete ways in which an algorithmic tool can assist with their proactive moderation workflow. We consider components of the workflow where moderators suggest that technical support is needed, and assess the feasibility of offering this support algorithmically with existing technology in an ethical and efficient manner. We show the feasibility of algorithmic assistance with one crucial aspect of the workflow-discovering at-risk conversations-and discuss design and ethical implications that arise from observing how moderators engage with our prototype tool. We also discuss other needs that emerge from our interviews, but that do not lend themselves to algorithmic assistance due to both technical and ethical challenges, such as assigning blame for derailment or profiling users based on their prior behavior.
Discovery of At-risk Conversations: Need and Support. We previously uncovered how moderators use their own intuition to decide which conversations to proactively moderate; now, we turn to the challenges moderators face in this crucial process and the resulting need for additional support.
One idealized form of proactive moderation that all participants found appealing is to identify conversations that they suspect are highly likely to derail and monitor them so that they can intervene proactively at an opportune moment or to react immediately to any uncivil behavior that does arise. However, moderators' ability to identify at-risk conversations to monitor is limited by the scale of the platform. P9 explains how even within the subset of topics they are interested in and engage in, their ability to effectively proactively monitor conversations is limited by their sheer number, which forces them to use only simplistic strategies, such as random discovery, to identify at-risk conversations to monitor:
P9: There are too many [potentially at-risk conversations] to proactively monitor. I know there's about 65 or 60 ongoing ones which are certainly always going to be at risk. [...] So I usually either wait until I'm asked, or I happen to see something, or I skip around and happen to notice something.

The problem of scale is exacerbated by the inherent difficulty of determining when a conversation is in need of a proactive intervention. While every participant we interviewed believes there are some contexts in which they can foresee derailment, as described in Section 5.2.3, there is a wide range in how broad this context is and how confident participants are in their forecasts. Four participants believe that they can confidently forecast antisocial behavior in any Wikipedia context, but four others believe that they can only do so in very specific contexts with low confidence, and the last participant believes they can only make such forecasts in conversations on a handful of specific topics among discussion participants they know personally.
Given that moderators are often uncertain about their forecasts of a conversation's future trajectory, they may hesitate to intervene immediately, and instead desire to keep an eye on the situation to see how it develops. P3 explains:

P3: From time to time I do see a discussion I think that I want to monitor, and I'm like 'Yeah, I probably should be keeping an eye on this. ' [. . . ] I might leave a tab open on it and come back to it just in case.

As P3 goes on to elaborate, however, this idealized notion of monitoring a conversation as it develops in real time is impractical in reality:
P3: There are some technical challenges to [monitoring a discussion] just because of the way the Wikipedia software works. There isn't an easy way to say, 'Give me updates for any changes in this discussion. ' And, in fact, you can't even say, 'Give me an update every time this page is changed, ' which is a perennial source of annoyance.

But on the other hand, the resulting gap in attention could cause the moderator to miss out on key developments in the conversation, and thereby lose an opportunity to intervene. P6 explains this dilemma:
P6: I think I am okay at gauging if things are going to go pear-shaped, but do I always stick around to even find out if I am not interested in the topic? I may just move on and it blows up behind me. The hand grenade has gone off and I didn't even hear it because I've gone down the street.

We therefore find that proactive moderation practices are difficult to scale up manually, both because of the size of the platform itself and because monitoring conversations-a necessary step given the uncertainty of moderators' forecasts-is time-consuming and impractical. Algorithmic solutions have the potential to address both challenges: by using a forecasting algorithm rather than random discovery or other limited methods to more efficiently identify conversations that are at risk, and by automatically monitoring them for relevant changes as opposed to requiring manual, repeated checks. This can potentially help moderators engage in proactive monitoring at a larger scale and dedicate more time to addressing potential issues.
How our prototype tool can address moderators' needs. Having identified the potential ways an algorithmic tool could help scale up the process of proactive discussion moderation, we now investigate the extent to which our prototype tool meets this potential, as well as aspects in which it might still fall short and thereby provide directions for future work. Concretely, we analyze moderators' feedback from their hands-on interaction with our prototype tool, with a focus on understanding which features moderators found the most useful and why, as well as what moderators found lacking in the prototype tool.
Moderators' feedback on the prototype tool suggests that information presented in the tool's Ranking View is helpful in discovering at-risk conversations, although individual moderators differed in their evaluation of exactly which pieces of information were most useful. For example, P4 reported that they would mainly use the CRAFT score to decide which conversations were worth monitoring:
P4: [For monitoring] I would just pick the ones with the highest score 'cause it seems to be somewhat accurate.

Meanwhile, other participants highlighted the score change representation (i.e., the colored arrows) as providing an easy way to get a sense of when a monitored conversation needs to be further inspected. P7 reports:
P7: I like the score change indicator. That is useful. From a cursory glance, it looks like if the score is going up, I would inspect it, if the score was going down, maybe it is not worth inspecting.

Altogether, five participants described how both the score and score change representation would be useful towards discovering these at-risk conversations.
However, moderators also identified several aspects of conversations that play into their existing intuitions about whether to monitor a conversation, but are not captured by the prototype tool. In particular, three participants mentioned wanting to know the ages of conversations, since their intuition is that if a conversation has been inactive for an extended period of time, it is unlikely to pick up again at all, much less turn uncivil. P7 expresses this view:
P7: That is very useful. That is probably all I would really need, too, except for the age of the conversation would also be another useful column because if the conversation was old, it wouldn't be worth my time to investigate it anyway but if I see the last comment was added within a day or two, I would then check it out. If it was more than, like, 2 or 3 days old, I mean, the conversation is probably dead anyway so it is not worth my time.

Additionally, three participants wanted to see a summary of the end of the conversation-calling attention to the way that conversations on a path to incivility often stray far from the ostensible conversation topic, and how knowing the actual topic of discussion at hand is crucial for moderators to plan interventions. Both these suggestions could be taken into consideration in a future iteration of this tool. 12 On the other hand, five participants reported wanting to see data about discussion participants such as their usernames or age on the platform or their prior activity-features that could raise moral concerns and whose inclusion should thus be carefully weighed, as we will discuss in Section 5.3.2.
The feedback discussed thus far suggests that moderators would find the Ranking View useful in identifying conversations that might be at risk. However, as discussed above, an important additional part of the proactive moderation workflow is continuing to monitor such conversations-such that the "grenade" does not "blow up" behind the moderator, to follow P6's metaphor. While we believe the comment-by-comment information given by the Conversation View could be helpful for this, 13 that would only be the case if this information aligns with how the moderator would intuitively judge the conversation.
To assess this, we selected several conversations from different positions in the ranking and invited the moderators to first examine them raw (i.e., without added information), allowing them to make intuitive judgments, and then to re-examine them in the Conversation View. Overall, moderators reported that the displayed per-comment CRAFT scores matched their own initial intuitive judgments. For instance, while looking at an example conversation predicted to be heading for future incivility, P2 describes:

P2: [The escalating comment] definitely took it to a whole new level-and then having the third person come in, right? So, I feel like [the conversation view] is backing up what intuitively I had said. [...] I feel like that's very much in line with my experience and makes a lot of sense.

The most notable exception is that some participants disagreed with the final CRAFT score of a conversation because they thought the conversation was unlikely to continue, and thus in a trivial sense unlikely to see any future incivility. P8 explains:
P8: I didn't think [the last comment has] that high [chance of seeing an antisocial reply]. I mean, in most cases this person [...] will rage quit. That's typically in my experience what happened. That's interesting. I didn't think it was going to be that high [of a score].
While the prototype tool may be correctly capturing the intuition that the conversation is very tense and likely to see future antisocial behavior, in practice this may result in a user leaving the platform rather than leaving an antisocial comment in the conversation. Given that editor retention is also an issue of concern to Wikipedia moderators [14], this finding suggests that a future iteration of the tool could focus on more outcomes than just antisocial comments, as these other outcomes are equally part of the proactive moderation workflow.
Taking together the design implications of a proactive moderation tool gleaned from this feedback and the fact that the underlying conversational forecasting algorithm generally agrees with moderators' intuitions, we conclude that it is at least feasible to support moderators in identifying and monitoring at-risk conversations. However, this conclusion does not necessarily imply that moderators would accept and use such a tool, nor does it guarantee that the tool would be used in an ethical way. P3 explains some hesitations:
P3: I think an algorithm could be a useful indicator for flagging, 'Hey, this seems like a topic or a conversation that might be a problem down the line. ' But on its own I don't think an algorithm could actually be trusted to make the decision. A nice little browser plugin that highlights a section for me that says, 'This discussion looks like it's getting heated, you might want to take a look at it, ' that's something I would trust. A browser plug in telling me or a pile of machine learning telling me, 'Block this person, they're making everything uncivil wherever they go, ' not as inclined to trust it.
As P3 exemplifies, moderators are rightfully hesitant to put their full faith in an algorithmic tool, preferring to use such a tool only under their own supervision. Therefore, despite the agreement between state-of-the-art forecasting methods and moderators' intuitions, these considerations motivate the need for follow-up work to conduct a large-scale user study that evaluates the performance of the tool and analyzes its usage patterns in a more systematic way.
Other Needs and Ethical Challenges. Thus far we have shown how our prototype tool addresses some proactive moderation needs, while leaving others unanswered. In the latter category, some gaps could be straightforwardly addressed by future iterations of the tool (e.g., considering the age of the conversation). Addressing others, however, could have significant ethical implications. We now consider such cases in more detail.
In particular, one commonly reported complication in moderating antisocial behavior is how to properly assign blame in multi-user interactions. P8 points out that oftentimes, instances of antisocial behavior might arise in response to prior provocation:
P8: [When considering taking moderation action] I have to read the whole [discussion] in order to understand where they're coming from. I'm looking to see how the other person [is] responding to them, because it's not really fair for me to stick to talk[ing] to one person if they're being egged on. So, I tried to see where it got derailed and how to bring it back to a proper discussion.

As P8 highlights, moderators do not want to treat a user being provoked and the user provoking them in the same way, and thus try to understand the root cause of a derailment in order to make a useful intervention and bring the discussion back on topic. This need is heightened by the phenomenon of Civil POV Pushers-users who keep their language within community guidelines but come with an intention to only push their point of view, often enraging other users. While the term 'Civil POV Pusher' is specific to Wikipedia, similar issues of bad-faith but superficially civil behavior could arise on any platform [38]. This makes it important for a moderator to know the history of a conversation to take action. P3 explains:
P3: You need to look back: is somebody provoking them into this? Because that's always a possibility. [...] We actually have a term kind of related to that. It's called civil POV pushing, for point of view, which is basically: you are here because you are on a mission. You want your opinion to be the correct one, but you are going to follow all the rules. You're not going to swear at people, but you are just going to keep bugging people or keep pressing your agenda until everybody else just gets tired of arguing with you. [...] In that case it might not be justified that someone cussed them out, but it's more that they were provoked in a way. And so it probably not be as appropriate to take such a harsh sanction against the person who flew off the handle.

As P3 highlights, moderators use their discretion when sanctioning incivility if they believe the uncivil user was provoked by a bad-faith "Civil POV pusher." This indicates the need to identify civil POV pushing in discussions, especially when it leads to that discussion derailing.
In light of the complications introduced by the phenomenon of civil POV pushing, several moderators we spoke to expressed a desire for a further level of foresight in a proactive moderation tool: not just identifying which conversations are at risk of derailing, but also why those conversations might derail-even down to the level of exactly which individual comment might trigger the derailment. Besides tying back to a general desire among moderators to address the root cause of antisocial behavior, such information could also have the practical benefit of helping identify civil POV pushers-a task that is currently acknowledged by Wikipedia as being one of the most challenging and patience-testing ones facing moderators. 14 However, we caution that algorithmic tools, at least in their current state, are not a good framework to support moderators in this need. The current forecasting models-including CRAFT, the one we use in our prototype tool-are based on statistical machine learning methods, which notably do not explicitly model causality. In addition to being a key technical limitation of such models, this property can also introduce ethical hazards, as rather than making predictions based on the actual causal mechanisms, models might make predictions based on correlations arising from bias in the training data, a problem that has been previously observed in models for antisocial content detection [54,62]. This limitation implies that, at least given current technologies, automated tools cannot be used to determine the root cause of conversational derailment, and by extension cannot be used to distinguish the provoker from the provoked. Therefore, despite the depth of the challenge moderators face, algorithmic tools based on the statistical machine learning framework may not be an appropriate choice to assist moderators for this task given current limitations.
These biases embedded into any moderation tool raise further questions about their use. For example, participants identified how they use information about the participants in a discussion to inform their perceptions of which conversations are at-risk of future incivility (Section 5.3.1), and further that this use is a potential point for algorithmic support. However, automating the use of the identities of discussion participants for identifying at-risk conversations has questionable ethics. P3 explains:
P3: I think one thing that actually could be potentially useful for this is, though it also gets into some questionable territory is: who is in the discussion. Either just a breakdown of the top five contributors to the discussion. Or even, if we want to go into more Big Brother territory, [a summary of] how this person's comments usually score.
The biases likely present within the underlying CRAFT conversational forecasting model, and any statistical machine learning model, suggest that profiling users based on how such an algorithm typically scores their comments is problematic, and may facilitate discrimination against certain users. Moreover, because statistical machine learning algorithms' errors tend to be most concentrated around marginalized groups [5,24], profiling users based on their CRAFT scores over time holds the potential to reinforce social inequities and further harm those already facing marginalization.
Therefore, we caution against the deployment of any proactive moderation tools without implementing design features to discourage these fraught uses, in addition to informing moderators about the abilities and limitations of the tool. As mentioned in Section 5.3.1, these dangers warrant closer investigation through a user study, to better understand how to best inform moderators about the tool's limitations and ethical hazards.
DISCUSSION
Motivated by the gap between the goals of moderation and the reality of reactive moderation strategies, this work seeks to deepen our understanding of the proactive moderation paradigm. Through a case study of moderation on Wikipedia Talk Pages, we uncover the workflow through which moderators proactively monitor and intervene in discussions they deem to be at risk of derailing into uncivil behavior. We identify a specific component of this workflow that lends itself to algorithmic assistance and test the feasibility of such assistance by observing how moderators in this community engage with a tool running live on their platform.
Based on our interviews with moderators of Wikipedia Talk Pages, we reveal a delicate balance between two moderation goals in this collaborative setting: maintaining a civil and welcoming environment, while trying not to alienate otherwise valuable content creators. Reactive moderation tends to put these goals at odds: imposing harsh sanctions against uncivil behavior from otherwise valuable contributors can alienate them, but leaving such behavior alone creates a less civil and less welcoming environment. Proactive moderation offers an alternative pathway, by preventing sanctionable actions from occurring in the first place. Moreover, whereas reactive interventions tend to be strict formal sanctions such as a block, proactive interventions better lend themselves to more nuanced, informal actions. In interviews, moderators discuss how they employ proactive moderation strategies to prevent incivility without needing to remove any content or alienate users.
The interviews further shed light on aspects of the proactive workflow that moderators currently find challenging, and which may therefore benefit from algorithmic assistance. Moderators reported that there are too many ongoing conversations for them to reasonably inspect and that they would benefit from tools that can help them identify those that are at risk, echoing suggestions from prior work [39,56]. Additionally, moderators indicated that once at-risk conversations are identified, monitoring them is cumbersome and time consuming, and expressed a need for tools that can support this process.
We design a prototype tool to examine how algorithmic assistance could help moderators overcome these difficulties. In their feedback, moderators indicated that they found the prototype tool's Ranking View, and in particular its CRAFT score and score change information, to be helpful in discovering potentially at-risk conversations. Furthermore, moderators' feedback on the Conversation View shows that CRAFT scores within a conversation tend to match moderators' intuitions regarding derailment, thus providing a helpful visual aid for monitoring at-risk conversations. Overall, these findings suggest that using an algorithmic tool to quickly identify candidate conversations for monitoring could improve the scalability of proactive moderation by mitigating the need for labor-intensive manual searching and tracking of Talk Page discussions.
These findings motivate further steps towards developing and testing technology for assisting proactive moderation:
Quantitative analysis. While we have observed our prototype tool in the hands of moderators during the course of each hour-long interview, a full quantitative analysis of the usage and utility of the tool would require observations of its use over a longer time period, running on a broad selection of discussions. Such larger-scale studies need to be set up with care, potentially using a sandbox approach [8], avoiding the disruption of the broader Wikipedia environment.
Potential misuses and ethical considerations. Any new technology that engages so directly with human communication requires thorough ethical scrutiny. We have argued (Section 2) that existing reactive and pre-screening moderation frameworks come with a number of ethical downsides, and that this justifies at least looking into alternative possibilities such as our proposal of algorithmically-assisted proactive moderation. Even so, we must remain conscious that our proposed methodology still falls under the broad umbrella of automated approaches to moderation, a subject area that in general raises hard moral questions [24]. Though we have already identified and discussed some ethical considerations that arose from the interviews (Section 5.3.2), this is by no means comprehensive. At a high level, further moral questions may arise from two sources. The first is technological: while the flaws of toxicity-classification algorithms that underpin reactive moderation tools are now well-studied and documented [15,17], conversational forecasting algorithms like CRAFT are a much newer technology and have yet to receive similar levels of scrutiny. The second is social: though human moderators already engage in proactive moderation on a small scale, if tools like ours succeed in their goal of scaling up such practices, there may be unforeseen consequences, as past work has demonstrated in other situations where human moderators augmented their existing practices with automated tools [29].
The ethical considerations listed above provide further inspiration for the design of a follow-up study. Such work should, in addition to evaluating the technical merits of algorithmically assisted proactive moderation, explore how to effectively inform moderators about the limitations and dangers of such algorithmic tools. Another important step is error analysis: the proactive moderation paradigm is more prone to error than the reactive moderation paradigm, as predicting the future is inherently harder than detecting something that already happened. These errors can be more consequential in the proactive paradigm-a proactive intervention based on a false positive prediction could in the worst case shut down a perfectly civil conversation before it even has a chance to progress, with the potential for a chilling effect on speech if these errors are common. As such, future work needs to evaluate and characterize the types of errors made by forecasting algorithms, and observe how moderators might be influenced by these errors. A particular focus should be placed on transparency and explainability to ensure that moderators can better understand the suggestions of the algorithms and identify erroneous ones.
Other domains and additional uses. While our focus has been on the collaborative domain of Wikipedia Talk Pages, future work should investigate the proactive moderation paradigm in other settings, such as Reddit and Facebook Groups. Even though our methodology can be easily translated to such domains, we do not expect the needs and dispositions of the moderators to be entirely echoed on other platforms, and thus the design of the tool might need to be adapted. In particular, prior studies have revealed that Reddit moderators might be more hands-off and thus less likely to engage in proactive strategies [59].
Finally, our findings regarding how moderators can benefit from predictions about the future of a conversation invite the question of what other ways this information can benefit the broader community. For example, future work could investigate the potential of empowering regular users-that is, the participants within online discussions-with this information through a user-facing tool. By demonstrating the benefits of proactive moderation, and showing the feasibility of algorithmic assistance in this process, our present work lays a strong foundation for such future work, which constitutes a promising new direction in research on online content moderation.
- Q: How much time do you spend moderating each day? Each week?
  * First ask about time spent as an administrator, then ask about moderation.
- Q: How satisfied are you with your current moderation practices? Do you see room for improvement?
- Q: When doing your job as a moderator, would you rather:
  * (a) only [remove/take moderation action] very flagrant rule violations, and potentially miss some rule-breaking comments, or
  * (b) [remove/take moderation action] all comments that could be rule violations, potentially [remove/take moderation action] some comments that don't deserve to be.

Topic 2: Potential Use of Conversational Forecasting

• Understanding what moderators would do without time constraints:
  - Q: If you had more time for your job as a moderator, what actions would you want to do?
  - Q: If you had more time for your job as a moderator, when you make a moderation decision about a comment, would you read more of the context around the comment to inform your decision?
• Can moderators tell if a conversation is going awry? Can anyone?
  - Explanation: Here is some terminology that we will use for the rest of the interview:
    * We'll say that a comment is civil if it follows all the rules of your community, and that it is uncivil if it violates a community rule.
    * We will also say that a conversation eventually derails if it is civil right now, but in the future an uncivil comment gets posted to the conversation.
  - Q: Given a civil conversation, do you think it is possible to foretell if a conversation will eventually derail into uncivil comments?
  - Q: Do you think you yourself are able to do this prediction?
    * If yes:
      · Q: Roughly how often do you think your prediction would be correct? That is, can you estimate what portion of the conversations you think will derail actually do end up derailing?
      · Q: What clues from a conversation do you use to inform your prediction?
  - Q: Do you think other moderators would be able to do this type of prediction?
  - Q: Do you think an algorithm might be able to do this type of prediction?
    * Q: Do you think it would be better or worse than humans?
• Monitoring derailing conversations:
  - Q: Assume you would know for sure that an ongoing conversation will turn uncivil in the future. Would you like to monitor new comments that are posted in this conversation?
  - Q: Now consider a more realistic scenario, where you cannot know for sure what the future of a conversation will be. Now, say we have a conversation that is predicted to derail; we will go through various levels of confidence in this prediction, and I want you to tell me if you would want to monitor new comments in the conversation for each level of prediction confidence.
    * Would you want to monitor new comments if you had low certainty in the prediction (i.e., 20% of the conversations that are predicted to derail will eventually actually end up derailing)?
    * Would you want to monitor new comments if you had 50-50 certainty in the prediction (i.e., 50% of the conversations that are predicted to derail will eventually actually end up derailing)?
    * Would you want to monitor new comments if you had high certainty in the prediction (i.e., 80% of the conversations that are predicted to derail will eventually actually end up derailing)?
• Taking proactive steps for derailing conversations
  - Q: Assuming you would know for sure that a (currently civil) conversation will turn uncivil and violate the rules of the community, what proactive steps do you, as a moderator, see yourself taking in order to prevent uncivil behavior (if any)?
  - Q: Now-as before-consider a more realistic scenario, where you cannot know for sure what the future of a conversation will be. Now, say we have a conversation that is predicted to derail; we will go through various levels of confidence in this prediction, and I want you to tell me which of the proactive steps you just mentioned you would still take for each level of prediction confidence.
    * What proactive steps would you take if you had low certainty in the prediction (i.e., 20% of the conversations that are predicted to derail will eventually actually end up derailing)?
    * What proactive steps would you take if you had 50-50 certainty in the prediction (i.e., 50% of the conversations that are predicted to derail will eventually actually end up derailing)?
    * What proactive steps would you take if you had high certainty in the prediction (i.e., 80% of the conversations that are predicted to derail will eventually actually end up derailing)?
  - Q: Have you ever taken any of these proactive steps in the past?
Topic 3: Analyzing a Mockup Conversation
[The participant is shown a conversation from their community in the Conversation View, with the CRAFT score annotations removed.]
• Q: Do you think any comments in this conversation are uncivil?
• Q: How likely do you think this conversation is to eventually derail into uncivil behavior (breaking the rules of the community)? What made you think this way (point to specific behaviors)?
• Q: Would you want to monitor new comments in this conversation?
• Q: Would you consider taking any proactive steps to prevent uncivil behavior?
[The participant is shown the CRAFT scores for this conversation.]
• Ask the same questions again.
Topic 4: Analyzing a Mockup Ranking
[The moderator is shown a ranking of conversations from their community in the Ranking View.]
• Q: Which conversations do you think would be worth monitoring for uncivil behavior?
• Q: On the main page for the listing, how relevant is the information displayed about each thread?
• Q: What other information would you find useful in deciding whether to inspect or monitor a conversation?
Fig. 1. Three types of moderation paradigms exemplified in the context of a conversation between two Wikipedia editors that eventually derails into a personal attack (orange).
Fig. 2. The Ranking View of our prototype tool, showing a list of live conversations on Talk Pages, sorted by their predicted risk of derailing into antisocial behavior.
Fig. 3. The Conversation View of our prototype tool, showing a conversation with CRAFT scores alongside each comment. Each score represents the predicted risk of derailment at the time the corresponding comment was posted (taking into account the entire preceding context).
P6: [When I find a conversation headed downhill] I would not really care about the threads as having the thing go on, I'd care about the article and the environment of Wikipedia. I think those are the two things that I care about.
P3: From time to time I do see a discussion I think that I want to monitor, and I'm like 'Yeah, I probably should be keeping an eye on this.' [...] I might leave a tab open on it and come back to it just in case.
P2: [The escalating comment] definitely took it to a whole new level-and then having the third person come in, right? So, I feel like [the conversation view] is backing up what intuitively I had said. [...] I feel like that's very much in line with my experience and makes a lot of sense.
Topic 2: Potential Use of Conversational Forecasting
• Understanding what moderators would do without time constraints:
- Q: If you had more time for your job as a moderator, what actions would you want to do?
- Q: If you had more time for your job as a moderator, when you make a moderation decision about a comment, would you read more of the context around the comment to inform your decision?
• Can moderators tell if a conversation is going awry? Can anyone?
https://www.perspectiveapi.com
https://help.nytimes.com/hc/en-us/articles/115014792387-Comments
See the Wikipedia talk page guidelines: https://en.wikipedia.org/wiki/Wikipedia:Talk_page
4 https://en.wikipedia.org/wiki/Wikipedia:Administrators
5 In addition, the community grants an elected committee of arbitrators even broader powers to impose binding resolutions in order to resolve particularly severe disputes on Wikipedia, including but not limited to disputes in discussions (https://en.wikipedia.org/wiki/Wikipedia:Arbitration).
We use password protection to avoid any potential misuses of this technology.
7 A link to the Video Figure can be found at https://www.cs.cornell.edu/~cristian/Proactive_Moderation.html.
8 https://ores.wikimedia.org/v3/scores/enwiki/?model_info
9 Provided through the ConvoKit package (https://convokit.cornell.edu) [11].
10 For this project, we choose pages that we reasoned are likely to have conflict and need moderation: Barack_Obama, Bernie_Sanders, Coronavirus_disease_2019, COVID-19_pandemic, Donald_Trump, Joe_Biden, Kim_Jong-un, and Global_warming.
From the Wikipedia:Sanctions page: "Sanctions are restrictions on editing Wikipedia that are applied to users or topic areas by the Wikipedia community and the Arbitration Committee in order to resolve disputes and curtail disruptive behaviour. " (https://en.wikipedia.org/wiki/Wikipedia:Sanctions)
It should be noted that the prototype tool already only includes "live" conversations and thus probably excludes most, if not all, "dead" conversations from the ranking, a fact that is not made apparent to the moderators. This issue notwithstanding, it is possible that information about the age of conversations would be useful to moderators even for more recently active conversations, so such a feature is still worth considering for future development.
In addition to just providing an augmented interface to follow the unfolding conversation, in a future iteration of the tool we can envision additional affordances, such as allowing the moderator to request notifications based on specific CRAFT thresholds.
https://en.wikipedia.org/wiki/Wikipedia:Civil_POV_pushing
ACKNOWLEDGEMENTS
We would like to thank Lillian Lee, Cecelia Madsen, the 2021-2022 cohort of fellows at the Center for Advanced Study in the Behavioral Sciences at Stanford, and all the reviewers for the enlightening discussions and helpful suggestions. We additionally recognize everyone who helped with the implementation of the prototype tool, particularly Lucas Van Bramer and Oscar So for their contributions to the codebase, Todd Cullen for his help in setting up the backend server configuration, and Caleb Chiam, Liye Fu, Khonzoda Umarova, and Justine Zhang for their extensive testing and generous feedback. Finally, we are grateful to all the Wikipedia Administrators who participated in our interviews, as well as to Daniel Glus who guided our efforts by offering key starting insights into the Wikipedia moderation workflow, and who kickstarted the recruitment process by connecting us to the broader community. This research was supported in part by an NSF CAREER award IIS-1750615; Jonathan P. Chang was supported in part by a fellowship with the Cornell Center for Social Sciences and Cristian Danescu-Niculescu-Mizil was supported in part by fellowships with the Cornell Center for Social Sciences and with the Center for Advanced Study in the Behavioral Sciences at Stanford.

A INTERVIEW QUESTIONS
This appendix shows the general outline we followed for all moderator interviews. Note that this only served as a general guide; as the interview process is semi-structured we let the conversation flow naturally, so the exact order and wording of questions varied in practice.

Topic 1: Current Discussion Moderation Practices
• Understanding comment removal practices:
- Q: How do you select comments to inspect for incivility and community rule violations?
- Q: Do you ever proactively monitor ongoing conversations that you consider to be at risk of derailing into uncivil behavior?
- Q: Say that you have a potentially problematic comment. Please describe your typical process for determining whether or not this comment needs moderation action.
  * (Optional) Q: Can you think of a specific example of a comment you took action on, and describe the process of determining whether or not that comment needed moderation action?
• Understanding how moderators use context:
Yavuz Akbulut, Yusuf Levent Sahin, and Bahadir Eristi. 2010. Cyberbullying Victimization among Turkish Online Social Utility Members. Educational Technology & Society 13, 4 (Oct. 2010).
Ofer Arazy, Lisa Yeo, and Oded Nov. 2013. Stay on the Wikipedia Task: When Task-related Disagreements Slip Into Personal and Procedural Conflicts. Journal of the American Society for Information Science and Technology 64, 8 (Aug. 2013).
Matt Billings and Leon A. Watts. 2010. Understanding Dispute Resolution Online: Using Text to Reflect Personal and Substantive Issues in Conflict. In Proceedings of CHI.
Lindsay Blackwell, Tianying Chen, Sarita Schoenebeck, and Cliff Lampe. 2018. When Online Harassment Is Perceived as Justified. In Proceedings of ICWSM.
Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of FAccT.
Jie Cai and Donghee Yvette Wohn. 2019. What Are Effective Strategies of Handling Harassment on Twitch? Users' Perspectives. In Proceedings of CSCW.
Stevie Chancellor, Jessica Annette Pater, Trustin Clear, Eric Gilbert, and Munmun De Choudhury. 2016. #thyghgapp: Instagram Content Moderation and Lexical Variation in Pro-Eating Disorder Communities. In Proceedings of CSCW.
Eshwar Chandrasekharan, Chaitrali Gandhi, Matthew Wortley Mustelier, and Eric Gilbert. 2019. Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators. In Proceedings of CSCW.
Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. 2017. You Can't Stay Here: The Efficacy of Reddit's 2015 Ban Examined Through Hate Speech. In Proceedings of CSCW.
Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, and Eric Gilbert. 2018. The Internet's Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales. In Proceedings of CSCW.
Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Wang, Justine Zhang, and Cristian Danescu-Niculescu-Mizil. 2020. ConvoKit: A Toolkit for the Analysis of Conversations. In Proceedings of SIGDIAL.
Jonathan P. Chang and Cristian Danescu-Niculescu-Mizil. 2019. Trajectories of Blocked Community Members: Redemption, Recidivism and Departure. In Proceedings of WWW.
Jonathan P. Chang and Cristian Danescu-Niculescu-Mizil. 2019. Trouble on the Horizon: Forecasting the Derailment of Online Conversations as They Develop. In Proceedings of EMNLP.
Benjamin Collier and Julia Bear. 2012. Conflict, Criticism, or Confidence: An Empirical Examination of the Gender Gap in Wikipedia Contributions. In Proceedings of CSCW.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of ICWSM.
Bryan Dosono and Bryan Semaan. 2019. Moderation Practices as Emotional Labor in Sustaining Online Communities: The Case of AAPI Identity Work on Reddit. In Proceedings of CHI.
Natasha Duarte and Emma Llansó. 2018. Mixed Messages? The Limits of Automated Social Media Content Analysis. In Proceedings of FAccT.
Dmitry Epstein and Gilly Leshed. 2020. The Magic Sauce: Practices of Facilitation in Online Policy Deliberation. Journal of Deliberative Democracy 12, 1 (May 2020).
Thomas Erickson and Wendy A. Kellogg. 2000. Social Translucence: An Approach to Designing Systems That Support Social Processes. ACM Transactions on Computer-Human Interaction 7, 1 (March 2000).
Björn Gambäck and Utpal Kumar Sikdar. 2017. Using Convolutional Neural Networks to Classify Hate-Speech. In Proceedings of ALW.
R. Stuart Geiger and David Ribes. 2010. The Work of Sustaining Order in Wikipedia: The Banning of a Vandal. In Proceedings of CSCW.
Sarah A. Gilbert. 2020. "I Run the World's Largest Historical Outreach Project and It's on a Cesspool of a Website." Moderating a Public Scholarship Site on Reddit: A Case Study of r/AskHistorians. In Proceedings of CSCW.
Tarleton Gillespie. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, New Haven.
Tarleton Gillespie. 2020. Content Moderation, AI, and the Question of Scale. Big Data & Society (July 2020).
Tarleton Gillespie, Patricia Aufderheide, Elinor Carmi, Ysabel Gerrard, Robert Gorwa, Ariadna Matamoros-Fernández, Sarah T. Roberts, Aram Sinnreich, and Sarah Myers West. 2020. Expanding the Debate about Content Moderation: Scholarly Research Agendas for the Coming Policy Debates. Internet Policy Review 9, 4 (Oct. 2020).
James Grimmelmann. 2015. The Virtues of Moderation. Yale Journal of Law and Technology 17, 1 (Sept. 2015).
David Gurzick, Kevin F. White, Wayne G. Lutters, and Lee Boot. 2009. A View from Mount Olympus: The Impact of Activity Tracking Tools on the Character and Practice of Moderation. In Proceedings of GROUP.
Aaron Halfaker and R. Stuart Geiger. 2020. ORES: Lowering Barriers with Participatory Machine Learning in Wikipedia. In Proceedings of CSCW.
Aaron Halfaker, R. Stuart Geiger, Jonathan T. Morgan, and John Riedl. 2013. The Rise and Decline of an Open Collaboration System. American Behavioral Scientist 57, 5 (May 2013).
Aaron Halfaker, Aniket Kittur, and John Riedl. 2011. Don't Bite the Newbies: How Reverts Affect the Quantity and Quality of Wikipedia Work. In Proceedings of WikiSym.
Aaron Halfaker, Bryan Song, D. Alex Stuart, Aniket Kittur, and John Riedl. 2011. NICE: Social Translucence Through UI Intervention. In Proceedings of WikiSym.
Christophe Henner and Maria Sefidari. 2016. Wikimedia Foundation Board on Healthy Wikimedia Community Culture, Inclusivity, and Safe Spaces - Wikimedia Blog. https://blog.wikimedia.org/2016/12/08/board-culture-inclusivity-safe-spaces/
Krithika Jagannath, Katie Salen, and Petr Slovàk. 2020. "(We) Can Talk It Out...": Designing for Promoting Conflict-Resolution Skills in Youth on a Moderated Minecraft Server. In Proceedings of CSCW.
Shagun Jhaver, Darren Scott Appling, Eric Gilbert, and Amy Bruckman. 2019. "Did You Suspect the Post Would Be Removed?": Understanding User Reactions to Content Removals on Reddit. In Proceedings of CSCW.
Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy Bruckman. 2019. Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator. ACM Transactions on Computer-Human Interaction 26, 5 (July 2019).
Shagun Jhaver, Christian Boylston, Diyi Yang, and Amy Bruckman. 2021. Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter. In Proceedings of CSCW.
Shagun Jhaver, Sucheta Ghoshal, Amy Bruckman, and Eric Gilbert. 2018. Online Harassment and Content Moderation: The Case of Blocklists. ACM Transactions on Computer-Human Interaction 25, 2 (March 2018).
Amy Johnson. 2017. The Multiple Harms of Sea Lions. In Perspectives on Harmful Speech Online. Berkman Klein Center for Internet & Society.
David Jurgens, Libby Hemphill, and Eshwar Chandrasekharan. 2019. A Just and Comprehensive Strategy for Using NLP to Address Online Abuse. In Proceedings of ACL.
Sara Kiesler, Robert Kraut, Paul Resnick, and Aniket Kittur. 2012. Regulating Behavior in Online Communities. In Building Successful Online Communities: Evidence-Based Social Design, Paul Resnick and Robert Kraut (Eds.). MIT Press.
Aniket Kittur and Robert E. Kraut. 2008. Harnessing the Wisdom of Crowds in Wikipedia: Quality Through Coordination. In Proceedings of CSCW.
Aniket Kittur, Bongwon Suh, Bryan A. Pendleton, and Ed H. Chi. 2007. He Says, She Says: Conflict and Coordination in Wikipedia. In Proceedings of CHI.
Travis Kriplean, Jonathan Morgan, Deen Freelon, Alan Borning, and Lance Bennett. 2012. Supporting Reflective Public Thought with Considerit. In Proceedings of CSCW.
Travis Kriplean, Michael Toomim, Jonathan Morgan, Alan Borning, and Andrew Ko. 2012. Is This What You Meant?: Promoting Listening on the Web with Reflect. In Proceedings of CHI.
Cliff Lampe and Paul Resnick. 2004. Slash(Dot) and Burn: Distributed Moderation in a Large Online Conversation Space. In Proceedings of CHI.
Ping Liu, Joshua Guberman, Libby Hemphill, and Aron Culotta. 2018. Forecasting the Presence and Intensity of Hostility on Instagram Using Linguistic and Social Features. In Proceedings of ICWSM.
Claudia (Claudia Wai Yu) Lo. 2018. When All You Have Is a Banhammer: The Social and Communicative Work of Volunteer Moderators. Thesis. Massachusetts Institute of Technology.
J. Nathan Matias. 2019. The Civic Labor of Volunteer Moderators Online. Social Media + Society 5, 2 (April 2019).
Aiden R. McGillicuddy, Jean-Gregoire Bernard, and Jocelyn Ann Cranefield. 2016. Controlling Bad Behavior in Online Communities: An Examination of Moderation Work. In Proceedings of ICIS.
Jonathan T. Morgan and Aaron Halfaker. 2018. Evaluating the Impact of the Wikipedia Teahouse on Newcomer Socialization and Retention. In Proceedings of OpenSym.
Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive Language Detection in Online User Content. In Proceedings of WWW.
Deokgun Park, Simranjit Sachar, Nicholas Diakopoulos, and Niklas Elmqvist. 2016. Supporting Comment Moderators in Identifying High Quality Online News Comments. In Proceedings of CHI.
Juliana Raskauskas and Ann D. Stoltz. 2007. Involvement in Traditional and Electronic Bullying Among Adolescents. Developmental Psychology 43, 3 (May 2007).
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of ACL.
Joseph Seering. 2020. Reconsidering Self-Moderation: The Role of Research in Supporting Community-Based Models for Online Content Moderation. In Proceedings of CSCW.
Joseph Seering, Tianmi Fang, Luca Damasco, Mianhong 'Cherie' Chen, Likang Sun, and Geoff Kaufman. 2019. Designing User Interface Elements to Improve the Quality and Civility of Discourse in Online Commenting Behaviors. In Proceedings of CHI.
Joseph Seering, Geoff Kaufman, and Stevie Chancellor. 2020. Metaphors in Moderation. New Media & Society (Oct. 2020).
Joseph Seering, Robert Kraut, and Laura Dabbish. 2017. Shaping Pro and Anti-Social Behavior on Twitch Through Moderation and Example-Setting. In Proceedings of CSCW.
Joseph Seering, Tony Wang, Jina Yoon, and Geoff Kaufman. 2019. Moderator Engagement and Community Development in the Age of Algorithms. New Media & Society 21, 7 (July 2019).
Kumar Srinivasan, Cristian Danescu-Niculescu-Mizil, Lillian Lee, and Chenhao Tan. 2019. Content Removal as a Moderation Strategy: Compliance and Other Outcomes in the ChangeMyView Community. In Proceedings of CSCW.
Samuel Hardman Taylor, Dominic DiFranzo, Yoon Hyung Choi, Shruti Sannon, and Natalya N. Bazarova. 2019. Accountability and Empathy by Design: Encouraging Bystander Intervention to Cyberbullying on Social Media. In Proceedings of CSCW.
Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: The Problem of Biased Datasets. In Proceedings of NAACL.
Wikimedia Support and Safety Team. 2015. Harassment Survey. https://upload.wikimedia.org/wikipedia/commons/5/52/Harassment_Survey_2015_-_Results_Report.pdf
Donghee Yvette Wohn. 2019. Volunteer Moderators in Twitch Micro Communities: How They Get Involved, the Roles They Play, and the Emotional Labor They Experience. In Proceedings of CHI.
Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex Machina: Personal Attacks Seen at Scale. In Proceedings of WWW.
Justine Zhang, Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil, Lucas Dixon, Nithum Thain, Yiqing Hua, and Dario Taraborelli. 2018. Conversations Gone Awry: Detecting Early Signs of Conversational Failure. In Proceedings of ACL.
Lei (Nico) Zheng, Christopher M. Albano, Neev M. Vora, Feng Mai, and Jeffrey V. Nickerson. 2019. The Roles Bots Play in Wikipedia. In Proceedings of CSCW.
| [] |
[
"Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation",
"Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation"
] | [
"Bryan Eikema b.eikema@uva.nl \nUniversity of Amsterdam\nUniversity of Amsterdam\n\n",
"Wilker Aziz w.aziz@uva.nl \nUniversity of Amsterdam\nUniversity of Amsterdam\n\n"
] | [
"University of Amsterdam\nUniversity of Amsterdam\n",
"University of Amsterdam\nUniversity of Amsterdam\n"
] | [
"Proceedings of the 28th International Conference on Computational Linguistics"
] | Recent studies have revealed a number of pathologies of neural machine translation (NMT) systems. Hypotheses explaining these mostly suggest there is something fundamentally wrong with NMT as a model or its training algorithm, maximum likelihood estimation (MLE). Most of this evidence was gathered using maximum a posteriori (MAP) decoding, a decision rule aimed at identifying the highest-scoring translation, i.e. the mode. We argue that the evidence corroborates the inadequacy of MAP decoding more than casts doubt on the model and its training algorithm. In this work, we show that translation distributions do reproduce various statistics of the data well, but that beam search strays from such statistics. We show that some of the known pathologies and biases of NMT are due to MAP decoding and not to NMT's statistical assumptions nor MLE. In particular, we show that the most likely translations under the model accumulate so little probability mass that the mode can be considered essentially arbitrary. We therefore advocate for the use of decision rules that take into account the translation distribution holistically. We show that an approximation to minimum Bayes risk decoding gives competitive results confirming that NMT models do capture important aspects of translation well in expectation.This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http:// creativecommons.org/licenses/by/4.0/. | 10.18653/v1/2020.coling-main.398 | [
"https://www.aclweb.org/anthology/2020.coling-main.398.pdf"
] | 218,763,425 | 2005.10283 | 07540ac76219a80719d28b74f40071b2279fc635 |
Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation
Bryan Eikema b.eikema@uva.nl
University of Amsterdam
University of Amsterdam
Wilker Aziz w.aziz@uva.nl
University of Amsterdam
University of Amsterdam
Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation
Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020.
Recent studies have revealed a number of pathologies of neural machine translation (NMT) systems. Hypotheses explaining these mostly suggest there is something fundamentally wrong with NMT as a model or its training algorithm, maximum likelihood estimation (MLE). Most of this evidence was gathered using maximum a posteriori (MAP) decoding, a decision rule aimed at identifying the highest-scoring translation, i.e. the mode. We argue that the evidence corroborates the inadequacy of MAP decoding more than casts doubt on the model and its training algorithm. In this work, we show that translation distributions do reproduce various statistics of the data well, but that beam search strays from such statistics. We show that some of the known pathologies and biases of NMT are due to MAP decoding and not to NMT's statistical assumptions nor MLE. In particular, we show that the most likely translations under the model accumulate so little probability mass that the mode can be considered essentially arbitrary. We therefore advocate for the use of decision rules that take into account the translation distribution holistically. We show that an approximation to minimum Bayes risk decoding gives competitive results confirming that NMT models do capture important aspects of translation well in expectation.
This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/.
Introduction
Recent findings in neural machine translation (NMT) suggest that modern translation systems have some serious flaws. This is based on observations such as: i) translations produced via beam search typically under-estimate sequence length (Sountsov and Sarawagi, 2016;Koehn and Knowles, 2017), the length bias; ii) translation quality generally deteriorates with better approximate search (Koehn and Knowles, 2017;Murray and Chiang, 2018;Ott et al., 2018;Kumar and Sarawagi, 2019), the beam search curse; iii) the true most likely translation under the model (i.e., the mode of the distribution) is empty in many cases (Stahlberg and Byrne, 2019) and a general negative correlation exists between likelihood and quality beyond a certain likelihood value (Ott et al., 2018); we call this the inadequacy of the mode problem.
A number of hypotheses have been formulated to explain these observations. They mostly suggest there is something fundamentally wrong with NMT as a model (i.e., its factorisation as a product of locally normalised distributions) or its most popular training algorithm (i.e., regularised maximum likelihood estimation, MLE for short). These explanations make an unspoken assumption, namely, that identifying the mode of the distribution, also referred to as maximum a posteriori (MAP) decoding (Smith, 2011), is in some sense the obvious decision rule for predictions. While this assumption makes intuitive sense and works well in unstructured classification problems, it is less justified in NMT, where oftentimes the most likely translations together account for very little probability mass, a claim we shall defend conceptually and provide evidence for in experiments. Unless the translation distribution is extremely peaked about the mode for every plausible input, criticising the model in terms of properties of its mode can at best say something about the adequacy of MAP decoding. Unfortunately, as previous research has pointed out, this is seldom the case (Ott et al., 2018). Thus, pathologies about the mode cannot be unambiguously ascribed to NMT as a model nor to MLE, and inadequacies about the mode cannot rule out the possibility that the model captures important aspects of translation well in expectation.
In this work, we criticise NMT models as probability distributions estimated via MLE in various settings: varying language pairs, amount of training data, and test domain. We observe that the induced probability distributions represent statistics of the data well in expectation, and that some length and lexical biases are introduced by approximate MAP decoding. We demonstrate that beam search outputs are rare events, particularly so when test data stray from the training domain. The empty string, shown to often be the true mode (Stahlberg and Byrne, 2019), too is an infrequent event. Finally, we show that samples obtained by following the model's own generative story are of reasonable quality, which suggests we should base decisions on statistics gathered from the distribution holistically. One such decision rule is minimum Bayes risk (MBR) decoding (Goel and Byrne, 2000;Kumar and Byrne, 2004). We show that an approximation to MBR performs rather well, especially so when models are more uncertain.
To summarise: we argue that i) MAP decoding is not well-suited as a decision rule for MLE-trained NMT; we also show that ii) pathologies and biases observed in NMT are not necessarily inherent to NMT as a model or its training objective, rather, MAP decoding is at least partially responsible for many of these pathologies and biases; finally, we demonstrate that iii) a straight-forward approximation to a sampling-based decision rule known as minimum Bayes risk decoding gives good results, showing promise for research into decision rules that take into account the distribution holistically.
Observed Pathologies in NMT
Many studies have found that NMT suffers from a length bias: NMT underestimates length which hurts the adequacy of translations. Cho et al. (2014a) already demonstrate that NMT systematically degrades in performance for longer sequences. Sountsov and Sarawagi (2016) identify the same bias in a chat suggestion task and argue that sequence to sequence models underestimate the margin between correct and incorrect sequences due to local normalisation. Later studies have also confirmed the existence of this bias in NMT (Koehn and Knowles, 2017;Stahlberg and Byrne, 2019;Kumar and Sarawagi, 2019).
Notably, all these studies employ beam search decoding. In fact, some studies link the length bias to the beam search curse: the observation that large beam sizes hurt performance in NMT (Koehn and Knowles, 2017). Sountsov and Sarawagi (2016) already note that larger beam sizes exacerbate the length bias. Later studies have confirmed this connection (Blain et al., 2017;Murray and Chiang, 2018;Yang et al., 2018;Kumar and Sarawagi, 2019). Murray and Chiang (2018) attribute both problems to local normalisation which they claim introduces label bias (Lafferty et al., 2001) to NMT. Yang et al. (2018) show that likelihood negatively correlates with translation length. These findings suggest that the mode might suffer from length bias, likely thereby failing to sufficiently account for adequacy. In fact, Stahlberg and Byrne (2019) show that oftentimes the true mode is the empty sequence.
The connection with the length bias is not the only reason for the beam search curse. Ott et al. (2018) find that the presence of copies in the training data causes the model to assign too much probability mass to copies of the input, and that with larger beam sizes this copying behaviour becomes more frequent. Cohen and Beck (2019) show that translations obtained with larger beam sizes often consist of an unlikely prefix with an almost deterministic suffix and are of lower quality. In open-ended generation, Zhang et al. (2020) correlate model likelihood with human judgements for a fixed sequence length, thus eliminating any possible length bias issues. They find that likelihood generally correlates positively with human judgements, up until an inflection point, after which the correlation becomes negative. This observation has also been made in translation, with BLEU rather than human judgements (Ott et al., 2018). We call this general failure of the mode to represent good translations in NMT the inadequacy of the mode problem.
NMT and its Many Biases
MT systems are trained on sentence pairs drawn from a parallel corpus. Each pair consists of a sequence x in the source language and a sequence y in the target language. Most NMT models are conditional models (Cho et al., 2014b; Bahdanau et al., 2015; Vaswani et al., 2017), 1 that is, only the target sentence is given random treatment. Target words are drawn in sequence from a product of locally normalised Categorical distributions without Markov assumptions: $Y_j \mid \theta, x, y_{<j} \sim \mathrm{Cat}(f(x, y_{<j}; \theta))$. At each step, a neural network $f(\cdot; \theta)$ maps from the source sequence x and the prefix sequence $y_{<j}$ to the parameters of a Categorical distribution over the vocabulary of the target language. These models are typically trained via regularised maximum likelihood estimation, MLE for short, where we search for the parameter $\theta_{\text{MLE}}$ that assigns maximum (regularised) likelihood to a dataset of observations $\mathcal{D}$. A local optimum of the MLE objective can be found by stochastic gradient-based optimisation (Robbins and Monro, 1951; Bottou and Cun, 2004). For a trained model with parameters $\theta_{\text{MLE}}$ and a given input x, a translation is predicted by searching for the mode of the distribution: the sequence y that maximises $\log p(y \mid x, \theta_{\text{MLE}})$. This is a decision rule also known as maximum a posteriori (MAP) decoding (Smith, 2011). 2 Exact MAP decoding is intractable in NMT, and the beam search algorithm (Sutskever et al., 2014) is employed as a viable approximation.
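To make the locally normalised factorisation concrete, the sketch below scores a full translation by summing per-step log-probabilities; `step_log_probs` is a hypothetical stand-in for the network $f(x, y_{<j}; \theta)$ and is not part of the original paper's code.

```python
from typing import Callable, List

# A minimal sketch (not the authors' implementation) of how a locally normalised
# NMT model scores a full translation: log p(y | x, theta) is the sum of per-step
# Categorical log-probabilities, each conditioned on the source x and prefix y_{<j}.
# `step_log_probs` is a hypothetical stand-in for the network f(x, y_{<j}; theta);
# it is assumed to return a log-normalised distribution over the target vocabulary.

def sequence_log_prob(
    x: List[int],
    y: List[int],
    step_log_probs: Callable[[List[int], List[int]], List[float]],
) -> float:
    total = 0.0
    for j, token in enumerate(y):          # y is assumed to end with the EOS id
        log_p = step_log_probs(x, y[:j])   # next-token distribution given x, y_{<j}
        total += log_p[token]              # local normalisation: each step sums to 1
    return total

# MAP decoding amounts to searching for argmax_y of this score, which is
# intractable exactly; beam search keeps only the k highest-scoring prefixes.
```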
It has been said that due to certain design decisions NMT suffers from a number of biases. We review those biases here and then discuss in Section 4 one bias that has received very little attention and which, we argue, underlies many biases in NMT and explains some of the pathologies discussed in Section 2.
Exposure bias. MLE parameters are estimated conditioned on observations sampled from the training data. Clearly, those are not available at test time, when we search through the learnt distribution. This mismatch between training and test, known as exposure bias (Ranzato et al., 2016), has been linked to many of the pathologies of NMT and motivated modifications or alternatives to MLE aimed at exposing the model to its own predictions during training Ranzato et al., 2016;Shen et al., 2016;Wiseman and Rush, 2016;Zhang et al., 2019). While exposure bias has been a point of critique mostly against MLE, it has only been studied in the context of approximate MAP decoding. The use of MAP decoding and its approximations shifts the distribution of the generated translations away from data statistics (something we provide evidence for in later sections), thereby exacerbating exposure bias.
Non-admissible heuristic search bias. In beam search, partial translations are ranked in terms of loglikelihood without regards to (or with crude approximations of) their future score, which may lead to good translations being pruned too early. This corresponds to searching with a non-admissible heuristic (Hart et al., 1968), that is, a heuristic that may underestimate the likelihood of completing a translation. This biased search affects statistics of beam search outputs in unknown ways and may well account for some of the pathologies of Section 2, and has motivated variants of the algorithm aimed at comparing partial translations more fairly (Huang et al., 2017;Shu and Nakayama, 2018). This problem has also been studied in parsing literature, where it's known as imbalanced probability search bias (Stanojević and Steedman, 2020).
Label bias. Where a conditional model makes independence assumptions about its inputs (i.e., variables the model does not generate), local normalisation prevents the model from revising its decisions, a problem known as label bias (Bottou, 1991;Lafferty et al., 2001). This is a model specification problem which limits the distributions a model can represent (Andor et al., 2016). While this is the case in incremental parsing (Stern et al., 2017) and simultaneous translation (Gu et al., 2017), where inputs are incrementally available for conditioning, this is not the case in standard NMT (Sountsov and Sarawagi, 2016, Section 5), where inputs are available for conditioning in all generation steps. It is plausible that local normalisation might affect the kind of local optima we find in NMT, but that is orthogonal to label bias.
Biased Statistics and the Inadequacy of the Mode
In most NMT research, criticisms of the model are based on observations about the mode, or an approximation to it obtained using beam search. The mode, however, is not an unbiased summary of the probability distribution that the model learnt. That is, properties of the mode say little about properties of the learnt distribution (e.g., a short mode does not imply the model underestimates average sequence length). MAP decoding algorithms and their approximations bias the statistics by which we criticise NMT. They restrict our observations about the model to a single or a handful of outcomes which on their own can be rather rare. To gain insight about the model as a distribution, it seems more natural to use all of the information available to us, namely, all samples we can afford to collect, and search for frequent patterns in these samples. Evidence found that way better represents the model and its beliefs.
On top of that, the sample space of NMT is high-dimensional and highly structured. NMT models must distribute probability mass over a massive sample space (effectively unbounded). While most outcomes ought to be assigned negligible mass, for the total mass sums to 1, the outcomes with non-negligible mass might still be too many. The mode might only account for a tiny portion of the probability mass, and can actually be extremely unlikely under the learnt distribution. Using the mode for predictions makes intuitive sense in unstructured problems, where probability distributions are likely very peaked, and in models trained with large margin methods (Vapnik, 1998), since those optimise a decision boundary directly. With probability distributions that are very spread out, and where the mode represents only a tiny bit of probability mass, targeting at the mode for predictions is much less obvious, an argument that we shall reinforce with experimental results throughout this analysis. 3 At the core of our analysis is the concept of an unbiased sample from the model, which we obtain by ancestral sampling: iteratively sampling from distributions of the form Cat(f (x, y <j ; θ)), each time extending the generated prefix y <j with an unbiased draw, until the end-of-sequence symbol is generated. By drawing from the model's probability distribution, unlike what happens in MAP decoding, we are imitating the model's training procedure. Only we replace samples from the data by samples from the model, thus shedding light onto the model's fit. That is, if these samples do not reproduce statistics of the data, we have an instance of poor fit. 4 Crucially, ancestral sampling is not a pathfinding algorithm, thus the non-admissible heuristic search bias it not a concern. Ancestral sampling is not a decision rule either, thus returning a single sample as a prediction is not expected to outperform MAP decoding (or any other rule). Samples can be used to diagnose model fit, as we do in Section 6, and to approximate decision rules, as we do in Section 7.4. In sum, we argue that MAP decoding is a source of various problems and that it biases conclusions about NMT. Next, we provide empirical evidence for these claims.
Data & System
We train our systems on German-English (de-en), Sinhala-English (si-en), and Nepali-English (ne-en), in both directions. For German-English we use all available WMT'18 parallel data, except for Paracrawl, amounting to about 5.9 million sentence pairs, and train a Transformer base model (Vaswani et al., 2017). For Sinhala and Nepali, for which very little parallel data are available, we mimic the data and system setup of Guzmán et al. (2019). As we found that the data contained many duplicate sentence pairs, we removed duplicates, but left in those where only one side (source or target) of the data is duplicate to allow for paraphrases. For all language pairs, we do keep a portion of the training data (6,000 sentence pairs) separate as held-out data for the analysis. In this process we also removed any sentence that corresponded exactly to either the source or target side of a held-out sentence from the training data. To analyse performance outside the training domain, we use WMT's newstest2018 for German-English, and the FLORES datasets collected by Guzmán et al. (2019) for the low-resource pairs. Our analysis is focused on MLE-trained NMT systems. However, as Transformers are commonly trained with label smoothing (LS) (Szegedy et al., 2016), we do additionally report automatic quality assessments of beam search outputs on LS-trained systems.
Figure 1: A comparison using hierarchical Bayesian models of statistics extracted from beam search outputs, samples from the model and gold-standard references. We show the posterior density on the y-axis, and the mean Poisson rate (length) and agreement with training data (unigrams, bigrams, skip-bigrams) on the x-axis for each group and language pair.
Assessing the Fit of MLE-Trained NMT
We investigate the fit of the NMT models of Section 5 on a held-out portion of the training data. This allows us to criticise MLE without confounders such as domain shift. We will turn to data in the test domain (newstest2018, FLORES) in Section 7. We compare unbiased samples from the model to goldstandard references and analyse statistics of several aspects of the data. If the MLE solution is good, we would expect statistics of sampled data to closely match statistics of observed data.
We obtain statistics from reference translations, ancestral samples, and beam search outputs and model them using hierarchical Bayesian models. For each type of statistic, we formulate a joint model over these three groups and inspect the posterior distribution over the parameters of the analysis model. We also include statistics extracted from the training data in our analysis, and model the three test groups as a function of posterior inferences based on training data statistics. Our methodology follows that advocated by Gelman et al. (2013) and Blei (2014). In particular, we formulate separate hierarchical models to inspect length, lexical, and word order statistics: sequence length, unigram and bigram counts, and skip-bigram counts, respectively. 5 In Appendix A, we describe in detail all analysis models, inference procedures, and predictive checks that confirm their fit.
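As an illustration of the statistics being compared, the sketch below extracts unigram, bigram, and skip-bigram counts from tokenised sentences; the precise definition of skip-bigrams and the analysis models themselves are given in the paper's appendix, so this is only an assumed, common reading.

```python
from collections import Counter
from itertools import combinations
from typing import Iterable, List

# A sketch of the count statistics compared across groups, assuming tokenised
# sentences. "Skip-bigram" is taken here as any ordered pair of tokens from the
# same sentence, regardless of the gap between them (one common definition); the
# paper's exact analysis models are specified in its Appendix A.

def ngram_statistics(sentences: Iterable[List[str]]):
    unigrams, bigrams, skip_bigrams = Counter(), Counter(), Counter()
    for sent in sentences:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
        skip_bigrams.update(combinations(sent, 2))  # ordered pairs, any distance
    return unigrams, bigrams, skip_bigrams

# These counts, collected separately for references, ancestral samples and beam
# search outputs, are the kind of statistics fed into the hierarchical models.
```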
For length statistics, we look at the expected posterior Poisson rate for each group; each rate can be interpreted as that group's average sequence length. Ideally, the expected Poisson rates of predicted translations are close to those of gold-standard references. Figure 1 (top row) shows the inferred posterior distributions for all language pairs. We observe that samples generated by NMT capture length statistics reasonably well, overlapping a fair amount with the reference group. In almost all cases we observe that beam search outputs stray away from data statistics, usually resulting in shorter translations.
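For intuition only, the following is a deliberately simplified, conjugate version of the length analysis; the paper's actual hierarchical model (Appendix A) is richer and also conditions on training-data statistics, and the priors used here are placeholders.

```python
from typing import List, Tuple

# Simplified illustration: if sequence lengths in a group are modelled as
# Poisson(rate) with a conjugate Gamma(a, b) prior on the rate, the posterior is
# Gamma(a + sum(lengths), b + n) and its mean can be read as the group's average
# sequence length. The priors a and b are placeholders, not the paper's choices.

def posterior_poisson_rate(
    lengths: List[int], a: float = 1.0, b: float = 1.0
) -> Tuple[float, float, float]:
    a_post = a + sum(lengths)
    b_post = b + len(lengths)
    return a_post, b_post, a_post / b_post  # shape, rate, posterior mean

# Comparing the posterior means for references, samples and beam search outputs
# reproduces, in miniature, the comparison in the top row of Figure 1.
```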
For unigrams, bigrams, and skip-bigrams, we compare the posterior agreement with training data of each group (this is formalised in terms of a scalar concentration parameter whose posterior we can plot).
Higher values indicate a closer resemblance to training data statistics. For each statistic, the posterior distribution for gold-standard references gives an indication of ideal values of this agreement variable. Figure 1 (rows 2-4) show all posterior distributions. In most cases the gold-standard references agree most with the training data, as expected, followed by samples from the model, followed by beam search outputs. For nearly all statistics and language pairs beam search outputs show least agreement with the training data, even when samples from the model show similar agreement as references do. Whereas samples from the model do sometimes show less similarity than references, in most cases they are similar and thus lexical and word order statistics are captured reasonably well by the NMT model. Beam search on the other hand again strays from training data statistics, compared to samples from the model.
Examining the Translation Distribution
The NMT models of Section 5 specify complex distributions over an unbounded space of translations.
Here we examine properties of these distributions by inspecting translations in a large set of unbiased samples. To gain further insight we also analyse our models in the test domain (newstest2018, FLORES).
Number of Likely Translations
NMT, by the nature of its model specification, assigns probability mass to each and every possible sequence consisting of tokens in its vocabulary. Ideally, however, a well-trained NMT model assigns the bulk of its probability mass to good translations of the input sequence. We take 1,000 unbiased samples from the model for each input sequence and count the cumulative probability mass of the unique translations sampled. Figure 2 shows the average cumulative probability mass for all test sentences with 1 standard deviation around it, as well as the final cumulative probability values for each input sequence.
For the held-out data we observe that, on average, between 16.4% and 57.8% of the probability mass is covered. The large variance around the mean shows that in all language pairs we can find test sentences for which nearly all or barely any probability mass has been covered after 1,000 samples. That is, even after taking 1,000 samples, only about half of the probability space has been explored. The situation is much more extreme when translating data from the test domain (see bottom half of Figure 2). 6 Naturally, the NMT model is much more uncertain in this scenario, and this is very clear from the amount of probability mass that has been covered by 1,000 samples: on average, only between 0.2% and 0.9% for the low-resource pairs and between 6.9% and 9.1% for English-German of the probability space has been explored. This shows that the set of likely translations under the model is very large and the probability distribution over those sentences mostly quite flat, especially so in the test domain. In fact, if we look at each input sequence individually, we see that for 37.0% (en-de), 35.5% (de-en), 18.5% (en-ne), 15.7% (ne-en), 9.2% (en-si), and 3.3% (si-en) of them all 1,000 samples are unique. On the test domain data these numbers increase to 46.7% (en-de), 41.5% (de-en), 52.1% (en-ne), 86.8% (ne-en), 84.6% (en-si), and 87.3% (si-en). For these input sequences, the translation distributions learnt are so flat that in these 1,000 samples no single translation stands out over the others.
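A sketch of the coverage computation follows; `sample_fn` and `log_prob_fn` are hypothetical handles to a sampler and model scorer, introduced here for illustration and not part of the paper's interface.

```python
import math
from typing import Callable, Sequence, Tuple

# A sketch of the coverage experiment: draw N ancestral samples, deduplicate
# them, and sum the model probabilities of the unique translations to see how
# much probability mass the sampled set accounts for.

def sampled_probability_mass(
    x: Sequence[int],
    sample_fn: Callable[[Sequence[int]], Tuple[int, ...]],
    log_prob_fn: Callable[[Sequence[int], Tuple[int, ...]], float],
    num_samples: int = 1000,
) -> float:
    unique = {sample_fn(x) for _ in range(num_samples)}
    return sum(math.exp(log_prob_fn(x, y)) for y in unique)

# A value close to 1 indicates a distribution peaked on few translations; the
# small averages reported above indicate very flat distributions instead.
```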
Sampling the Mode
As the predominant decision rule in NMT is MAP decoding, which we approximate via beam search, it is natural to ask how frequently it is that the beam search output is observed amongst unbiased samples. We find that the beam search output is contained within 1,000 unbiased samples for between 54.3% and 92.2% of input sequences on the held-out data. In the test domain, we find that on English-German for between 44.3% and 49.3%, and in the low-resource setting for between 4.8% and 8.4% of the input sequences the beam search output is contained in the set. This shows that for a large portion of the input sequences, the beam search solution is thus quite a rare outcome under the model. Recently, Stahlberg and Byrne (2019) showed that oftentimes the true mode of a trained NMT system is the empty sequence. This is worrying since NMT decoding is based on mode-seeking search. We find that for between 7.2% and 29.1% of input sequences for held-out data and between 2.8% and 33.3% of input sequences in the test domain an empty sequence is sampled at least once in 1,000 samples. When an empty sequence is sampled it only occurs on average 1.2 ± 0.5 times. Even though it could well be, as the evidence that Stahlberg and Byrne (2019) provide is strong, that often the true mode under our models is the empty sequence, the empty string remains a rather unlikely outcome under the models.
Sample Quality
The number of translations to which an NMT model assigns non-negligible mass can be very large, as we have seen in Section 7.1. We now investigate the average quality of these samples. For quality assessments, we compute METEOR (Denkowski and Lavie, 2011) using the mteval-v13a tokeniser. 7 We translate the test sets using a single ancestral sample per input sentence and repeat the experiment 30 times to report the average in Table 1 (sample). We also report beam search scores (beam). We see that, on average, samples from the model always perform worse than beam search translations. This is no surprise, of course, as ancestral sampling is not a fully fledged decision rule, but simply a technique to explore the learnt distribution in an unbiased way. Moreover, beam search itself comes with some adjustments to perform well (such as a specific beam size and length penalty). The gap between sampling and beam search is between 0 and 14.4 METEOR. The gap can thus be quite large, but overall the quality of an average sample is reasonable compared to beam search. We also observe that the variance of the sample scores is small, with standard deviations below 0.2.
Next, we investigate the performance we would achieve if we could select the best sample from a set. For that, we employ an oracle selection procedure using sentence-level METEOR with the reference translation to select the best sample from a set of samples. We vary sample size from 5 to 30 samples and repeat each experiment four times. Figure 3 plots the results in terms of corpus-level METEOR. Average METEOR scores for oracle selection out of 30 samples are shown in Table 1. METEOR scores steadily increase with sample size. For a given sample size we observe that variance is generally very small. Only between 5 and 10 samples are required to outperform beam search in low-resource language pairs and English-German in the training domain, but surprisingly 15 to 25 samples are necessary for English-German in the test domain. Still, this experiment shows that samples are of reasonable and consistent quality with respect to METEOR. For fewer than 30 random samples the model could meet or outperform beam search performance in most cases, if we knew how to choose the best sample from the set. This is a motivating result for looking into sampling-based decision rules.
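The oracle selection procedure can be sketched as follows. Here `sentence_meteor(hypothesis, reference)` stands in for any segment-level metric, and for brevity the sketch reports the mean segment-level score of the oracle picks rather than the corpus-level METEOR used in the experiments.

```python
def oracle_selection(samples_per_input, references, sentence_meteor):
    # For each input sentence, keep the sampled translation that scores best
    # against the reference under the segment-level metric (oracle choice).
    picks = [max(samples, key=lambda hyp: sentence_meteor(hyp, ref))
             for samples, ref in zip(samples_per_input, references)]
    scores = [sentence_meteor(hyp, ref) for hyp, ref in zip(picks, references)]
    return picks, sum(scores) / len(scores)
```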
Minimum Bayes Risk Decoding
We have seen that translation distributions spread mass over a large set of likely candidates, oftentimes without any clear preference for particular translations within the set (Section 7.1). Yet this set is not arbitrary: it captures various statistics of the data well (Section 6) and holds potentially good translations (Section 7.3). Even though the model does not single out one clear winner, the translations to which it does assign non-negligible mass share statistics that correlate with the reference translation. This motivates a decision rule that exploits all the information we have available about the distribution. In this section we explore one such decision rule: minimum Bayes risk (MBR) decoding.
For a given utility function u(y, h), which assesses a hypothesis h against a reference y, statistical decision theory (Bickel and Doksum, 1977) prescribes that the optimum decision is the one that maximises expected utility (or minimises expected loss) under the model, $y^\star = \operatorname{argmax}_{h \in \mathcal{H}(x)} \mathbb{E}_{p(y|x,\theta)}[u(y, h)]$, where the maximisation is over the entire set of possible translations $\mathcal{H}(x)$. Note that there is no need for a human-annotated reference: expected utility is computed by having the model fill in reference translations. This decision rule, known as MBR decoding in the NLP literature (Goel and Byrne, 2000), is especially suited to settings where we trust a model in expectation but not its mode in particular (Smith, 2011, Section 5.3). 8 MBR decoding, much like MAP decoding, is intractable. We can at best obtain unbiased estimates of expected utility via Monte Carlo (MC) sampling, and we certainly cannot search over the entirety of $\mathcal{H}(x)$. Still, a tractable approximation can be designed, albeit without any optimality guarantees. We use MC both to approximate the support of the distribution and to estimate the expected utility of a given hypothesis. In particular, we maximise over the support $\bar{\mathcal{H}}(x)$ of the empirical distribution obtained by ancestral sampling:
$$y^\star = \operatorname*{argmax}_{h \in \bar{\mathcal{H}}(x)} \frac{1}{S} \sum_{s=1}^{S} u\!\left(y^{(s)}, h\right) \quad \text{for } y^{(s)} \sim p(y \mid x, \theta), \tag{1}$$
which runs in time $O(S^2)$. Though approximate, this rule has interesting properties: the MC estimate improves with sample size, occasional pathologies in the set pose no threat, and there is no need for incremental search. Note that whereas our translation distribution might be very flat over a vast number of translations, not showing a clear ordering in terms of relative frequency within a large set of samples, this need not be the case under expected utility. For example, in Section 7.2 we found that for some input sequences the empty sequence is contained within the 1,000 samples in our set and appears roughly once on average. If all the 1,000 samples are unique (as we found to often be the case in Section 7.1), we cannot distinguish the empty sequence from the other 999 samples in terms of relative frequency. However, under most utilities the empty sequence is so unlike the other sampled translations that it would score very low in terms of expected utility.
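A direct implementation of the estimator in Equation (1) is straightforward. In the sketch below, `samples` is assumed to be the list of S ancestral samples for one source sentence and `utility` a sentence-level utility such as METEOR (both placeholders); the quadratic cost in S is explicit in the nested loop.

```python
def mbr_decode(samples, utility):
    # Score every sampled hypothesis by its average utility against all
    # samples (Monte Carlo estimate of expected utility), then pick the best.
    best_hyp, best_score = None, float("-inf")
    for hyp in samples:
        expected_utility = sum(utility(ref, hyp) for ref in samples) / len(samples)
        if expected_utility > best_score:
            best_hyp, best_score = hyp, expected_utility
    return best_hyp
```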
We chose METEOR as the utility function because, unlike BLEU, it is well defined at the sentence level. 9 We estimate expected utility using S = 30 ancestral samples, and use the translations we sample to make up an approximation to $\mathcal{H}(x)$. Results are shown in Table 1. As expected, MBR outperforms the average single-sample performance by a large margin and in many cases is on par with beam search, consistently outperforming it in the low-resource pairs. For English-German in the test domain, we may need more samples to close the gap with beam search. Whereas an out-of-the-box solution based on MBR requires further investigation, this experiment shows promising results. Crucially, it shows that exploring the model as a probability distribution holds great potential.
Related Work
Some of our observations have been made in previous work. Ott et al. (2018) observe that unigram statistics of beam search stray from those of the data, while random samples do a better job at reproducing them. Holtzman et al. (2020) find that beam search outputs have disproportionately high token probabilities compared to natural language under a sequence-to-sequence model. Our analysis is more extensive: we include richer statistics about the data, more language pairs, and we vary the amount of training resources, leading to new insights about MLE-trained NMT and the merits of mode-seeking predictions. Ott et al. (2018) also observe that NMT learns flat distributions: they analyse a high-resource English-French system trained on 35.5 million sentence pairs from WMT'14 and find that after drawing 10,000 samples from the WMT'14 validation set, less than 25% of the probability space has been explored. Our analysis shows that even though NMT distributions do not reveal clear winners, they do emphasise translations that share statistics with the reference, which motivates us to look into MBR.
MBR decoding is old news in machine translation (Kumar and Byrne, 2004; Tromble et al., 2008), but it has received little attention in NMT. Previous approximations to MBR in NMT employ beam search to define the support and to evaluate expected utility (with probabilities renormalised to sum to 1 in the beam); these studies report the need for very large beams (Stahlberg et al., 2017; Blain et al., 2017; Shu and Nakayama, 2017). They claim that the inability to directly score better translations higher is a deficiency of the model's scoring function. We argue this is another piece of evidence for the inadequacy of the mode: by using beam search, they emphasise statistics of high-scoring translations, potentially rare and inadequate ones. Very recently, Borgeaud and Emerson (2020) present a voting-theory perspective on decoding for image captioning and machine translation. Their proposal is closely related to MBR, but motivated differently. Their decision rule too is guided by beam search, which may emphasise pathologies of highest-likelihood paths, but they also propose and investigate stronger utility functions which lead to improvements w.r.t. length, diversity, and human judgements.
The only instance that we are aware of where unbiased samples from an NMT model support a decision rule is the concurrent work by Naskar et al. (2020). The authors make the same observation that we make in Section 7.3, namely that an oracle selection from a small set of samples of an NMT model shows great potential, greatly outperforming beam search. Motivated by this observation, the authors propose a reranking model that learns to rank sampled translations according to their oracle BLEU. Using the trained model to re-rank a set of 100 samples from the NMT model, they find strong improvements over beam search of up to 3 BLEU points, again showing the potential of sampling-based decision rules.
Conclusion
In this work, we discuss the inadequacy of the mode in NMT and question the appropriateness of MAP decoding. We show that for a problem as high-dimensional as NMT, the probability distributions obtained with MLE are spread out over many translations, and that the mode often does not represent any significant amount of probability mass under the learnt distribution. We therefore argue that MAP decoding is not suitable as a decision rule for NMT systems. Whereas beam search performs well in practice, it suffers from biases of its own (i.e., a non-admissible heuristic search bias), it shifts statistics away from those of the data (i.e., exposure bias and other lexical and length biases), and in the limit of perfect search it falls victim to the inadequacy of the mode. Instead, we advocate research into decision rules that take the probability distribution into account more holistically. Using ancestral sampling, we can explore the learnt distribution in an unbiased way and devise sampling-based decision rules. Ancestral sampling does not suffer from non-admissibility and, if the model fit is good, it does not introduce distribution shift either.
We further argue that criticisms about properties of the mode of an NMT system are not representative of the probability distributions obtained from MLE training. While this form of criticism is perfectly reasonable where approximations to MAP decoding are the only viable alternative, there are scenarios where we ought to criticise models as probability distributions, for example, where we seek to correlate an observed pathology with a design decision such as the factorisation or the training algorithm. In fact, we argue that many of the observed pathologies and biases of NMT are at least partially due to the use of (approximate) MAP decoding, rather than inherent to the model or its training objective.
Even though NMT models spread mass over many translations, we find samples to be of decent quality, and sets of samples to contain translations that outperform beam search outputs even for small sample sizes, further motivating the use of sampling-based decision rules. We show that an approximation to one such decision rule, MBR decoding, shows competitive results. This confirms that while the set of likely translations under the model is large, the translations in it share many statistics that correlate well with the reference.
MLE-trained NMT models admit a probabilistic interpretation, and an advantage of the probabilistic framework is that much of the methodology for model criticism and for making predictions is already in place. We therefore advocate for criticising NMT models as probability distributions and making predictions using decision rules that take the distributions into account holistically. We hope that our work paves the way for research into scalable sampling-based decision rules and motivates researchers to assess model improvements to MLE-trained NMT systems from a probabilistic perspective.
A Analysis Models
A.1 Length Analysis
We model length data from the training group using a hierarchical Gamma-Poisson model. Each target sequence length is modelled as being a draw from a Poisson distribution with a Poisson rate parameter specific to that sequence. All Poisson rates share a common population-level Gamma prior with population-level parameters α and β. The population-level parameters are given fixed Exponential priors set to allow for a wide but reasonable range of Poisson rates a priori.
$$\alpha \sim \mathrm{Exp}(1) \qquad \beta \sim \mathrm{Exp}(10) \qquad \lambda_i \sim \mathrm{Gamma}(\alpha, \beta) \qquad y_i \sim \mathrm{Poisson}(\lambda_i)$$
Here, i indexes one particular data point. This model is very flexible, because we allow the model to assign each datapoint its own Poisson rate. We model test groups as an extension of the training group. Test group data points are also modelled as draws from a Gamma-Poisson model, but parameterised slightly differently.
$$\mu = \mathbb{E}\left[\mathrm{Gamma}(\alpha, \beta) \mid \mathcal{D}_T\right] \qquad \eta \sim \mathrm{Exp}(1) \qquad s_g \sim \mathrm{Exp}(\eta) \qquad t_g = 1/\mu \qquad \lambda_{gi} \sim \mathrm{Gamma}(s_g, t_g) \qquad y_{gi} \sim \mathrm{Poisson}(\lambda_{gi})$$
Here, i again indexes a particular data point, g a group in {reference, sampling, beam}, and D_T denotes the data of the training group. All Poisson rates are individual to each datapoint in each group. The Poisson rates do share a group-level Gamma prior, whose parameters are s_g and t_g. s_g shares a prior among all test groups and therefore ties all test groups together. t_g is derived from posterior inferences on the training data by taking the expected posterior Poisson rate in the training data and inverting it. This is done such that the mean Poisson rate for each test group is s_g · µ, where s_g can be seen as a parameter that scales the expected posterior training rate for each test group individually. We infer Gamma posterior approximations for all unknowns using stochastic variational inference (SVI). After inferring posteriors, we compare predictive samples to the observed data in terms of first to fourth order moments to verify that the model fits the observations well.
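A sketch of the training-group model in NumPyro is given below, assuming `lengths` is an integer array of target sequence lengths. The library choice and the AutoNormal guide are our own assumptions (the paper infers Gamma posterior approximations), so this should be read as an illustration of the generative structure and SVI setup rather than the exact implementation.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal

def training_length_model(lengths):
    # Population-level priors over the Gamma hyper-parameters.
    alpha = numpyro.sample("alpha", dist.Exponential(1.0))
    beta = numpyro.sample("beta", dist.Exponential(10.0))
    with numpyro.plate("sentences", lengths.shape[0]):
        # One Poisson rate per sequence, tied through the shared Gamma prior.
        lam = numpyro.sample("lambda", dist.Gamma(alpha, beta))
        numpyro.sample("length", dist.Poisson(lam), obs=lengths)

lengths = jnp.array([23, 17, 31, 12, 25])   # toy data for illustration
guide = AutoNormal(training_length_model)
svi = SVI(training_length_model, guide, numpyro.optim.Adam(0.01), Trace_ELBO())
svi_result = svi.run(jax.random.PRNGKey(0), 2000, lengths)
```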
A.2 Lexical & Word Order Analyses
We model unigram and (skip-)bigram data from the training group using a hierarchical Dirichlet-Multinomial model as shown below:
$$\alpha \sim \mathrm{Gamma}(1, 1) \qquad \beta \sim \mathrm{Gamma}(1, 1) \qquad \theta \sim \mathrm{Dir}(\alpha) \qquad \psi_u \sim \mathrm{Dir}(\beta) \qquad u \sim \mathrm{Multinomial}(\theta) \qquad b \mid u \sim \mathrm{Multinomial}(\psi_u)$$
Here, we have one Gamma-Dirichlet-Multinomial model to model unigram counts u, and a separate Dirichlet-Multinomial model for each u (the first word of a bigram) that b (the second word of a bigram) conditions on, sharing a common Gamma prior that ties all bigram models. This means that we effectively have V + 1 Dirichlet-Multinomial models (where V is BPE vocabulary size) in total to model the training group, where the V bigram models share a common prior.
We model the three test groups using the inferred posterior distributions on the data of the training group D_T. We compute the expected posterior concentration of the Dirichlets in the training group models and normalise it such that it sums to 1 over the entire vocabulary. The normalisation has the effect of spreading the unigram and bigram distributions. The test groups are modelled by scaling this normalised concentration parameter using a scalar. In order for test groups to recover the training distribution, the scaling variable needs to be large to undo the normalisation. This scalar, s_g for unigrams or m_g for bigrams, can be interpreted as the amount of agreement of each test group with the training group. The higher this scalar is, the more peaked the test group Multinomials will be about the training group lexical distribution.
$$\mu(\alpha) = \mathbb{E}\left[\alpha \mid \mathcal{D}_T\right] \qquad \mu(\beta) = \mathbb{E}\left[\beta \mid \mathcal{D}_T\right]$$
$$\eta_s \sim \mathrm{Gamma}(1, 0.2) \qquad \eta_m \sim \mathrm{Gamma}(1, 0.2) \qquad s_g \sim \mathrm{Gamma}(1, \eta_s) \qquad m_g \sim \mathrm{Gamma}(1, \eta_m)$$
$$\theta_g \sim \mathrm{Dir}(s_g \cdot \mu(\alpha)) \qquad \psi_g \sim \mathrm{Dir}(m_g \cdot \mu(\beta)) \qquad u_g \sim \mathrm{Multinomial}(\theta_g) \qquad b_g \mid u_g \sim \mathrm{Multinomial}(\psi_g)$$
$$g \in \{\text{reference}, \text{sampling}, \text{beam}\}$$
We do collapsed inference for each Dirichlet-Multinomial (as we are not interested in assessing θ_g or ψ_g), and infer posteriors approximately using SVI with Gamma approximate posterior distributions. To confirm the fit of the analysis model, we compare posterior predictive samples to the observed data in terms of absolute frequency errors of unigrams and bigrams as well as ranking correlation.
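As an illustration of the collapsed test-group model for unigrams, a NumPyro sketch is shown below. Here `mu_alpha` denotes the normalised expected posterior concentration inferred on the training group and `unigram_counts` the count vector of one test group; the single-group simplification (with its own η_s) and the AutoNormal guide are our own assumptions, so this is a sketch of the structure rather than the full hierarchical model.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal

def test_group_unigram_model(mu_alpha, unigram_counts, total_count):
    # Scale prior: how strongly this test group agrees with the training group.
    eta_s = numpyro.sample("eta_s", dist.Gamma(1.0, 0.2))
    s_g = numpyro.sample("s_g", dist.Gamma(1.0, eta_s))
    # Collapsed Dirichlet-Multinomial: theta_g is integrated out analytically.
    numpyro.sample(
        "u_g",
        dist.DirichletMultinomial(concentration=s_g * mu_alpha,
                                  total_count=total_count),
        obs=unigram_counts,
    )

vocab_size = 8                                  # toy setup for illustration
mu_alpha = jnp.ones(vocab_size) / vocab_size
unigram_counts = jnp.array([40, 25, 10, 8, 7, 5, 3, 2])
guide = AutoNormal(test_group_unigram_model)
svi = SVI(test_group_unigram_model, guide, numpyro.optim.Adam(0.01), Trace_ELBO())
svi_result = svi.run(jax.random.PRNGKey(0), 2000,
                     mu_alpha, unigram_counts, int(unigram_counts.sum()))
```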
Figure 2: Cumulative probability of the unique translations in 1,000 ancestral samples on the held-out (top) and newstest2018 / FLORES (bottom) test sets. The dark blue line shows the average cumulative probability over all test sentences, the shaded area represents 1 standard deviation away from the average. The black dots to the right show the final cumulative probability for each individual test sentence.
Figure 3: METEOR scores for oracle-selected samples as a function of sample size on the held-out data (top) and newstest2018 / FLORES (bottom) test sets. For each sample size we repeat the experiment 4 times and show a box plot per sample size. Dashed blue lines show beam search scores.
1 Though fully generative accounts do exist (Shah and Barber, 2018; Eikema and Aziz, 2019).
2 The term MAP decoding was coined in the context of generative classifiers and their structured counterparts, where the posterior probability p(y|x, θ) ∝ p(y|θ)p(x|y, θ) updates our prior beliefs about y in light of x. This is not the case in NMT, where we do not express a prior over target sentences, and p(y|x, θ) is a direct parameterisation of the likelihood, rather than a posterior probability inferred via Bayes rule. Nonetheless, we stick to the conventions used in the MT literature.
3 This perhaps non-intuitive notion that the most likely outcomes are rare and do not summarise a model's beliefs well enough is related to an information-theoretic concept, that of typicality (MacKay, 2003, Section 4.4).
4 Where one uses (approximate) MAP decoding instead of ancestral sampling this is known as exposure bias.
5 Skip-bigrams are pairs of tokens drawn in the same order as they occur in a sentence, but without enforcing adjacency.
6 For English-German and German-English the test domain would not be considered out-of-domain here, as both training and test data concern newswire.
7 For our analysis, it is convenient to use a metric defined both at the corpus and at the segment level. We use METEOR, rather than BLEU (Papineni et al., 2002), for it outperforms (smoothed) BLEU at the segment level.
8 MAP decoding is in fact MBR with a very strict utility function which evaluates to 1 if a translation exactly matches the reference, and 0 otherwise. As a community, we acknowledge by means of our evaluation strategies (manual or automatic) that exact matching is inadequate for translation, which, unlike many unstructured classification problems, admits multiple solutions.
9 Even though one can alter BLEU such that it is defined at the sentence level (for example, by adding a small positive constant to n-gram counts), this "smoothing" in effect biases BLEU's sufficient statistics. Unbiased statistics are the key to MBR, thus we opt for a metric that is already defined at the sentence level.
Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825299 (GoURMET). We also thank Khalil Sima'an, Lina Murady, Miloš Stanojević, and Lena Voita for comments and helpful discussions. A Titan Xp card used for this research was donated by the NVIDIA Corporation.
References
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442-2452, Berlin, Germany. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015, San Diego, USA.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems 28, pages 1171-1179. Curran Associates, Inc.
Peter J. Bickel and Kjell A. Doksum. 1977. Mathematical Statistics: Basic Ideas and Selected Topics. Holden-Day Inc., Oakland, CA, USA.
Frédéric Blain, Lucia Specia, and Pranava Madhyastha. 2017. Exploring hypotheses spaces in neural machine translation. In Machine Translation Summit XVI, Nagoya, Japan. Asia-Pacific Association for Machine Translation (AAMT).
David M. Blei. 2014. Build, compute, critique, repeat: Data analysis with latent variable models. Annual Review of Statistics and Its Application, 1:203-232.
Ondřej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Brussels, Belgium. Association for Computational Linguistics.
Sebastian Borgeaud and Guy Emerson. 2020. Leveraging sentence similarity in natural language generation: Improving beam search using range voting. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 97-109, Online. Association for Computational Linguistics.
Léon Bottou and Yann L. Cun. 2004. Large scale online learning. In Advances in Neural Information Processing Systems 16, pages 217-224. MIT Press.
Léon Bottou. 1991. Une approche théorique de l'apprentissage connexioniste; applications à la reconnaissance de la parole.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
Eldan Cohen and J. Christopher Beck. 2019. Empirical analysis of beam search performance degradation in neural sequence models. In ICML.
Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of WMT 2011, pages 85-91, Edinburgh, Scotland.
Bryan Eikema and Wilker Aziz. 2019. Auto-encoding variational neural machine translation. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 124-141, Florence, Italy. Association for Computational Linguistics.
Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. 2013. Bayesian Data Analysis, 3rd edition. Chapman and Hall/CRC.
Vaibhava Goel and William J. Byrne. 2000. Minimum Bayes-risk automatic speech recognition. Computer Speech and Language, 14(2):115-135.
Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053-1062, Valencia, Spain. Association for Computational Linguistics.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6100-6113, Hong Kong, China. Association for Computational Linguistics.
P. E. Hart, N. J. Nilsson, and B. Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
Liang Huang, Kai Zhao, and Mingbo Ma. 2017. When to finish? Optimal beam search for neural text generation (modulo beam size). In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2134-2139, Copenhagen, Denmark. Association for Computational Linguistics.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28-39, Vancouver. Association for Computational Linguistics.
Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 169-176, Boston, Massachusetts, USA. Association for Computational Linguistics.
Aviral Kumar and Sunita Sarawagi. 2019. Calibration of encoder decoder models for neural machine translation. arXiv preprint arXiv:1903.00802.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML '01), pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Qingsong Ma, Ondřej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671-688, Brussels, Belgium. Association for Computational Linguistics.
David J. C. MacKay. 2003. Information Theory, Inference & Learning Algorithms. Cambridge University Press.
Kenton Murray and David Chiang. 2018. Correcting length bias in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 212-223, Brussels, Belgium. Association for Computational Linguistics.
Subhajit Naskar, Amirmohammad Rooshenas, Simeng Sun, Mohit Iyyer, and Andrew McCallum. 2020. Energy-based reranking: Improving neural machine translation using energy-based models. CoRR, abs/2009.13267.
Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3956-3965, Stockholm, Sweden. PMLR.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311-318, Philadelphia, USA.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico, Conference Track Proceedings.
Herbert Robbins and Sutton Monro. 1951. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400-407.
Harshil Shah and David Barber. 2018. Generative neural machine translation. In Advances in Neural Information Processing Systems 31, pages 1346-1355. Curran Associates, Inc.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692, Berlin, Germany. Association for Computational Linguistics.
Raphael Shu and Hideki Nakayama. 2017. Later-stage minimum Bayes-risk decoding for neural machine translation. CoRR, abs/1704.03169.
Raphael Shu and Hideki Nakayama. 2018. Improving beam search by removing monotonic constraint for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 339-344, Melbourne, Australia. Association for Computational Linguistics.
Noah A. Smith. 2011. Linguistic Structure Prediction. Morgan and Claypool.
Pavel Sountsov and Sunita Sarawagi. 2016. Length bias in encoder decoder models and a case for global conditioning. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1516-1525, Austin, Texas. Association for Computational Linguistics.
Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3347-3353, Hong Kong, China. Association for Computational Linguistics.
Felix Stahlberg, Adrià de Gispert, Eva Hasler, and Bill Byrne. 2017. Neural machine translation by minimising the Bayes-risk with respect to syntactic translation lattices. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 362-368, Valencia, Spain. Association for Computational Linguistics.
Miloš Stanojević and Mark Steedman. 2020. Max-margin incremental CCG parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695-1700, Copenhagen, Denmark. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS 2014), pages 3104-3112, Montreal, Canada.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826.
Roy Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 620-629, Honolulu, Hawaii. Association for Computational Linguistics.
Vladimir Vapnik. 1998. Statistical Learning Theory. Wiley, New York.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000-6010.
Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296-1306, Austin, Texas. Association for Computational Linguistics.
Yilin Yang, Liang Huang, and Mingbo Ma. 2018. Breaking the beam search curse: A study of (re-)scoring methods and stopping criteria for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3054-3059, Brussels, Belgium. Association for Computational Linguistics.
Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4334-4343, Florence, Italy. Association for Computational Linguistics.
Hugh Zhang, Daniel Duckworth, Daphne Ippolito, and Arvind Neelakantan. 2020. Trading off diversity and quality in natural language generation. CoRR, abs/2004.10450.
| [] |
[
"FEDERATED LEARNING FOR KEYWORD SPOTTING",
"FEDERATED LEARNING FOR KEYWORD SPOTTING"
] | [
"David Leroy \n18 rue Saint Marc75002Snips, ParisFrance\n",
"Alice Coucke \n18 rue Saint Marc75002Snips, ParisFrance\n",
"Thibaut Lavril \n18 rue Saint Marc75002Snips, ParisFrance\n",
"Thibault Gisselbrecht \n18 rue Saint Marc75002Snips, ParisFrance\n",
"Joseph Dureau \n18 rue Saint Marc75002Snips, ParisFrance\n"
] | [
"18 rue Saint Marc75002Snips, ParisFrance",
"18 rue Saint Marc75002Snips, ParisFrance",
"18 rue Saint Marc75002Snips, ParisFrance",
"18 rue Saint Marc75002Snips, ParisFrance",
"18 rue Saint Marc75002Snips, ParisFrance"
] | [] | We propose a practical approach based on federated learning to solve out-of-domain issues with continuously running embedded speech-based models such as wake word detectors. We conduct an extensive empirical study of the federated averaging algorithm for the "Hey Snips" wake word based on a crowdsourced dataset that mimics a federation of wake word users. We empirically demonstrate that using an adaptive averaging strategy inspired by Adam in place of standard weighted model averaging highly reduces the number of communication rounds required to reach our target performance. The associated upstream communication costs per user are estimated at 8 MB, which is reasonable in the context of smart home voice assistants. Additionally, the dataset used for these experiments is being open sourced with the aim of fostering further transparent research in the application of federated learning to speech data. | 10.1109/icassp.2019.8683546 | [
"https://arxiv.org/pdf/1810.05512v3.pdf"
] | 52,983,148 | 1810.05512 | 9afa6cd5c1365827baff9eed06cd461d1424936b |
FEDERATED LEARNING FOR KEYWORD SPOTTING
David Leroy, Alice Coucke, Thibaut Lavril, Thibault Gisselbrecht, Joseph Dureau
Snips, 18 rue Saint Marc, 75002 Paris, France
We propose a practical approach based on federated learning to solve out-of-domain issues with continuously running embedded speech-based models such as wake word detectors. We conduct an extensive empirical study of the federated averaging algorithm for the "Hey Snips" wake word based on a crowdsourced dataset that mimics a federation of wake word users. We empirically demonstrate that using an adaptive averaging strategy inspired by Adam in place of standard weighted model averaging highly reduces the number of communication rounds required to reach our target performance. The associated upstream communication costs per user are estimated at 8 MB, which is reasonable in the context of smart home voice assistants. Additionally, the dataset used for these experiments is being open sourced with the aim of fostering further transparent research in the application of federated learning to speech data.
Index Terms - keyword spotting, wake word detection, federated learning
INTRODUCTION
Wake word detection is used to start an interaction with a voice assistant. A specific case of keyword spotting (KWS), it continuously listens to an audio stream to detect a predefined keyword or set of keywords. Well-known examples of wake words include Apple's "Hey Siri" or Google's "OK Google". Once the wake word is detected, voice input is activated and processed by a spoken language understanding engine, powering the perception abilities of the voice assistant [1].
Wake word detectors usually run on device in an always-on fashion, which brings two major difficulties. First, the detector should run with a minimal memory footprint and computational cost. The resource constraints for our wake word detector are 200k parameters (based on the medium-sized model proposed in [2]) and 20 MFLOPS.
Secondly, the wake word detector should behave consistently in any usage setting and show robustness to background noise. The audio signal is highly sensitive to recording proximity (close or far field) and recording hardware, but also to the room configuration. Robustness also implies strong coverage of speaker variability (genders, accents, etc.). While the use of digital signal processing front-ends can help mitigate issues related to bad recording conditions, speaker variability remains a major challenge. High accuracy is all the more important since the model can be triggered at any time: it is therefore expected to capture most of the commands (high recall, or low false rejection rate) while not triggering unintentionally (low false alarm rate).
Today, wake word detectors are typically trained on datasets collected in real usage settings, e.g. users' homes in the case of voice assistants. Speech data being by nature very sensitive, centralized collection raises major privacy concerns. In this work, we investigate the use of federated learning (FL) [3] in the context of an embedded wake word detector. FL is a decentralized optimization procedure that enables training a central model on the local data of many users without the need to ever upload this data to a central server. The training workload is moved towards the users' devices, which perform training steps on their local data. Local updates from users are then averaged by a parameter server in order to create a global model.
RELATED WORK
Most research around decentralized learning has historically been done in the context of a highly controlled cluster/data center setting, e.g. with a dataset evenly partitioned in an i.i.d fashion. The multi-core and multi-GPU distributed training setting has been specifically studied in the context of speech recognition in [4]. Efforts on decentralized training with highly distributed, unbalanced and non-i.i.d data are relatively recent, as the foundations were laid down in [3] with the introduction of the federated averaging (FedAvg) algorithm and its application to a set of computer vision (MNIST, CIFAR-10) and language modeling tasks (applied to the Shakespeare and Google Plus posts datasets). There are for now very few real-life experiments that we know of, except for Google's Gboard keyboard for Android [5] and more recently Mozilla's URL suggestion bar [6]. To our knowledge, the present work is the first experiment of its kind on user-specific speech data.
The federated optimization problem in the context of convex objective functions has been studied in [7]. The authors proposed a stochastic variance-reduced gradient descent optimization procedure (SVRG) with both local and global per-coordinate gradient scaling to improve convergence. Their global per-coordinate gradient averaging strategy relies on a sparsity measure of the given coordinate in users' local datasets and is only applicable in the context of sparse linear-in-the-features models. The latter assumption does not hold in the context of neural networks for speech-based applications.
Several improvements to the initial FedAvg algorithm have been suggested, with a focus on client selection [8], budget-constrained optimization [9] and upload cost reduction for clients [10]. A dynamic model averaging strategy robust to concept drift, based on a local model divergence criterion, was recently introduced in [11]. While these contributions present efficient strategies to reduce the communication costs inherent to federated optimization, the present work is, as far as we know, the first to introduce a dynamic per-coordinate gradient update in place of the global averaging step.
The next section describes the federated optimization procedure and how its global averaging can be substituted by an adaptive averaging rule inspired by Adam. It is followed by the experiments section, where both the open-sourced crowdsourced data and the model used to train our wake word detector are introduced. Results come next, and a communication cost analysis is provided. Finally, the next steps towards training a wake word detector on truly decentralized user data are described.
FEDERATED OPTIMIZATION
We consider the standard supervised learning objective function f_i(w) = l(x_i, y_i, w), that is, the loss function for the prediction on example (x_i, y_i) when using a model described by a real-valued parameter vector w of dimension d. In a federated setting, we assume that the datapoints i are partitioned across K users, each user being assigned their own partition P_k with |P_k| = n_k. The optimization objective is therefore the following:
$$\min_{w \in \mathbb{R}^d} f(w) \quad \text{where} \quad f(w) \overset{\text{def}}{=} \sum_{k=1}^{K} \frac{n_k}{n} \, F_k(w) \quad \text{with} \quad F_k(w) = \frac{1}{n_k} \sum_{i=1}^{n_k} f_i(w) \tag{1}$$
The FedAvg algorithm introduced in [3] aims at minimizing the objective function (1), assuming a synchronous update scheme and a generic non-convex neural network loss function. The model is initialized with a given architecture on a central parameter server with weights w_0. Once initialized, the parameter server and the users' devices interact synchronously with each other during communication rounds. A communication round at time t ∈ [1, .., T] is described below:
1. The central model w_{t-1} is shared with a subset of users S_t that are randomly selected from the pool of K users given a participation ratio C.
2. Each user k ∈ S_t performs one or several training steps on their local data based on the minimization of their local objective F_k, using mini-batch stochastic gradient descent (SGD) with a local learning rate η_local. The number of steps performed locally is E × max(⌈n_k / B⌉, 1), n_k being the number of datapoints available locally, E the number of local epochs and B the local batch size.
3. Users from S_t send their model updates w_{t,k}, k ∈ S_t, back to the parameter server once local training is finished.
4. The server computes an average model w_t based on the users' individual updates w_{t,k}, k ∈ S_t, each user's update being weighted by n_k / n_r, where n_r = Σ_{k ∈ S_t} n_k ≈ C × Σ_{k=1}^{K} n_k.
When B = ∞ (i.e. the batch size is equal to the local dataset size) and E = 1, a single gradient update is performed on each user's data. This is strictly equivalent to doing a single gradient computation on a batch including all selected users' data points. This specific case is called FedSGD, i.e. stochastic gradient descent with each batch being the data of the federation of selected users at a given round. FedAvg (federated averaging) is the generic case where more than one update is performed locally for each user.
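A minimal PyTorch sketch of one such communication round is given below; the `users` structure, model, loss function and hyper-parameter values are placeholders, and plain weighted averaging (η_global = 1) is used for step 4.

```python
import copy
import random
import torch

def federated_round(server_model, users, loss_fn, C=0.1, E=1, B=32, lr_local=0.01):
    """One synchronous FedAvg round. `users` is a list of torch Datasets,
    one per user, each yielding (features, label) pairs (placeholder setup)."""
    selected = random.sample(range(len(users)), max(1, int(C * len(users))))
    n_r = sum(len(users[k]) for k in selected)
    updates = []
    for k in selected:
        local_model = copy.deepcopy(server_model)                 # step 1: broadcast
        optimizer = torch.optim.SGD(local_model.parameters(), lr=lr_local)
        loader = torch.utils.data.DataLoader(users[k], batch_size=B, shuffle=True)
        for _ in range(E):                                        # step 2: local SGD
            for features, labels in loader:
                optimizer.zero_grad()
                loss_fn(local_model(features), labels).backward()
                optimizer.step()
        updates.append((len(users[k]), local_model.state_dict()))  # step 3: upload
    averaged = {}                                                 # step 4: weighted average
    for name in server_model.state_dict():
        averaged[name] = sum(n / n_r * state[name] for n, state in updates)
    server_model.load_state_dict(averaged)
    return server_model
```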
The global averaging step can be written as follows, using a global update rate η_global:
$$w_t \leftarrow w_{t-1} - \eta_{\text{global}} \sum_{k \in S_t} \frac{n_k}{n} \left(w_{t-1} - w_{t,k}\right) \tag{2}$$
Setting the global update rate η_global to 1 is equivalent to the weighted averaging case without a moving average. Equation (2) highlights the parallel between global averaging and a gradient update $G_t = \sum_{k \in S_t} \frac{n_k}{n}(w_{t-1} - w_{t,k})$. This parallel motivates the use of adaptive per-coordinate updates for G_t that have proven successful for centralized deep neural network optimization, such as Adam [12]. Moment-based averaging allows the averaged model to be smoothed by taking into account the updates of previous rounds, which were computed on different user subsets. We conjecture that the exponentially-decayed first and second order moments perform the same kind of regularization that occurs in the mini-batch gradient descent setting, where Adam has proven to be successful on a wide range of tasks with various neural-network based architectures. In this work, we set the exponential decay rates for the moment estimates to β_1 = 0.9 and β_2 = 0.999, and ε = 10^-8, as initially suggested by the authors of [12].
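A sketch of the corresponding server-side update is shown below: the weighted model deltas are treated as a pseudo-gradient G_t and passed through a standard Adam update with the decay rates quoted above; the global learning rate is a placeholder value.

```python
import torch

class AdamServerOptimizer:
    """Per-coordinate Adam-style update applied to the pseudo-gradient
    G_t = sum_k (n_k / n) * (w_{t-1} - w_{t,k}), replacing plain averaging."""
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m, self.v, self.t = {}, {}, 0

    def step(self, server_state, pseudo_grad):
        self.t += 1
        new_state = {}
        for name, g in pseudo_grad.items():
            m = self.m.get(name, torch.zeros_like(g))
            v = self.v.get(name, torch.zeros_like(g))
            m = self.beta1 * m + (1 - self.beta1) * g          # first moment
            v = self.beta2 * v + (1 - self.beta2) * g * g      # second moment
            self.m[name], self.v[name] = m, v
            m_hat = m / (1 - self.beta1 ** self.t)              # bias correction
            v_hat = v / (1 - self.beta2 ** self.t)
            new_state[name] = server_state[name] - self.lr * m_hat / (v_hat.sqrt() + self.eps)
        return new_state
```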
EXPERIMENTS
Dataset
Unlike for generic speech recognition tasks, there is no reference dataset for wake word detection. The reference dataset for multi-class keyword spotting is the Speech Commands dataset [13], but that task is generally preceded by a wake word detector and focuses on minimizing confusion across classes rather than robustness to false alarms. We therefore built a crowdsourced dataset for the Hey Snips wake word.
We are publicly releasing this dataset 1 [14] in the hope that it can be useful to the keyword spotting community. The data used here was collected from 1.8k contributors who recorded themselves on their device with their own microphone while saying several occurrences of the Hey Snips wake word along with randomly chosen negative sentences. Each recorded audio sample went through a validation process in which at least two of three distinct contributors validated that the pronounced utterance matches the transcript.
This crowdsourcing-induced data distribution mimics a real-world non-i.i.d., unbalanced and highly distributed setting, and a parallel is therefore drawn in the following work between a crowdsourcing contributor and a voice assistant user. The dataset statistics supporting this analogy are summarized in Table 1. The train, dev and test splits are purposely built from distinct users: 77% of users are used solely for training, while the remaining 23% are used for parameter tuning and final evaluation, measuring how well the model generalizes to new users.
Model
Acoustic features are generated based on 40-dimensional mel-frequency cepstrum coefficients (MFCC) computed every 10 ms over a window of 25 ms. The input window consists of 32 stacked frames, symmetrically distributed in left and right contexts. The architecture is a CNN with 5 stacked dilated convolutional layers of increasing dilation rate, followed by two fully-connected layers and a softmax, inspired by [15]. The total number of parameters is 190,852. The model is trained using a cross-entropy loss on frame predictions. The neural network has 4 output labels, assigned via a custom aligner specialized on the target utterance "Hey Snips": "Hey", "sni", "ps", and "filler", which accounts for all other cases (silence, noise and other words). A posterior handling step [16] generates a confidence score for every frame by combining the smoothed label posteriors. The model triggers if the confidence score reaches a certain threshold τ, defining the operating point that maximizes recall for a certain number of False Alarms per Hour (FAH). We set the number of false alarms per hour to 5 as a stopping criterion on the dev set. The dev set is a "hard" dataset when it comes to false alarms since it belongs to the same domain as the data used for training. The model recall is finally evaluated on the test set positive data, while false alarms are computed on both the test set negative data and various background negative audio sets. See section 4.3 for further details about evaluation.
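As an illustration, the following sketch shows one common form of posterior handling in the spirit of [16]: label posteriors are smoothed over a short window, and a per-frame confidence is computed from the per-label maxima within a longer sliding window. The window sizes and the exact confidence formula here are illustrative assumptions, not the configuration used in this work.

```python
import numpy as np

def confidence_scores(posteriors, w_smooth=30, w_max=100):
    """posteriors: (T, L+1) frame posteriors for the L wake-word labels + filler.
    Returns a per-frame confidence score in the spirit of [16]."""
    labels = posteriors[:, :-1]                     # drop the "filler" class
    T, L = labels.shape
    smoothed = np.zeros_like(labels)
    conf = np.zeros(T)
    for t in range(T):
        s = max(0, t - w_smooth + 1)                # smooth posteriors over a window
        smoothed[t] = labels[s:t + 1].mean(axis=0)
        m = max(0, t - w_max + 1)                   # sliding window for the score
        # confidence = geometric mean of the per-label maxima within the window
        conf[t] = np.prod(smoothed[m:t + 1].max(axis=0)) ** (1.0 / L)
    return conf

# the detector triggers whenever conf[t] >= tau, with tau chosen on the dev set
# to maximize recall at 5 false alarms per hour
```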
Results
We conduct an extensive empirical study of the federated averaging algorithm for the Hey Snips wake word based on the crowdsourced data from Table 1. Federated optimization results are compared with a standard setting, i.e., centralized mini-batch SGD with data from train-set users being randomly shuffled. Our aim is to evaluate the number of communication rounds that are required to reach our stopping criterion on the dev set. For the purpose of this experiment, early stopping is evaluated in a centralized fashion, and we assume that the dev set users agreed to share their data with the parameter server. In an actual product setting, early stopping estimation would be run locally on the devices of the dev users: they would download the latest version of the central model at the end of each round and evaluate the early stopping criterion based on prediction scores for their own utterances. These individual metrics would then be averaged by the parameter server to obtain the global model criterion estimation. Final evaluation on test users would be done in the same distributed fashion once training is finished.
Standard baseline: Our baseline, i.e., a standard centralized data setting with a single training server and the Adam optimizer, reaches the early stopping target in 400 steps (∼2 epochs), a strong convergence speedup in comparison with standard SGD, which remains under 87% after 28 epochs despite learning rate and gradient clipping tuning on the dev set.
User parallelism: The higher the ratio of users selected at each round C, the more data is used for distributed local training, and the faster the convergence is expected to be, assuming that local training does not diverge too much. Figure 1 shows the impact of C on convergence: the gain from using half of the users is limited compared with using 10%, specifically in the later stages of convergence. A fraction of 10% of users per round is also more realistic in a practical setup, as selected users have to be online. With lower participation ratios (C = 1%), the gradients are much more sensitive and might require the use of a learning rate smoothing strategy. C is therefore set to 10%.
Global averaging: Global adaptive learning rates based on Adam accelerate convergence when compared with standard averaging strategies with or without moving averages. Table 2 summarizes experimental results in the FedSGD setting with optimized local learning rates. Applying standard global averaging yields poor performance even after 400 communication rounds when compared with adaptive per-parameter averaging.
Local training: Our results show consistency across local training configurations, with limited improvements coming from increasing the load of local training. The number of communication rounds required to reach the stopping criterion on the dev set ranges between 63 and 112 for E ∈ [1, 3] and B ∈ [20, 50, ∞], using C = 10%, Adam global averaging with η_global = 0.001, and a local learning rate of 0.01. In our experiments, the best performances are obtained for E = 1 and B = 20, with an average of 2.4 local updates per worker taking part in a round, yielding an 80% speedup in comparison to FedSGD. Nevertheless, we observed variability across experiments with regard to weight initialization and early stage behaviour. Unlike some experiments presented in [3], the speedup coming from increasing the amount of local training steps does not lead to order-of-magnitude improvements in convergence speed, while local learning rate and global averaging tuning proved to be crucial in our work. We conjecture that this difference is related to the input semantic variability across users. In the MNIST and CIFAR experiments from [3], the input semantics are the same across emulated users. For instance, images of the digit 9 that are attributed to various users are all very similar. In the wake word setting, each user has their own vocalization of the same wake word utterance, with significant differences in pitch and accent that can lead to diverging lower-stage representations that might perform poorly when averaged.

Evaluation: We evaluate the false alarm rates of the best model (E = 1 and B = 20) for a fixed recall of 95% on the test set. We observe 3.2 FAH on the negative test data, 3.9 FAH on Librispeech [17], and respectively 0.2 and 0.6 FAH on our internal news and collected TV datasets. Unsurprisingly, false alarms are more common on close-field continuous datasets than they are on background negative audio sets.
Communication cost analysis
Communication cost is a strong constraint when learning from decentralized data, especially when users' devices have limited connectivity and bandwidth. Considering the asymmetrical nature of broadband speeds, the communication bottleneck for federated learning is the transfer of updated weights from clients to the parameter server once local training is completed [10]. We assume that the upstream communication cost associated with users involved in model evaluation at each communication round is marginal, as they would only be uploading a few floating point metrics per round, which is much smaller than the model size. The total client upload bandwidth requirement is given in the equation below:
$$\text{ClientUploadCost} = \text{modelSize}_{f32} \times C \times N_{rounds} \qquad (3)$$
Based on our results, this yields a cost of 8MB per client when the stopping criterion is reached within 100 communication rounds. On its end, the server receives 137 updates per round when C = 10%, amounting to 110GB over the course of the whole optimization process, with 1.4k users involved during training. This cost is of course directly related to the early stopping criterion. Further experiments with later convergence stages (400 rounds) yielded 98% recall / 0.5 FAH on the test set for an upload budget of 32 MB per user.
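As a quick back-of-the-envelope check of Eq. (3), assuming each of the 190,852 parameters is uploaded as a 32-bit float and that a client is selected in roughly a fraction C of the rounds:

```python
# Back-of-the-envelope check of Eq. (3), assuming 32-bit floats per parameter.
n_params = 190_852                      # model size reported above
model_size_mb = n_params * 4 / 1e6      # ~0.76 MB per full model upload
C, n_rounds = 0.10, 100                 # participation ratio and rounds to convergence

client_upload_mb = model_size_mb * C * n_rounds   # expected per-client upload
print(f"{client_upload_mb:.1f} MB per client")     # ~7.6 MB, i.e. the ~8 MB figure
```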
CONCLUSION AND FUTURE WORK
In this work, we investigate the use of federated learning on crowdsourced speech data to learn a resource-constrained wake word detector. We show that a revisited Federated Averaging algorithm, with per-coordinate averaging based on Adam in place of standard global averaging, allows the training to reach a target stopping criterion of 95% recall / 5 FAH within 100 communication rounds on our crowdsourced dataset, for an associated upstream communication cost per client of 8MB. We also open-source the Hey Snips wake word dataset.
The next step towards a real-life implementation is to design a system for local data collection and labeling, as the wake word task requires data supervision. The frame labeling strategy used in this work relies on an aligner, which cannot be easily embedded. The use of memory-efficient end-to-end models [14] in place of the presented class-based model could ease labeling, as it would only rely on voice activity detection.
ACKNOWLEDGMENTS
We thank Oleksandr Olgashko for his contribution in developing the training framework.
Fig. 1. Effect of the share of users involved in each round C on the dev set recall / 5 FAH, FedSGD, Adam global averaging, η_global = 0.001, η_local = 0
Table 1. Dataset statistics for the Hey Snips wake word - 18% of utterances are positive, with strong per-user imbalance in the number of utterances (mean: 39, standard dev: 32)

             Train set    Dev set     Test set    Total
Users        1,374        200         200         1,774
Utterances   53,991       8,337       7,854       69,582
Table 2. Dev set recall / 5 FAH for various averaging strategies - FedSGD, C = 10%
1 http://research.snips.ai/datasets/keyword-spotting
Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, Joseph Dureau, abs/1805.10190CoRR. Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Calta- girone, Thibaut Lavril, Maël Primet, and Joseph Dureau, "Snips voice platform: an embedded spoken language understanding system for private-by-design voice inter- faces," CoRR, vol. abs/1805.10190, 2018.
Hello edge: Keyword spotting on microcontrollers. Yundong Zhang, Naveen Suda, Liangzhen Lai, Vikas Chandra, abs/1711.07128CoRR. Yundong Zhang, Naveen Suda, Liangzhen Lai, and Vikas Chandra, "Hello edge: Keyword spotting on mi- crocontrollers," CoRR, vol. abs/1711.07128, 2017.
Federated learning of deep networks using model averaging. H , Brendan Mcmahan, Eider Moore, Daniel Ramage, Blaise Agüera Y Arcas, abs/1602.05629CoRR. H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas, "Federated learning of deep networks using model averaging," CoRR, vol. abs/1602.05629, 2016.
Parallel training of deep neural networks with natural gradient and parameter averaging. Daniel Povey, Xiaohui Zhang, Sanjeev Khudanpur, abs/1410.7455CoRR. Daniel Povey, Xiaohui Zhang, and Sanjeev Khudan- pur, "Parallel training of deep neural networks with natural gradient and parameter averaging," CoRR, vol. abs/1410.7455, 2014.
Federated learning: Collaborative machine learning without centralized training data. Brendan McMahan, Daniel Ramage, Brendan McMahan and Daniel Ramage, "Federated learning: Collaborative machine learning without centralized training data," https://ai.googleblog.com/2017/04/federated-learning-collaborative.html, 2017.
Federated learning. Florian Hartmann, Florian Hartmann, "Federated learning," https://florian.github.io/federated-learning/, 2018.
Federated optimization: Distributed machine learning for on-device intelligence. Jakub Konecný, H Brendan Mcmahan, Daniel Ramage, Peter Richtárik, abs/1610.02527CoRR. Jakub Konecný, H. Brendan McMahan, Daniel Ram- age, and Peter Richtárik, "Federated optimization: Dis- tributed machine learning for on-device intelligence," CoRR, vol. abs/1610.02527, 2016.
Client selection for federated learning with heterogeneous resources in mobile edge. Takayuki Nishio, Ryo Yonetani, abs/1804.08333CoRR. Takayuki Nishio and Ryo Yonetani, "Client selection for federated learning with heterogeneous resources in mobile edge," CoRR, vol. abs/1804.08333, 2018.
When edge meets learning: Adaptive control for resource-constrained distributed machine learning. Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K Leung, Christian Makaya, Ting He, Kevin Chan, abs/1804.05271CoRR. Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan, "When edge meets learning: Adaptive control for resource-constrained distributed machine learning," CoRR, vol. abs/1804.05271, 2018.
Federated learning: Strategies for improving communication efficiency. Jakub Konecný, H Brendan Mcmahan, Felix X Yu, Peter Richtárik, abs/1610.05492CoRR. Ananda Theertha Suresh, and Dave BaconJakub Konecný, H. Brendan McMahan, Felix X. Yu, Pe- ter Richtárik, Ananda Theertha Suresh, and Dave Ba- con, "Federated learning: Strategies for improving communication efficiency," CoRR, vol. abs/1610.05492, 2016.
Efficient decentralized deep learning by dynamic model averaging. abs/1807.03210CoRR. Michael Kamp, Linara Adilova, Joachim Sicking, Fabian Hüger, Peter Schlicht, Tim Wirtz, and Stefan Wrobel, "Efficient decentralized deep learning by dynamic model averaging," CoRR, vol. abs/1807.03210, 2018.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, abs/1412.6980CoRR. Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," CoRR, vol. abs/1412.6980, 2014.
Speech commands: A dataset for limited-vocabulary speech recognition. Pete Warden, abs/1804.03209CoRR. Pete Warden, "Speech commands: A dataset for limited-vocabulary speech recognition," CoRR, vol. abs/1804.03209, 2018.
Efficient keyword spotting using dilated convolutions and gating. Alice Coucke, Mohammed Chlieh, Thibault Gisselbrecht, David Leroy, Mathieu Poumeyrol, Thibaut Lavril, arXiv:1811.07684arXiv preprintAlice Coucke, Mohammed Chlieh, Thibault Gissel- brecht, David Leroy, Mathieu Poumeyrol, and Thibaut Lavril, "Efficient keyword spotting using dilated convo- lutions and gating," arXiv preprint arXiv:1811.07684, 2018.
A time delay neural network architecture for efficient modeling of long temporal contexts. Vijayaditya Peddinti, Daniel Povey, Sanjeev Khudanpur, Sixteenth Annual Conference of the International Speech Communication Association. Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khu- danpur, "A time delay neural network architecture for efficient modeling of long temporal contexts," in Six- teenth Annual Conference of the International Speech Communication Association, 2015.
Small-footprint keyword spotting using deep neural networks. Guoguo Chen, Carolina Parada, Georg Heigold, Acoustics, speech and signal processing (icassp), 2014 ieee international conference on. IEEEGuoguo Chen, Carolina Parada, and Georg Heigold, "Small-footprint keyword spotting using deep neural networks," in Acoustics, speech and signal processing (icassp), 2014 ieee international conference on. IEEE, 2014, pp. 4087-4091.
Librispeech: An ASR corpus based on public domain audio books. Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur, IEEEVassil Panayotov, Guoguo Chen, Daniel Povey, and San- jeev Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in ICASSP. 2015, pp. 5206-5210, IEEE.
| [] |
[
"On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition",
"On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition"
] | [
"Jinyu Li jinyli@microsoft.com \nMicrosoft Speech and Language Group\n\n",
"Yu Wu wu.yu@microsoft.com \nMicrosoft Research Asia\n\n",
"Yashesh Gaur yagaur@microsoft.com \nMicrosoft Speech and Language Group\n\n",
"Chengyi Wang \nMicrosoft Research Asia\n\n",
"Rui Zhao ruzhao@microsoft.com \nMicrosoft Speech and Language Group\n\n",
"Shujie Liu shujiliu@microsoft.com \nMicrosoft Research Asia\n\n"
] | [
"Microsoft Speech and Language Group\n",
"Microsoft Research Asia\n",
"Microsoft Speech and Language Group\n",
"Microsoft Research Asia\n",
"Microsoft Speech and Language Group\n",
"Microsoft Research Asia\n"
] | [] | Recently, there has been a strong push to transition from hybrid models to end-to-end (E2E) models for automatic speech recognition. Currently, there are three promising E2E methods: recurrent neural network transducer (RNN-T), RNN attentionbased encoder-decoder (AED), and Transformer-AED. In this study, we conduct an empirical comparison of RNN-T, RNN-AED, and Transformer-AED models, in both non-streaming and streaming modes. We use 65 thousand hours of Microsoft anonymized training data to train these models. As E2E models are more data hungry, it is better to compare their effectiveness with large amount of training data. To the best of our knowledge, no such comprehensive study has been conducted yet. We show that although AED models are stronger than RNN-T in the non-streaming mode, RNN-T is very competitive in streaming mode if its encoder can be properly initialized. Among all three E2E models, transformer-AED achieved the best accuracy in both streaming and non-streaming mode. We show that both streaming RNN-T and transformer-AED models can obtain better accuracy than a highly-optimized hybrid model. | 10.21437/interspeech.2020-2846 | [
"https://arxiv.org/pdf/2005.14327v1.pdf"
] | 219,124,387 | 2005.14327 | be934c378c897e5bc3b3767376a59a4679093286 |
On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition
Jinyu Li jinyli@microsoft.com
Microsoft Speech and Language Group
Yu Wu wu.yu@microsoft.com
Microsoft Research Asia
Yashesh Gaur yagaur@microsoft.com
Microsoft Speech and Language Group
Chengyi Wang
Microsoft Research Asia
Rui Zhao ruzhao@microsoft.com
Microsoft Speech and Language Group
Shujie Liu shujiliu@microsoft.com
Microsoft Research Asia
On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition
arXiv:2005.14327v1 [eess.AS] 28 May 2020
Index Terms: end-to-end, RNN-transducer, attention-based encoder-decoder, transformer
Recently, there has been a strong push to transition from hybrid models to end-to-end (E2E) models for automatic speech recognition. Currently, there are three promising E2E methods: recurrent neural network transducer (RNN-T), RNN attentionbased encoder-decoder (AED), and Transformer-AED. In this study, we conduct an empirical comparison of RNN-T, RNN-AED, and Transformer-AED models, in both non-streaming and streaming modes. We use 65 thousand hours of Microsoft anonymized training data to train these models. As E2E models are more data hungry, it is better to compare their effectiveness with large amount of training data. To the best of our knowledge, no such comprehensive study has been conducted yet. We show that although AED models are stronger than RNN-T in the non-streaming mode, RNN-T is very competitive in streaming mode if its encoder can be properly initialized. Among all three E2E models, transformer-AED achieved the best accuracy in both streaming and non-streaming mode. We show that both streaming RNN-T and transformer-AED models can obtain better accuracy than a highly-optimized hybrid model.
Introduction
Recently, the speech research community is seeing a significant trend of moving from deep neural network based hybrid modeling [1] to end-to-end (E2E) modeling [2,3,4,5,6,7,8] for automatic speech recognition (ASR). Whereas hybrid models require disjoint optimization of separate constituent models, such as the acoustic and language models, E2E ASR systems directly translate an input speech sequence into an output token sequence (characters, sub-words, or even words) using a single network.
Some widely used contemporary E2E approaches for sequence-to-sequence transduction are: (a) Connectionist Temporal Classification (CTC) [9,10], (b) recurrent neural network Transducer (RNN-T) [11], and (c) Attention-based Encoder-Decoder (AED) [12,13,3]. Among these three approaches, CTC was the earliest and can map the input speech signal to target labels without requiring any external alignments. However, it also suffers from the conditional frame-independence assumption. RNN-T extends CTC modeling by changing the objective function and the model architecture to remove the frame-independence assumption. Because of its streaming nature, RNN-T has received a lot of attention for industrial applications and has also managed to replace traditional hybrid models for some cases [8,14,15,16].
AED is a general family of models that was initially proposed for machine translation [17] but has shown success in other domains (including ASR [12,13,3]) as well. These models are not streaming in nature by default, but there are several studies in that direction, such as monotonic chunkwise attention [18] and triggered attention [19]. The early AED models used RNNs as the building block for their encoder and decoder modules. We refer to them as RNN-AED in this study. More recently, the transformer architecture with self-attention [20] has also become prevalent and is being used as a fundamental building block for encoder and decoder modules [21,22,23]. We refer to such a model as Transformer-AED in this paper.
Given the fast-evolving landscape of E2E technology, it is timely to compare the most popular and promising E2E technologies for ASR in the field, shaping the future research direction. This paper focuses on the comparison of the currently most promising E2E technologies, namely RNN-T, RNN-AED, and Transformer-AED, in both non-streaming and streaming modes. All models are trained with 65 thousand hours of Microsoft anonymized training data. As E2E models are data hungry, it is better to compare their power with such a large amount of training data. To the best of our knowledge, no such detailed comparison exists. In a recent work [14], a streaming RNN-T model was compared with a non-streaming RNN-AED. In [24], streaming RNN-AED is compared with streaming RNN-T for long-form speech recognition. In [23], RNN-AED and Transformer-AED are compared in a non-streaming mode, with training data up to 960 hours. As industrial applications usually require the ASR service to run in streaming mode, we put further effort into developing these E2E models in a streaming mode. While it has been shown in [25] that combining RNN-T and RNN-AED in a two-pass decoding configuration can surpass an industry-grade state-of-the-art hybrid model, this study shows that a single streaming E2E model, either RNN-T or Transformer-AED, can also surpass a state-of-the-art hybrid model [26,27].
In addition to performing a detailed comparison of these promising E2E models for the first time, the other contributions of this paper are: 1) we propose a multi-layer context modeling scheme to explore future context with significant gains; 2) cross entropy (CE) initialization is shown to be much more effective than CTC initialization for boosting RNN-T models; 3) for streaming Transformer-AED, we show that chunk-based future context integration is more effective than the lookahead method; 4) we release our Transformer-related code, with reproducible results on Librispeech, at [28] to facilitate future research in E2E ASR.
Popular End-to-End Models
In this section, we give a brief introduction to the currently popular E2E models: RNN-T, RNN-AED, and Transformer-AED. These models have an acoustic encoder that generates a high-level representation of the speech, and a decoder, which autoregressively generates output tokens in the linguistic domain. While the acoustic encoders can be the same, the decoders of RNN-T and AED are different. In RNN-T, the generation of the next label is conditioned only on the label outputs at previous steps, while the decoder of AED conditions the next output on acoustics as well. More importantly, RNN-T works in a frame-synchronized way, while AED works in a label-synchronized fashion.
RNN transducer
The encoder network converts the acoustic features x_{1:T} into a high-level representation h^{enc}_{1:T}. The decoder in RNN-T, called the prediction network, produces a high-level representation h^{pre}_u by consuming the previous non-blank target y_{u-1}, where u denotes the output label index. The joint network is a feed-forward network that combines the encoder network output h^{enc}_t and the prediction network output h^{pre}_u to generate the joint representation h_{t,u}, where t denotes the time index. This joint representation is used to calculate the softmax output.
The encoder and prediction networks are usually realized using RNN with LSTM [29] units. When the encoder is a unidirectional LSTM-RNN as Eq. (1), RNN-T works in streaming mode by default.
$$h^{enc}_t = \mathrm{LSTM}(x_t, h^{enc}_{t-1}) \qquad (1)$$
However, when the underlying LSTM-RNN encoder is a bidirectional model, as in Eq. (2), it is a non-streaming E2E model.
$$h^{enc}_t = [\mathrm{LSTM}(x_t, h^{enc}_{t-1}), \mathrm{LSTM}(x_t, h^{enc}_{t+1})] \qquad (2)$$
When implemented with LSTM-RNN, the prediction network formulation is
$$h^{pre}_u = \mathrm{LSTM}(y_{u-1}, h^{pre}_{u-1}) \qquad (3)$$
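As an illustration of how these pieces fit together, the following is a minimal PyTorch-style sketch of an RNN-T skeleton (encoder, prediction network and joint network). The layer sizes, the embedding of the previous label, and the dense combination of all (t, u) pairs are our simplifications, not the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

class SimpleRNNT(nn.Module):
    """Minimal RNN-T skeleton: encoder + prediction network + joint network."""
    def __init__(self, feat_dim, vocab_size, hidden=640):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)    # Eq. (1)
        self.embed = nn.Embedding(vocab_size + 1, hidden)             # previous label y_{u-1}
        self.prediction = nn.LSTM(hidden, hidden, batch_first=True)   # Eq. (3)
        self.joint = nn.Sequential(                                    # joint network
            nn.Linear(2 * hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, vocab_size + 1))                         # +1 for the blank label

    def forward(self, feats, label_ids):
        h_enc, _ = self.encoder(feats)                       # (B, T, H)
        h_pre, _ = self.prediction(self.embed(label_ids))    # (B, U, H)
        # combine every (t, u) pair of encoder/prediction states
        h_t = h_enc.unsqueeze(2).expand(-1, -1, h_pre.size(1), -1)
        h_u = h_pre.unsqueeze(1).expand(-1, h_enc.size(1), -1, -1)
        return self.joint(torch.cat([h_t, h_u], dim=-1))     # (B, T, U, V+1) logits
```

The resulting logits would be fed to the RNN-T loss, which marginalizes over all alignments of labels and blanks.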
Attention-based Encoder-Decoder
While RNN-T has received more attention from industry due to its streaming nature, attention-based encoder-decoder (AED) models attract more research from academia because of their powerful attention structure. RNN-AED and Transformer-AED differ in the realization of the encoder and decoder, using LSTM-RNNs and Transformers, respectively.
RNN-AED
The encoder of RNN-AED can have the same structure as that of RNN-T, as in Eq. (1) and Eq. (2). However, the attention-enhanced decoder operates differently, as below:
$$h^{dec}_u = \mathrm{LSTM}(c_u, y_{u-1}, h^{dec}_{u-1}) \qquad (4)$$
Here, c_u is the context vector obtained by a weighted combination of the encoder outputs; it is supposed to contain the acoustic information necessary to emit the next token, and is calculated with the help of the attention mechanism [12,30].
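For illustration, the sketch below shows a plain additive (Bahdanau-style) attention step that produces the context vector c_u; it omits the location-aware term, used in this paper, that additionally conditions on the previous alignments. The weight matrices are placeholders, not the paper's parameters.

```python
import numpy as np

def attention_context(h_enc, h_dec_prev, W_enc, W_dec, v):
    """Additive attention: score each encoder state against the previous decoder
    state, normalize the scores, and build the context vector c_u."""
    # h_enc: (T, H_enc), h_dec_prev: (H_dec,), W_enc: (H_enc, A), W_dec: (H_dec, A), v: (A,)
    scores = np.tanh(h_enc @ W_enc + h_dec_prev @ W_dec) @ v   # (T,) attention energies
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                        # attention weights over time
    return alpha @ h_enc                                        # context vector c_u, (H_enc,)
```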
Transformer-AED
Even though RNNs can capture long-term dependencies, Transformer [20] based models can do so more effectively, given that the attention mechanism sees all context directly. Specifically, the encoder is composed of a stack of Transformer blocks, where each block has a multi-head self-attention layer and a feed-forward layer. Suppose that the input of a Transformer block can be linearly transformed into Q, K, and V. Then, the output of a multi-head self-attention layer is
$$\mathrm{Multihead}(Q, K, V) = [H_1 \ldots H_{d_{head}}] W^{head} \qquad (5)$$
where
$$H_i = \mathrm{softmax}\left(\frac{Q_i K_i^T}{\sqrt{d_k}}\right) V_i, \quad Q_i = Q W_i^Q, \quad K_i = K W_i^K, \quad V_i = V W_i^V.$$
Here, d_head is the number of attention heads and d_k is the dimension of the feature vector for each head. This output is fed to the feed-forward layer. Residual connections [31] and layer normalization [32] are indispensable when we connect different layers and blocks. In addition to the two layers in an encoder block, the Transformer decoder also has a third layer that performs multi-head attention over the output of the encoder. This is similar to the attention mechanism in RNN-AED.
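A minimal sketch of Eq. (5), with per-head slices of the projected queries, keys and values; the weight shapes and the explicit loop over heads are illustrative rather than an optimized implementation.

```python
import numpy as np

def multihead_self_attention(X, W_q, W_k, W_v, W_head, d_head, d_k):
    """Illustrative multi-head self-attention following Eq. (5).
    X: (T, d_model); W_q, W_k, W_v: (d_model, d_head*d_k); W_head: (d_head*d_k, d_model)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v              # projected queries, keys, values
    heads = []
    for i in range(d_head):
        sl = slice(i * d_k, (i + 1) * d_k)            # per-head slices Q_i, K_i, V_i
        scores = Q[:, sl] @ K[:, sl].T / np.sqrt(d_k)
        A = np.exp(scores - scores.max(axis=-1, keepdims=True))
        A /= A.sum(axis=-1, keepdims=True)            # softmax over key positions
        heads.append(A @ V[:, sl])                    # H_i, shape (T, d_k)
    return np.concatenate(heads, axis=-1) @ W_head    # (T, d_model)
```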
Our Models
Model building block
The encoder and decoder of the E2E models are constructed as stacks of the building blocks described in this section. For the models using LSTM-RNNs, we explore two structures. The first one, LSTM cuDNN, directly calls the Nvidia cuDNN library [33] for the LSTM implementation. We build every block by concatenating a cuDNN LSTM layer and a linear projection layer to reduce the model size, followed by layer normalization. Calling the Nvidia cuDNN implementation enables fast experimentation when comparing different models. The second structure, LSTM Custom, puts the layer normalization and projection layer inside the LSTM, as it was indicated in [8] that they are important for better RNN-T model training. Hence, we only use this structure for RNN-T, by customizing the LSTM function. The detailed formulations are in [15]. However, this slows down model training speed by 50%.
For the Transformer-AED models, we remove the position embedding [34] and use a VGG-like convolution module [35] to pre-process the speech features x_{1:T} before the Transformer blocks. The layer normalization is placed before the multi-head attention layer (Pre-LN), which makes the gradients well-behaved in the early stage of training.
Non-streaming models
We achieve non-streaming behavior in RNN-T by adding bidirectionality in the encoder. The encoder of this RNN-T is composed of multiple blocks of bi-directional LSTM cuDNN as described in Section 3.1. The prediction network is realized with multiple uni-directional blocks of LSTM cuDNN.
Similar to RNN-T, the non-streaming RNN-AED investigated in this study also uses multiple blocks of bidirectional LSTM cuDNN in the encoder and uni-directional LSTM cuDNN in the decoder. This decoder works together with a location-aware softmax attention [36]. No multi-task training or joint-decoding with CTC is used for RNN-AED.
Following [23], the Transformer-AED model uses multi-task training and joint CTC/attention decoding. The training objective function is

$$\mathcal{L} = -\alpha \log p_{ctc}(y|x_{1:T}) - (1 - \alpha) \log p_{att}(y|x_{1:T}) \qquad (6)$$
The log-likelihood of the next subword, log p(y_u | x_{1:t}, y_{1:u}), used in joint decoding is formulated as

$$\log p(y_u|x_{1:t}, y_{1:u}) = \log p_{ctc}(y_u|x_{1:t}, y_{1:u}) + \beta_1 \log p_{att}(y_u|x_{1:t}, y_{1:u}) \qquad (7)$$
In practice, we first use the attention model to select top-k candidates and then re-rank them with Eq. 7.
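A minimal sketch of this two-step joint decoding for a single next-subword decision; the scoring callables `log_p_att` and `log_p_ctc` are placeholders of ours, standing in for the attention decoder and CTC prefix scores.

```python
def pick_next_subword(candidates, log_p_ctc, log_p_att, beta1=0.3, top_k=10):
    """Select top-k next-subword candidates with the attention model, then
    re-rank the shortlist with the combined score of Eq. (7)."""
    shortlist = sorted(candidates, key=log_p_att, reverse=True)[:top_k]
    return max(shortlist, key=lambda y: log_p_ctc(y) + beta1 * log_p_att(y))
```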
Streaming models
The streaming RNN-T model has a uni-directional encoder. While we can directly incorporate a standard LSTM as the building block, with either LSTM cuDNN or LSTM Custom as described in Section 3.1, incorporating future context into the encoder structure can significantly improve ASR accuracy, as shown in [15]. However, different from [15], which explores future context frames together with the layer trajectory structure, in this study we propose to use only context modeling, in order to save model parameters. Future context is modelled using the simple equation below:

$$\zeta^l_t = \sum_{\delta=0}^{\tau} q^l_\delta \odot g^l_{t+\delta} \qquad (8)$$

Because ⊙ is an element-wise product, Eq. (8) increases the number of model parameters only very slightly. It transforms a lower-layer vector g^l_t together with its future vectors g^l_{t+δ} into a new vector ζ^l_t, where δ is the future frame index. We modify the blocks of LSTM cuDNN and LSTM Custom with this context modeling as follows (a minimal sketch is given after the two block descriptions below).
• LSTM cuDNN Context: the block is constructed with a Nvidia cuDNN LSTM layer, followed by a linear projection layer, then the context modeling layer, and finally a layer normalization layer.
• LSTM Custom Context: the block is constructed with the layer normalized LSTM layer with projection, and then followed by the context modeling layer.
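A minimal sketch of the context modeling layer of Eq. (8); the end-of-sequence padding and the array shapes are our assumptions.

```python
import numpy as np

def context_layer(g, q):
    """Context modeling layer (Eq. (8)): each output frame is an element-wise
    weighted sum of the current frame and its tau future frames.
    g: (T, D) lower-layer outputs; q: (tau+1, D) learned per-coordinate weights."""
    T, D = g.shape
    tau = q.shape[0] - 1
    g_pad = np.concatenate([g, np.repeat(g[-1:], tau, axis=0)], axis=0)  # pad the end
    zeta = np.zeros_like(g)
    for delta in range(tau + 1):
        zeta += q[delta] * g_pad[delta:delta + T]   # q_delta ⊙ g_{t+delta}
    return zeta
```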
A similar concept of context modeling was applied to RNNs in [37] as the lookahead convolution layer. However, it was only applied to the top layer of a multi-layer RNN. In contrast, in this study we apply context modeling to every block of LSTM cuDNN or LSTM Custom, and also investigate its effectiveness in the context of E2E modeling. For RNN-T, we also investigate initializing the encoder with either CTC [7] or CE training [38]. RNN-AED models use blocks of LSTM cuDNN Context as the encoder; experiments with LSTM Custom Context will be part of a future study. The streaming mechanism we have chosen for this study is Monotonic Chunkwise Attention (MoChA) [39]. MoChA consists of a monotonic attention mechanism [40] which scans the encoder output in a left-to-right order and selects a particular encoder state when it decides to trigger the decoder. The selection is made by sampling from a parameterized Bernoulli random variable. Once a trigger point is detected, MoChA also uses an additional lookback window and applies a regular softmax attention over it. Note that there is a sampling operation here, which precludes the use of standard backpropagation. Therefore, we train with respect to the expected values of the context vectors. Please refer to [39] for more details.
To enable the streaming scenario in Transformer-AED models, we borrow the idea of triggered attention (TA) [19], where the CTC conducts frame-synchronized decoding to select top-k candidates for each frame, and the attention model is then leveraged to jointly re-rank the candidates using Eq. (7) once a new subword is triggered by the CTC. Since the Transformer encoder is deeper than the LSTM, the lookahead method may not be the best solution. We compare the chunk-based method and the lookahead-based method. The former segments the entire input into several fixed-length chunks and then feeds them into the model chunk by chunk, while the latter is exactly the same as the method used in RNN-T and RNN-AED. For the chunk-based encoder, the decoder can see up to the end of a chunk. For the lookahead-based encoder, we set a fixed window size for the decoder.
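For illustration, the sketch below builds the kind of self-attention mask a chunk-based streaming encoder could use: each frame may attend to everything up to the end of its own chunk, plus optionally a number of following chunks, so the per-frame lookahead varies within a fixed range. This is a schematic of the idea, not the paper's implementation, and the chunk sizes are assumptions.

```python
import numpy as np

def chunk_attention_mask(T, chunk_size, extra_chunks=0):
    """Chunk-based streaming self-attention mask: query frame t may attend to all
    key positions up to the end of its own chunk plus `extra_chunks` more chunks."""
    mask = np.zeros((T, T), dtype=bool)
    for t in range(T):
        end = ((t // chunk_size) + 1 + extra_chunks) * chunk_size
        mask[t, :min(end, T)] = True   # allowed key positions for query frame t
    return mask
```

With `extra_chunks=1` and a 480 ms chunk, for example, each frame's lookahead would range between one and two chunks, matching the flavor of the [480 ms, 960 ms] latency range discussed below.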
Experiments
In this section, we evaluate the effectiveness of all models by training them with 65 thousand (K) hours of transcribed Microsoft data. The test sets cover 13 application scenarios such as Cortana and far-field speech, containing a total of 1.8 million (M) words. We report the word error rate (WER) averaged over all test scenarios. All the training and test data are anonymized with personally identifiable information removed.
For fair comparison, all E2E models built for this study have around 87 M parameters. The input features are 80-dimensional log Mel filter banks computed with a stride of 10 milliseconds (ms). Three consecutive frames are stacked together to form a 240-dimensional super-frame, which is fed to the encoder networks of RNN-T and RNN-AED, while Transformer-AED directly consumes the 10 ms features. All E2E models use the same 4K word-piece units as output targets.
Non-streaming E2E models
As described in Section 3.1, the non-streaming RNN-T model uses bi-directional LSTMs with the Nvidia cuDNN library in its encoder. The LSTM memory cell size is 780. The LSTM outputs from the forward and backward directions are concatenated, giving a total dimension of 1560, then linearly projected to dimension 780 and followed by a layer normalization layer. There are 6 such stacked blocks in total. The prediction network has 2 stacked blocks, each of which contains a uni-directional cuDNN LSTM with a memory cell size of 1280, followed by a linear projection layer that reduces the dimension to 640, and then a layer normalization layer.
The non-streaming RNN-AED model uses exactly the same encoder and decoder structures as the non-streaming RNN-T model. Similar to [30], a location-aware attention mechanism is used. In addition to the encoder and decoder hidden states, this mechanism also takes alignments from previous decoder step as inputs. The attention dimension is 512.
The Transformer-AED model has 18 Transformer blocks in the encoder and 6 Transformer blocks in the decoder. Before the Transformer blocks in the encoder, we use a 4-layer VGG network to pre-process the speech features with a total stride of 4. The number of attention heads is 8 and the attention dimension of each head is 64. The dimension of the feed-forward layer in the Transformer blocks is 2048. The combination weights for joint training and decoding (i.e., α and β_1) are both 0.3.
As shown in Table 1, the non-streaming AED models have a clear advantage over the non-streaming RNN-T model due to the power of attention modeling. Transformer-AED improves over RNN-AED by a 2.7% relative WER reduction.
Surpassing hybrid model with streaming E2E models
In [26] we reported results from our best hybrid model, the contextual layer trajectory LSTM (cltLSTM) [27]. The cltLSTM was trained with a three-stage optimization process. This model was able to obtain a 16.2% relative WER reduction over the CE baseline. Introducing 24 frames of total future context further yields an 18.7% relative WER reduction. The encoder latency is only 480 ms (24 * 20 ms = 480 ms; the stride per frame is 20 ms due to frame skipping [41]). Hence, this cltLSTM model (Table 2) presents a very challenging streaming hybrid model to beat. This model has 65 M parameters, and is decoded with a 5-gigabyte 5-gram decoding graph.
We list the results for all streaming E2E models in Table 2. The baseline RNN-T implementation uses unidirectional cuDNN LSTMs in both the encoder and the decoder. The encoder has 6 stacked blocks of LSTM cuDNN. Each block has a unidirectional cuDNN LSTM with 1280 memory cells, whose output is projected to 640 dimensions and followed by layer normalization. The prediction and joint networks are the same as in the non-streaming RNN-T model. This RNN-T model obtains a 12.16% test WER. The second RNN-T model inserts the context modeling layer (Eq. (8)) after the linear projection layer in each block. The context modeling uses 4 frames of lookahead at each block, and therefore the encoder has 4 * 6 = 24 frames of lookahead. Because the frame shift is 30 ms, the total encoder lookahead is 720 ms. The lookahead brings a large WER improvement, obtaining a 10.65% WER, a 12.4% relative WER reduction from the first RNN-T model without any lookahead. We also followed the lookahead convolution proposed in [37] by using 24 frames of lookahead only on the topmost RNN block. This model gives an 11.19% WER, showing that our proposed context modeling, which allocates lookahead frames equally at each block, is better than lookahead convolution [37], which simply puts all lookahead frames on the top layer.
Next, we look at the impact of encoder initialization for RNN-T. As shown in Table 2, CTC initialization of the RNN-T encoder does not help much, while CE initialization significantly reduces the WER to 9.80%, an 8.0% relative WER reduction from the randomly initialized model. CTC initialization makes the encoder emit token spikes together with many blanks, while CE initialization enables the encoder to learn time alignment. Given the gain with CE initialization, we believe the encoder of RNN-T functions more like the acoustic model in a hybrid system. Note that CE pre-training needs time alignments, which are hard to get for word-piece units, as many of them have no phonetic realization. However, the time alignment for words is still accurate. We therefore make an approximation and obtain alignments for word pieces by simply segmenting the duration of each word equally into its constituent word pieces, as sketched below.
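A minimal sketch of this equal-split approximation; the function name and the (start, end) time representation are illustrative assumptions.

```python
def wordpiece_alignments(word_start, word_end, n_pieces):
    """Approximate word-piece time alignments by splitting a word's duration
    into equal segments, one per constituent word piece."""
    step = (word_end - word_start) / n_pieces
    return [(word_start + i * step, word_start + (i + 1) * step)
            for i in range(n_pieces)]

# e.g. a word spanning 1.00s-1.30s split into 3 word pieces:
# approximately [(1.0, 1.1), (1.1, 1.2), (1.2, 1.3)]
```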
For the last RNN-T model, we put the projection layer and layer normalization inside the LSTM cell (Custom LSTM), and then insert the context modeling layer after it. Putting the projection layer inside allows us to use a larger number of memory cells while keeping a model size similar to the cuDNN LSTM setup. This LSTM has 2048 memory cells, and the projection layer reduces the output size to 640. This model finally gives a 9.27% WER, which is slightly better than our best hybrid model.
With the same encoder architecture as the cuDNN RNN-T, the MoChA-based streaming RNN-AED model gives impressive results. Unlike RNN-T, it does not need any initialization and is still able to slightly outperform it in an apples-to-apples comparison (9.61% vs. 9.80%). To the best of our knowledge, this is the first time a streaming RNN-AED has outperformed RNN-T on a large-scale task. Note that our previous study did not observe an accuracy improvement for RNN-AED with CE initialization [42]. We will investigate whether RNN-AED can also benefit from a customized LSTM function in a future study. The architecture of the streaming Transformer-AED model is the same as the non-streaming one. For the lookahead context-modeling method, each encoder block looks ahead 1 frame. Considering that the total stride of the VGG network is 4 and the frame shift is 10 ms, the encoder has 1 * 18 * 4 * 10 ms = 720 ms of latency. The decoder of the lookahead method introduces an extra 240 ms of latency. The chunk-based method considers future context within a fixed chunk. The latency of each frame is in the range [480 ms, 960 ms], resulting in a 720 ms average latency without extra decoder latency. The chunk-based method obtains a 9.16% WER, significantly outperforming the lookahead method, mainly because the bottom Transformer blocks of the lookahead approach cannot enjoy the full advantage provided by the right context.
Conclusions
This work presents the first large-scale comparative study of three popular E2E models (RNN-T, RNN-AED, and Transformer-AED). The models are compared in both streaming and non-streaming modes. All models are trained with 65K hours of Microsoft's internal anonymized data. We observe that with the same encoder structure, AED is better than RNN-T for both non-streaming and streaming models. With a customized LSTM and CE initialization for the encoder, the RNN-T model becomes better than RNN-AED. Among all models, Transformer-AED obtained the best WERs in both streaming and non-streaming modes.
In this study, both streaming RNN-T and Transformer-AED outperformed a highly-optimized hybrid model. Several significant factors contribute to this success. For streaming RNN-T, the proposed context modeling reduces the WER by 12.4% relative to the model without any lookahead. The CE initialization for RNN-T improves over the random initialization baseline by an 8.0% relative WER reduction, which shows that pretraining is helpful even on a large-scale task. To utilize future context for streaming Transformer-AED, we show that the chunk-based method is better than the lookahead method by 10.7% relative.
Table 1: Average WER of all non-streaming E2E models on 13 test sets containing 1.8 M words.

non-streaming models    WER
RNN-T (cuDNN)           9.25
RNN-AED (cuDNN)         8.05
Transformer-AED         7.83
Table 2: Average WERs of streaming models on 13 test sets containing 1.8 M words.

streaming models                 WER     encoder lookahead
hybrid
  cltLSTM                        9.34    480 ms
RNN-T
  cuDNN                          12.16   0 ms
  cuDNN+Context                  10.65   720 ms
  cuDNN+convolution [37]         11.19   720 ms
  cuDNN+Context+CTC init.        10.62   720 ms
  cuDNN+Context+CE init.         9.80    720 ms
  Custom+Context+CE init.        9.27    720 ms
RNN-AED
  cuDNN+Context                  9.61    720 ms
Transformer-AED
  Lookahead method               10.26   720 ms
  Chunk-based method             9.16    720 ms
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. G Hinton, L Deng, D Yu, G E Dahl, A Mohamed, N Jaitly, A Senior, V Vanhoucke, P Nguyen, T N Sainath, IEEE Signal Processing Magazine. 296G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. Y Miao, M Gowayyed, F Metze, Proc. ASRU. ASRUIEEEY. Miao, M. Gowayyed, and F. Metze, "EESEN: End-to-end speech recognition using deep RNN models and WFST-based de- coding," in Proc. ASRU. IEEE, 2015, pp. 167-174.
Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. W Chan, N Jaitly, Q Le, O Vinyals, Proc. ICASSP. IEEE. ICASSP. IEEEW. Chan, N. Jaitly, Q. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Proc. ICASSP. IEEE, 2016, pp. 4960- 4964.
A comparison of sequence-to-sequence models for speech recognition. R Prabhavalkar, K Rao, T N Sainath, B Li, L Johnson, N Jaitly, Proc. Interspeech. InterspeechR. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, "A comparison of sequence-to-sequence models for speech recognition," in Proc. Interspeech, 2017, pp. 939-943.
Exploring neural transducers for end-to-end speech recognition. E Battenberg, J Chen, R Child, A Coates, Y G Y Li, H Liu, S Satheesh, A Sriram, Z Zhu, Proc. ASRU. ASRUIEEEE. Battenberg, J. Chen, R. Child, A. Coates, Y. G. Y. Li, H. Liu, S. Satheesh, A. Sriram, and Z. Zhu, "Exploring neural transducers for end-to-end speech recognition," in Proc. ASRU. IEEE, 2017, pp. 206-213.
Stateof-the-art speech recognition with sequence-to-sequence models. C.-C Chiu, T N Sainath, Y Wu, R Prabhavalkar, P Nguyen, Z Chen, A Kannan, R J Weiss, K Rao, K Gonina, Proc. ICASSP. ICASSPC.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, K. Gonina et al., "State- of-the-art speech recognition with sequence-to-sequence models," in Proc. ICASSP, 2018.
Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer. K Rao, H Sak, R Prabhavalkar, Proc. ASRU. ASRUK. Rao, H. Sak, and R. Prabhavalkar, "Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer," in Proc. ASRU, 2017.
Streaming end-to-end speech recognition for mobile devices. Y He, T N Sainath, R Prabhavalkar, I Mcgraw, R Alvarez, D Zhao, D Rybach, A Kannan, Y Wu, R Pang, Proc. ICASSP. ICASSPY. He, T. N. Sainath, R. Prabhavalkar, I. McGraw, R. Alvarez, D. Zhao, D. Rybach, A. Kannan, Y. Wu, R. Pang et al., "Stream- ing end-to-end speech recognition for mobile devices," in Proc. ICASSP, 2019, pp. 6381-6385.
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. A Graves, S Fernández, F Gomez, J Schmidhuber, Proceedings of the 23rd international conference on Machine learning. the 23rd international conference on Machine learningACMA. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Con- nectionist temporal classification: labelling unsegmented se- quence data with recurrent neural networks," in Proceedings of the 23rd international conference on Machine learning. ACM, 2006, pp. 369-376.
Towards end-to-end speech recognition with recurrent neural networks. A Graves, N Jaitley, PMLRA. Graves and N. Jaitley, "Towards end-to-end speech recognition with recurrent neural networks," in PMLR, 2014, pp. 1764-1772.
Sequence transduction with recurrent neural networks. A Graves, abs/1211.3711CoRR. A. Graves, "Sequence transduction with recurrent neural net- works," CoRR, vol. abs/1211.3711, 2012.
Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, arXiv:1409.0473arXiv preprintD. Bahdanau, K. Cho, and Y. Bengio, "Neural machine trans- lation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
Attention-based models for speech recognition. J K Chorowski, D Bahdanau, D Serdyuk, K Cho, Y Bengio, NIPS. J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Ben- gio, "Attention-based models for speech recognition," in NIPS, 2015, pp. 577-585.
Two-pass end-to-end speech recognition. T Sainath, R Pang, Proc. Interspeech. InterspeechT. Sainath, R. Pang, and et. al., "Two-pass end-to-end speech recognition," in Proc. Interspeech, 2019.
Improving RNN transducer modeling for end-to-end speech recognition. J Li, R Zhao, H Hu, Y Gong, Proc. ASRU. ASRUJ. Li, R. Zhao, H. Hu, and Y. Gong, "Improving RNN transducer modeling for end-to-end speech recognition," in Proc. ASRU, 2019.
RNN-T for latency controlled ASR with improved beam search. M Jain, K Schubert, J Mahadeokar, arXiv:1911.01629arXiv preprintM. Jain, K. Schubert, J. Mahadeokar et al., "RNN-T for la- tency controlled ASR with improved beam search," arXiv preprint arXiv:1911.01629, 2019.
Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, arXiv:1409.0473arXiv preprintD. Bahdanau, K. Cho, and Y. Bengio, "Neural machine trans- lation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
Monotonic chunkwise attention. C.-C Chiu, C Raffel, arXiv:1712.05382arXiv preprintC.-C. Chiu and C. Raffel, "Monotonic chunkwise attention," arXiv preprint arXiv:1712.05382, 2017.
Triggered attention for endto-end speech recognition. N Moritz, T Hori, J Le Roux, Proc. ICASSP. ICASSPN. Moritz, T. Hori, and J. Le Roux, "Triggered attention for end- to-end speech recognition," in Proc. ICASSP, 2019, pp. 5666- 5670.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Ł Kaiser, I Polosukhin, Advances in Neural Information Processing Systems. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 6000-6010.
Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition. L Dong, S Xu, B Xu, Proc. ICASSP. ICASSPL. Dong, S. Xu, and B. Xu, "Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition," in Proc. ICASSP, 2018, pp. 5884-5888.
Syllable-based sequenceto-sequence speech recognition with the transformer in Mandarin Chinese. S Zhou, L Dong, S Xu, B Xu, Proc. Interspeech. InterspeechS. Zhou, L. Dong, S. Xu, and B. Xu, "Syllable-based sequence- to-sequence speech recognition with the transformer in Mandarin Chinese," in Proc. Interspeech, 2018.
A comparative study on transformer vs RNN in speech applications. S Karita, N Chen, T Hayashi, Proc. ASRU. ASRUS. Karita, N. Chen, T. Hayashi et al., "A comparative study on transformer vs RNN in speech applications," in Proc. ASRU, 2019.
A comparison of end-to-end models for long-form speech recognition. C.-C Chiu, W Han, Y Zhang, Proc. ASRU. ASRUC.-C. Chiu, W. Han, Y. Zhang et al., "A comparison of end-to-end models for long-form speech recognition," in Proc. ASRU, 2019.
A streaming on-device end-toend model surpassing server-side conventional model quality and latency. T N Sainath, Y He, B Li, Proc. ICASSP, 2020. ICASSP, 2020T. N. Sainath, Y. He, B. Li et al., "A streaming on-device end-to- end model surpassing server-side conventional model quality and latency," in Proc. ICASSP, 2020, pp. 6059-6063.
High-accuracy and low-latency speech recognition with twohead contextual layer trajectory LSTM model. J Li, R Zhao, E Sun, J H Wong, A Das, Z Meng, Y Gong, Proc. ICASSP. ICASSPJ. Li, R. Zhao, E. Sun, J. H. Wong, A. Das, Z. Meng, and Y. Gong, "High-accuracy and low-latency speech recognition with two- head contextual layer trajectory LSTM model," in Proc. ICASSP, 2020.
Improving layer trajectory LSTM with future context frames. J Li, L Lu, C Liu, Y Gong, Proc. ICASSP. ICASSPJ. Li, L. Lu, C. Liu, and Y. Gong, "Improving layer trajectory LSTM with future context frames," in Proc. ICASSP, 2019, pp. 6550-6554.
Streaming Transformer. C Wang, C. Wang, Streaming Transformer, 2020. [Online]. Available: https://github.com/cywang97/StreamingTransformer
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735-1780, 1997.
End-to-end attention-based large vocabulary speech recognition. D Bahdanau, J Chorowski, D Serdyuk, P Brakel, Y Bengio, Proc. ICASSP. IEEE. ICASSP. IEEED. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Ben- gio, "End-to-end attention-based large vocabulary speech recog- nition," in Proc. ICASSP. IEEE, 2016, pp. 4945-4949.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, arXiv:1512.03385arXiv preprintK. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
Layer normalization. J L Ba, J R Kiros, G E Hinton, arXiv:1607.06450arXiv preprintJ. L. Ba, J. R. Kiros, and G. E. Hinton, "Layer normalization," arXiv preprint arXiv:1607.06450, 2016.
cuDNN: Efficient primitives for deep learning. S Chetlur, C Woolley, P Vandermersch, J Cohen, J Tran, B Catanzaro, E Shelhamer, arXiv:1410.0759arXiv preprintS. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, "cuDNN: Efficient primitives for deep learning," arXiv preprint arXiv:1410.0759, 2014.
Semantic mask for transformer based end-to-end speech recognition. C Wang, Y Wu, Y Du, J Li, S Liu, L Lu, S Ren, G Ye, S Zhao, M Zhou, arXiv:1912.03010arXiv preprintC. Wang, Y. Wu, Y. Du, J. Li, S. Liu, L. Lu, S. Ren, G. Ye, S. Zhao, and M. Zhou, "Semantic mask for transformer based end-to-end speech recognition," arXiv preprint arXiv:1912.03010, 2019.
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, Proc. ICLR. ICLRK. Simonyan and A. Zisserman, "Very deep convolutional net- works for large-scale image recognition," in Proc. ICLR, 2015.
Attention-based models for speech recognition. J K Chorowski, D Bahdanau, D Serdyuk, K Cho, Y Bengio, Advances in Neural Information Processing Systems. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. GarnettCurran Associates, Inc28J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Ben- gio, "Attention-based models for speech recognition," in Ad- vances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds. Curran Associates, Inc., 2015, pp. 577-585. [Online]. Available: http://papers.nips.cc/paper/5847-attention-based-models-for-speech-recognition.pd
Lookahead convolution layer for unidirectional recurrent neural networks. C Wang, D Yogatama, A Coates, T Han, A Hannun, B Xiao, Proc. ICLR Workshop. ICLR WorkshopC. Wang, D. Yogatama, A. Coates, T. Han, A. Hannun, and B. Xiao, "Lookahead convolution layer for unidirectional recur- rent neural networks," in Proc. ICLR Workshop, 2016.
Exploring pretraining with alignments for RNN transducer based end-to-end speech recognition. H Hu, R Zhao, J Li, L Lu, Y Gong, Proc. ICASSP. ICASSPH. Hu, R. Zhao, J. Li, L. Lu, and Y. Gong, "Exploring pre- training with alignments for RNN transducer based end-to-end speech recognition," in Proc. ICASSP, 2020.
Monotonic chunkwise attention. C.-C Chiu*, C , International Conference on Learning Representations. C.-C. Chiu* and C. Raffel*, "Monotonic chunk- wise attention," in International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=Hko85plCW
Online and linear-time attention by enforcing monotonic alignments. C Raffel, D Eck, P Liu, R J Weiss, T Luong, Thirty-fourth International Conference on Machine Learning. C. Raffel, D. Eck, P. Liu, R. J. Weiss, and T. Luong, "Online and linear-time attention by enforcing monotonic alignments," in Thirty-fourth International Conference on Machine Learning, 2017.
Simplifying long short-term memory acoustic models for fast training and decoding. Y Miao, J Li, Y Wang, S Zhang, Y Gong, Proc. ICASSP. ICASSPY. Miao, J. Li, Y. Wang, S. Zhang, and Y. Gong, "Simplifying long short-term memory acoustic models for fast training and de- coding," in Proc. ICASSP, 2016.
Minimum latency training strategies for streaming sequence-to-sequence asr. H Inaguma, Y Gaur, L Lu, J Li, Y Gong, Proc. ICASSP. ICASSPH. Inaguma, Y. Gaur, L. Lu, J. Li, and Y. Gong, "Minimum la- tency training strategies for streaming sequence-to-sequence asr," in Proc. ICASSP, 2020.
| [
"https://github.com/cywang97/StreamingTransformer"
] |
[
"Parsing linearizations appreciate PoS tags -but some are fussy about errors",
"Parsing linearizations appreciate PoS tags -but some are fussy about errors"
] | [
"Alberto Muñoz-Ortiz alberto.munoz.ortiz@udc.es \nUniversidade da Coruña\nCITIC\nSpain\n",
"Mark Anderson andersonm8@caerdydd.ac.uk \nPrifysgol CaerdyddCaerdyddUnited Kingdom\n",
"David Vilares david.vilares@udc.es \nUniversidade da Coruña\nCITIC\nSpain\n",
"Carlos Gómez-Rodríguez carlos.gomez@udc.es \nUniversidade da Coruña\nCITIC\nSpain\n"
] | [
"Universidade da Coruña\nCITIC\nSpain",
"Prifysgol CaerdyddCaerdyddUnited Kingdom",
"Universidade da Coruña\nCITIC\nSpain",
"Universidade da Coruña\nCITIC\nSpain"
] | [
"Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing"
] | PoS tags, once taken for granted as a useful resource for syntactic parsing, have become more situational with the popularization of deep learning. Recent work on the impact of PoS tags on graph- and transition-based parsers suggests that they are only useful when tagging accuracy is prohibitively high, or in low-resource scenarios. However, such an analysis is lacking for the emerging sequence labeling parsing paradigm, where it is especially relevant as some models explicitly use PoS tags for encoding and decoding. We undertake a study and uncover some trends. Among them, PoS tags are generally more useful for sequence labeling parsers than for other paradigms, but the impact of their accuracy is highly encoding-dependent, with the PoS-based head-selection encoding being best only when both tagging accuracy and resource availability are high. | 10.48550/arxiv.2210.15219 | [
"https://www.aclanthology.org/2022.aacl-short.16.pdf"
] | 253,157,328 | 2210.15219 | d54636508c8d1265a4b1b6be0b5a1c2a746ec016 |
Parsing linearizations appreciate PoS tags -but some are fussy about errors
Short PapersCopyright Short PapersNovember 20-23, 2022
Alberto Muñoz-Ortiz alberto.munoz.ortiz@udc.es
Universidade da Coruña
CITIC
Spain
Mark Anderson andersonm8@caerdydd.ac.uk
Prifysgol CaerdyddCaerdyddUnited Kingdom
David Vilares david.vilares@udc.es
Universidade da Coruña
CITIC
Spain
Carlos Gómez-Rodríguez carlos.gomez@udc.es
Universidade da Coruña
CITIC
Spain
Parsing linearizations appreciate PoS tags -but some are fussy about errors
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing
the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language ProcessingShort Papers2November 20-23, 2022117
PoS tags, once taken for granted as a useful resource for syntactic parsing, have become more situational with the popularization of deep learning. Recent work on the impact of PoS tags on graph- and transition-based parsers suggests that they are only useful when tagging accuracy is prohibitively high, or in low-resource scenarios. However, such an analysis is lacking for the emerging sequence labeling parsing paradigm, where it is especially relevant as some models explicitly use PoS tags for encoding and decoding. We undertake a study and uncover some trends. Among them, PoS tags are generally more useful for sequence labeling parsers than for other paradigms, but the impact of their accuracy is highly encoding-dependent, with the PoS-based head-selection encoding being best only when both tagging accuracy and resource availability are high.
Introduction
PoS tags have long been considered a useful feature for parsers, especially prior to the prevalence of neural networks (Voutilainen, 1998;Dalrymple, 2006;Alfared and Béchet, 2012). For neural parsers, it is less clear if they are useful or not. Work has shown that when using word and character embeddings, PoS tags become much less useful (Ballesteros et al., 2015;de Lhoneux et al., 2017). However, Dozat et al. (2017) found using universal PoS (UPoS) tags to be somewhat helpful, but improvements are typically quite small (Smith et al., 2018). Similarly, for multi-task systems, small improvements have been observed for both UPoS and finer-grained tags (Zhang et al., 2020).
A limiting factor when using predicted PoS tags is the apparent need for very high accuracy from taggers (Anderson and Gómez-Rodríguez, 2020). This is particularly problematic in a low-resource setting, where using gold tags gives unreasonably high performance (Tiedemann, 2015) and high-accuracy taggers are difficult to obtain (Kann et al., 2020). However, some work has suggested that in a low-resource setting even low-accuracy taggers can be beneficial for parsing performance, especially when there are more PoS tag annotations than dependency tree annotations (Anderson et al., 2021).
These findings relate to transition-based (TB) and graph-based (GB) parsers, but recently several encodings have been proposed to frame dependency parsing as a sequence labeling task (Strzyz et al., 2019;Lacroix, 2019;Gómez-Rodríguez et al., 2020), providing an alternative to GB and TB models when efficiency is a priority (Anderson and Gómez-Rodríguez, 2021). Muñoz-Ortiz et al. (2021) found that the amount of data required for different encodings varied and that some were impacted by predicted PoS tag use more than others.
Here, we evaluate the impact of PoS tagging accuracy on different encodings and also the interplay of this potential relation and the amount of available data (using low-, mid-, high-, and very-high-resource treebanks). This is done by artificially controlling the accuracy of PoS taggers by using the nature of errors generated by robust taggers. 1
Sequence labeling parsing
In dependency parsing as sequence labeling, the goal is to assign a single label of the form (x_i, l_i) to every input token w_i of a sequence, where x_i encodes a subset of the arcs related to w_i and l_i is the dependency type. Below, we review the existing families of linearizations used in this work.

Head-selection (Spoustová and Spousta, 2010), where x_i encodes the head of w_i using an absolute index or a relative offset that can be based on some word property (usually PoS tags, which is also the property we use in this work due to its strong performance in previous work). For instance, x_i = (+n, X) indicates that the head of w_i is the nth word to the right of w_i with the word property X. Some desirable properties of this encoding family are a direct correspondence between words and arcs and the capacity to encode any non-projective tree. However, a major weakness is its dependency on the chosen property (in our case, PoS tags) to decode trees.

Bracketing-based, where x_i represents the dependency arcs using a string of brackets, with each arc represented by a bracket pair. Its main advantage is that it is independent of external features, but regarding projectivity it cannot represent arcs that cross in the same direction. To alleviate this, we use the encoding proposed by Strzyz et al. (2020), which adds a second independent plane of brackets (2p_b), inspired by multiplanarity (Yli-Jyrä, 2003).

Transition-based (Gómez-Rodríguez et al., 2020), where, given a sequence of transitions generated by a left-to-right transition-based parser, the sequence is split into labels based on read transitions (e.g. SHIFT), such that each word receives a label x_i with a subset of transition actions. For this work, we consider mappings from a projective algorithm, arc-hybrid (ah_tb; Kuhlmann et al., 2011), and a non-projective algorithm, Covington (c_tb; Covington, 2001).
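To make the head-selection family more concrete, the following is a minimal sketch of how a gold tree could be turned into relative PoS-based head-selection labels of the form (offset, PoS, relation). It is an illustration rather than the authors' released code, and the function name and input format (parallel lists of heads, tags and relations) are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code) of relative PoS-based head-selection labels.
# heads[i] is the 0-based index of the head of word i (-1 for the root),
# tags[i] its PoS tag and rels[i] its dependency relation.
def encode_head_selection(heads, tags, rels):
    labels = []
    for i, (h, rel) in enumerate(zip(heads, rels)):
        if h == -1:                      # root word: use a conventional placeholder label
            labels.append(("ROOT", None, rel))
            continue
        head_tag = tags[h]
        if h > i:                        # head to the right: count words tagged head_tag in (i, h]
            offset = sum(1 for j in range(i + 1, h + 1) if tags[j] == head_tag)
        else:                            # head to the left: count words tagged head_tag in [h, i)
            offset = -sum(1 for j in range(h, i) if tags[j] == head_tag)
        labels.append((offset, head_tag, rel))
    return labels

# Toy sentence "She bought apples": "bought" heads both "She" and "apples".
print(encode_head_selection([1, -1, 1],
                            ["PRON", "VERB", "NOUN"],
                            ["nsubj", "root", "obj"]))
# [(1, 'VERB', 'nsubj'), ('ROOT', None, 'root'), (-1, 'VERB', 'obj')]
```

Decoding simply reverses this counting, which is why a wrongly predicted PoS tag can directly point the decoder at the wrong head.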
Parser systems
We use a 2-layer bidirectional long short-term memory (biLSTM) network with a feed-forward network to predict the labels using softmaxes. We use hard-sharing multi-task learning to predict x_i and l_i. 2 The inputs to the network are randomly initialized word embeddings and LSTM character embeddings, and optionally (see §4) PoS tag embeddings. The appendix specifies the hyperparameters. For a homogeneous comparison against work on the usefulness of PoS tags for transition- and graph-based models, and with a focus on efficiency, we do not use large language models.
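As a rough illustration of this architecture (not the released implementation), the sketch below wires a 2-layer biLSTM over word and optional PoS-tag embeddings into two hard-shared feed-forward heads, one per task; the character-level embeddings are omitted for brevity and all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SeqLabelingParser(nn.Module):
    """Sketch of a biLSTM sequence-labeling parser with hard-shared multi-task heads:
    one head predicts the arc label x_i, the other the dependency type l_i (illustrative only)."""

    def __init__(self, n_words, n_tags, n_arc_labels, n_rel_labels,
                 word_dim=100, tag_dim=25, hidden=400, use_pos=True):
        super().__init__()
        self.use_pos = use_pos
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.tag_emb = nn.Embedding(n_tags, tag_dim) if use_pos else None
        in_dim = word_dim + (tag_dim if use_pos else 0)
        self.encoder = nn.LSTM(in_dim, hidden, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.arc_head = nn.Linear(2 * hidden, n_arc_labels)   # softmax head for x_i
        self.rel_head = nn.Linear(2 * hidden, n_rel_labels)   # softmax head for l_i

    def forward(self, word_ids, tag_ids=None):
        x = self.word_emb(word_ids)
        if self.use_pos:
            x = torch.cat([x, self.tag_emb(tag_ids)], dim=-1)
        h, _ = self.encoder(x)          # shared biLSTM states feed both heads
        return self.arc_head(h), self.rel_head(h)

# Training would sum one cross-entropy loss per task over these two outputs.
```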
Controlling PoS tag accuracy
We purposefully change the accuracy of the PoS tags in a treebank, effectively treating this accuracy as the independent variable in a controlled experiment and LAS as the dependent variable, i.e. LAS = f(Acc_PoS) where f is some function. Rather than randomly altering the gold label of PoS tags, we alter them based on the actual errors that PoS taggers make for a given treebank. This means PoS tags that are more likely to be incorrect for a given treebank will be more likely to be altered when changing the overall PoS accuracy of that treebank. We refer to this as the error rate for PoS tags. The incorrect label is also based on the most likely incorrect label for the PoS tag error for that treebank based on the incorrect labeling from the tagger. We refer to this as the error type, e.g.
NOUN→VERB.
We trained BiLSTM taggers for each of the treebanks to get the error rates for each PoS tag type and rate of each error type for each tag. Their generally high performances, even for the smaller treebanks, are shown in Table 5 in the Appendix.
From the errors of these taggers, we first need the estimated probability that a given PoS tag t is tagged erroneously:
p(error | t) = E_t / C_t    (1)
where E_t is the error count for tag t and C_t is the total count for tag t. Then we need the probability of applying an erroneous tag e to a ground-truth tag t:
p(e | t, error) = E_{t→e} / E_t    (2)
where E_{t→e} is the error count when labeling t as e. This estimated probability remains fixed, whereas p(error | t) is adjusted to vary the overall accuracy. We adjust these values by applying a weight, γ:
γ = E_A / E    (3)
where E is the global error count and E_A is the adjusted global error count such that the resulting tagging error is A. p(error | t) is then adjusted:
p(error | t) = γ E_t / C_t    (4)
It is possible that γE_t > C_t. When this occurs for tag t, we cap γE_t at C_t and then recalculate γ, removing the counts associated with this tag:
γ = (E_A − C_t) / (E − C_t)    (5)
This is then done iteratively for each tag where γE_t ≥ C_t until we obtain an error count for each tag such that the total error count reaches E_A. These are all derived and applied as such to the test set of treebanks, as this is where we evaluate the impact of PoS tag errors. To further echo the erroneous nature of these taggers, when E_A ≤ E only the subset of real errors is used when generating errors. When E_A > E this subset of real errors is maintained and subtracted such that:
p(error | t) = (γ − 1) E_t / (C_t − E_t)    (6)
and this is only applied to the tokens which were not erroneously tagged by the taggers. For every eligible token, based on its tag t, an error is generated based on p(error | t), and if an error is to be generated, the erroneous tag is selected based on the distribution over p(e | t, error). This is also applied to the training and dev sets, as it seems better to use predicted tags when training (Anderson and Gómez-Rodríguez, 2020). There are differences in the distribution of PoS tags and, as the algorithm is based on the test data, at times it isn't possible to get exactly E_A. We therefore allow a small variation of ±0.05 on E_A.
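A minimal sketch of the error-injection procedure described by Equations 1-4 is given below, assuming the tagger's per-tag error counts and confusion counts are available; it is illustrative only and, for brevity, does not implement the iterative recomputation of γ in Equation 5 nor the exact ±0.05 tolerance.

```python
import random
from collections import Counter

def inject_tag_errors(gold_tags, error_counts, confusions, target_error_rate, seed=0):
    """Illustrative sketch of controlled PoS error injection (not the released code).
    gold_tags:    list of gold tags for the tokens to corrupt.
    error_counts: Counter tag -> E_t, the reference tagger's error count for that tag.
    confusions:   dict tag -> {wrong_tag: E_{t->e}} from the reference tagger.
    target_error_rate: 1 - desired tagging accuracy."""
    rng = random.Random(seed)
    counts = Counter(gold_tags)                               # C_t
    E = sum(error_counts.values())                            # global error count
    E_A = target_error_rate * len(gold_tags)                  # adjusted global error count
    gamma = E_A / max(E, 1)                                   # Eq. (3)
    # p(error|t) = gamma * E_t / C_t (Eq. 4), capped at 1; a full implementation
    # would recompute gamma iteratively as in Eq. (5) whenever gamma * E_t > C_t.
    p_err = {t: min(1.0, gamma * error_counts.get(t, 0) / counts[t]) for t in counts}

    noisy = []
    for t in gold_tags:
        conf = confusions.get(t)
        if conf and rng.random() < p_err[t]:
            wrong, weights = zip(*conf.items())
            noisy.append(rng.choices(wrong, weights=weights)[0])   # p(e|t, error), Eq. (2)
        else:
            noisy.append(t)
    return noisy

tags = ["NOUN"] * 80 + ["VERB"] * 20
noisy = inject_tag_errors(tags, Counter({"NOUN": 6, "VERB": 4}),
                          {"NOUN": {"VERB": 5, "ADJ": 1}, "VERB": {"NOUN": 4}}, 0.15)
print(sum(a != b for a, b in zip(tags, noisy)) / len(tags))   # roughly the requested error rate
```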
We then selected a set of PoS tag accuracies to test a range of values (75, 80, 85, 90, 95, 97.5, 100). We included the 97.5% accuracy to evaluate the findings of Anderson and Gómez-Rodríguez (2020), where they observed a severe increase in performance between high-scoring taggers and gold tags; otherwise we use increments of 5%.
Experiments
We now present the experimental setup to determine how parsing scores evolve for the chosen linearizations when the tagging accuracy degrades. As evaluation metrics, we use Labeled (LAS) and Unlabeled Attachment Scores (UAS).
Data Treebanks from Table 1 were selected using a number of criteria. We chose treebanks that were all from different language families and therefore exhibit a range of linguistic behaviors. We also selected treebanks such that we used 4 low-resource, 4 mid-resource, 4 high-resource and 4 very high-resource treebanks. Within each of those categories, we also selected treebanks with slightly different amounts of data, so as to obtain an incremental range of treebank sizes across low, mid, high and very high boundaries. Moreover, we ensured the quality of the treebanks by selecting treebanks that were either manually annotated in the UD framework or manually checked after automatic conversions. When a treebank did not contain a development set, we re-split the data by collecting the data across the training and test data and split the full data such that 60% was allocated to the training set, 10% to the development set, and 30% to the test set.
Setup We train and test parsers on sets of predicted tags, as explained in §3. We consider two baselines: (i) parsers trained without PoS tags 3 (base-no-tags), and (ii) parsers trained with gold tags in a multi-task setup (base-mtl). We include results with a state-of-the-art graph-based parser (Dozat et al., 2017) in Table 3 for comparison.
Results
All treebanks Figure 1 shows the average LAS across all treebanks for the different linearizations, using PoS tags or not. The results suggest that even using low-accuracy tags is better than not using them. In detail, rp_h is the linearization that is affected the most by the quality of the PoS tags, as it relies directly on them in order to decode the tree, degrading from the 1st position when using gold tags to the last one when tags have an accuracy of 75%. On the other hand, 2p_b seems to be the most useful encoding for real-world situations, outperforming the other linearizations when no tags or tags with an accuracy under 95% are used, and performing on par with rp_h over that mark. Note that while Strzyz et al. (2019) chose rp_h as their best model for evaluation, the choice was biased by using English, a language with atypically high tagging accuracy.

Results for different resourced sets of treebanks Figure 2 shows the results for the low-resource, mid-resource, high-resource and very high-resource treebanks, respectively. Interestingly, we observe trends regarding the cutoff points (the points where a model surpasses another), depending on the quality of PoS tags and quantity of available data. In particular, the cutoff points between the parsers that use PoS tags and the base-no-tags models are found at higher tagging accuracies when the data resources are larger too. Also, the cutoff point between rp_h and 2p_b is at a lower PoS tagging accuracy when we have more data, although the results for the very high-resource treebanks break this trend. Finally, the low performance of the transition-based encodings is more pronounced for high-resource treebanks, with the exception of ah_tb for the very high-resource treebanks.
Discussion
The obtained results offer some valuable information about how PoS tag quality affects performance for different encodings and quantities of data. In most situations using PoS tags as features is better than not using them, in contrast with results for other parser architectures as described above.
In addition, the fewer resources, the harder it is for rp_h to beat brackets: cutoffs are at 97.5%, 95%, and 90% for low-, mid-, and high-resource treebanks, respectively. However, for very high-resource treebanks, the cutoff is back at 95%. Compounded with the low tagging accuracy expected in low-resource setups, this highlights that rp_h is less suited for them. 2p_b, which generally outperforms the other encodings below 90% tagging accuracy, is the best low-resource option.
The more resources available, the harder it is for the models using PoS tags to outperform base-no-tags, both for bracketing- and transition-based linearizations; i.e. the experiments suggest that the benefits provided by the PoS tags decline when more training data is available. For brackets, the cutoffs occur at <75%, 80%, 85% and 90% for the low-, mid-, high- and very high-resource sets, and for transition encodings, they are at <75% for the low-resource set and at ∼80% for the mid- and high-resource sets. For the very high-resource set, cutoff points are at 85% for c_tb and 90% for ah_tb.
Conclusion
We connected the impact that the quality of PoS tags and the quantity of available data have on several dependency parsing linearizations. We tested this by controlling PoS tagging performance on a range of UD treebanks, diverse in terms of both amount of resources and typology. The results showed that for sequence labeling parsing, which prioritizes efficiency, PoS tags are still welcome, contrary to more mature parsing paradigms such as transition-based and graph-based ones. The experiments also showed that parsing linearizations benefit from PoS tagging accuracy differently, and in particular linearizations that represent arcs as bracket strings are a better choice for most realistic scenarios.
A PoS tagging details
B Parsing hyperparameters
Figure 1: Average LAS across all treebanks against PoS tagging accuracies for different linearizations, compared to the no-tags baselines.
Figure 2: Average LAS for the (a) low-, (b) mid-, (c) high- and (d) very high-resource subsets of treebanks for different PoS tagging accuracies and linearizations, compared to the no-tags baselines.
Figure 3: Average UAS across all treebanks against PoS tagging accuracies for different linearizations, compared to the no-tags baselines.
Figure 4: Average UAS for the (a) low-, (b) mid-, (c) high- and (d) very high-resource subsets of treebanks for different PoS tagging accuracies and linearizations, compared to the no-tags baselines.
Figure 5: LAS against PoS tagging accuracies for different linearizations for the Ancient Greek Perseus treebank, compared to the no-tags baselines.
Figure 6: LAS against PoS tagging accuracies for different linearizations for the Armenian ArmTDP treebank, compared to the no-tags baselines.
Figure 7: LAS against PoS tagging accuracies for different linearizations for the Basque BDT treebank, compared to the no-tags baselines.
Figure 8: LAS against PoS tagging accuracies for different linearizations for the Bhojpuri BHTB treebank, compared to the no-tags baselines.
Figure 9: LAS against PoS tagging accuracies for different linearizations for the Bulgarian BTB treebank, compared to the no-tags baselines.
Figure 10: LAS against PoS tagging accuracies for different linearizations for the Estonian EDT treebank, compared to the no-tags baselines.
Figure 11: LAS against PoS tagging accuracies for different linearizations for the Guajajara TuDeT treebank, compared to the no-tags baselines.
Figure 12: LAS against PoS tagging accuracies for different linearizations for the Kiche IU treebank, compared to the no-tags baselines.
Figure 13: LAS against PoS tagging accuracies for different linearizations for the Korean Kaist treebank, compared to the no-tags baselines.
Figure 14: LAS against PoS tagging accuracies for different linearizations for the Ligurian GLT treebank, compared to the no-tags baselines.
Figure 15: LAS against PoS tagging accuracies for different linearizations for the Norwegian Bokmål treebank, compared to the no-tags baselines.
Figure 16: LAS against PoS tagging accuracies for different linearizations for the Persian PerDT treebank, compared to the no-tags baselines.
Figure 17: LAS against PoS tagging accuracies for different linearizations for the Vietnamese VTB treebank, compared to the no-tags baselines.
Figure 18: LAS against PoS tagging accuracies for different linearizations for the Skolt Sami Giellagas treebank, compared to the no-tags baselines.
Table 1: Details of the treebanks used in this work.
Table 2 shows the average LAS scores across all treebank setups for all encodings and tagging accuracies, together with both baselines. To better interpret the results and tendencies, we will also visualize the results in different figures. 4 Note that we don't include base-mtl as it performed very similarly to base-no-tags.

Table 2: Average LAS for different setups and PoS tag accuracies for the groups of treebanks studied. Within each treebank group, columns follow the order 2p_b, ah_tb, c_tb, rp_h.

Setup | Low-resource | Mid-resource | High-resource | V. high-resource | All
No PoS tags | 47.36 46.18 45.79 49.26 | 63.94 61.58 60.73 57.52 | 67.67 64.76 64.75 66.58 | 81.15 79.22 76.98 80.06 | 65.03 62.94 62.06 63.35
75 | 50.65 49.33 48.43 47.72 | 63.26 60.18 60.23 58.64 | 66.34 64.18 63.87 64.09 | 79.63 77.44 75.26 73.32 | 64.97 62.78 61.98 60.94
80 | 53.84 50.58 48.78 50.94 | 64.00 61.52 61.34 60.87 | 67.53 64.88 64.88 64.70 | 80.06 77.93 75.74 77.09 | 66.36 63.73 62.69 63.40
85 | 54.17 52.48 51.27 52.62 | 65.25 62.34 62.06 63.36 | 68.11 65.38 65.33 66.56 | 81.18 79.02 77.34 78.76 | 67.18 64.81 64.00 65.32
90 | 56.03 53.55 52.78 55.34 | 67.30 64.05 63.35 66.18 | 69.31 66.86 66.61 69.47 | 81.33 79.39 77.05 79.80 | 68.49 65.96 65.01 67.70
95 | 59.30 56.88 55.75 58.90 | 69.84 67.34 66.20 70.30 | 70.28 67.66 67.32 71.18 | 82.61 80.62 78.83 82.52 | 70.51 68.12 67.02 70.72
97.5 | 60.00 58.70 57.59 61.86 | 72.63 69.47 68.99 72.84 | 71.59 69.27 68.39 72.83 | 83.91 82.00 80.27 84.31 | 71.96 69.86 68.81 72.96
100 | 62.16 60.97 58.64 64.23 | 74.28 71.19 70.02 75.20 | 73.40 70.60 70.05 74.50 | 86.52 84.77 82.65 87.20 | 74.09 71.88 70.34 75.24
MTL | 47.78 46.83 45.60 48.08 | 64.15 62.15 60.68 63.17 | 67.97 64.94 65.26 67.47 | 81.52 79.46 76.85 80.95 | 65.35 63.34 62.10 64.92
Table 3: Average LAS for different setups and PoS tag accuracies for the groups of treebanks studied, using the graph-based parser.
References

Mark Anderson, Mathieu Dehouck, and Carlos Gómez-Rodríguez. 2021. A falta de pan, buenas son tortas: The efficacy of predicted UPOS tags for low resource UD parsing. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 78-83, Online. Association for Computational Linguistics.
Mark Anderson and Carlos Gómez-Rodríguez. 2020. On the frailty of universal POS tags for neural UD parsers. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 69-96, Online. Association for Computational Linguistics.
Mark Anderson and Carlos Gómez-Rodríguez. 2021. A modest Pareto optimisation analysis of dependency parsers in 2021. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 119-130, Online. Association for Computational Linguistics.
Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. arXiv preprint arXiv:1508.00657.
Michael A. Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th annual ACM southeast conference, volume 1. Citeseer.
Mary Dalrymple. 2006. How much can part-of-speech tagging help parsing? Natural Language Engineering, 12(4):373-389.
Miryam de Lhoneux, Yan Shao, Ali Basirat, Eliyahu Kiperwasser, Sara Stymne, Yoav Goldberg, and Joakim Nivre. 2017. From raw text to universal dependencies - look, no tags! In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 207-217.
Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30.
Carlos Gómez-Rodríguez, Michalina Strzyz, and David Vilares. 2020. A unifying theory of transition-based and sequence labeling parsing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3776-3793, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Katharina Kann, Ophélie Lacroix, and Anders Søgaard. 2020. Weakly supervised POS taggers perform poorly on truly low-resource languages. Proceedings of the AAAI Conference on Artificial Intelligence, 34(5):8066-8073.
Marco Kuhlmann, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 673-682, Portland, Oregon, USA. Association for Computational Linguistics.
Ophélie Lacroix. 2019. Dependency parsing as sequence labeling with head-based encoding and multi-task learning. In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 136-143, Paris, France. Association for Computational Linguistics.
Alberto Muñoz-Ortiz, Michalina Strzyz, and David Vilares. 2021. Not all linearizations are equally data-hungry in sequence labeling parsing. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 978-988, Held Online. INCOMA Ltd.
Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018. An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing. arXiv preprint arXiv:1808.09060.
Drahomíra Johanka Spoustová and Miroslav Spousta. 2010. Dependency parsing as a sequence labeling task. The Prague Bulletin of Mathematical Linguistics, 94(2010):7-14.
Michalina Strzyz, David Vilares, and Carlos Gómez-Rodríguez. 2019. Viable dependency parsing as sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 717-723, Minneapolis, Minnesota. Association for Computational Linguistics.
Michalina Strzyz, David Vilares, and Carlos Gómez-Rodríguez. 2020. Bracketing encodings for 2-planar dependency parsing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2472-2484, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Jörg Tiedemann. 2015. Cross-lingual dependency parsing with universal dependencies and predicted pos labels. Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015), pages 340-349.
Atro Voutilainen. 1998. Does tagging help parsing?: a case study on finite state parsing. In Proceedings of the International Workshop on Finite State Methods in Natural Language Processing, pages 25-36. Association for Computational Linguistics.
Anssi Mikael Yli-Jyrä. 2003. Multiplanarity - a model for dependency structures in treebanks. In TLT 2003, Proceedings of the Second Workshop on Treebanks and Linguistic Theories. Växjö University Press.
Yu Zhang, Zhenghua Li, Houquan Zhou, and Min Zhang. 2020. Is POS tagging necessary or even helpful for neural dependency parsing? arXiv preprint arXiv:2003.03204.
Table 4 details the hyperparameters used to train the taggers in this work. Meanwhile, Table 5 shows the performance of the taggers that we initially used to draw the error distributions and propose PoS tags with different levels of accuracy.

Table 4: Hyperparameters used for the taggers.
Hyperparameter | Value
Word embedding dimensions | 100
Character embedding in | 32
Character embedding out | 100
Embedding dropout | 0.33
biLSTM layers | 3
biLSTM nodes | 400
biLSTM dropout | 0.33
MLP dimensions | 512
MLP layers | 1
Epochs | 200
Patience | 10
Training batch size | 32
Learning rate | 0.002
β1, β2 | 0.9, 0.9
ϵ | 1 × 10^−12
Decay | 0.75
Table 5: Accuracy on test sets of biLSTM taggers trained for each treebank, from which each error distribution was deduced and used to control accuracy for each treebank in experiments.
Table 6 details the hyperparameters used to train all the sequence labeling parsers evaluated in this work.

Table 6: Hyperparameters used for the sequence labeling parsers.
C Additional results
Figures 3 and 4 show the UAS results complementing the LAS results reported in §4 (in Figures 1 and 2, respectively). Figures 5 to 20 show the LAS results for each treebank.
All source code available at https://www.grupolys.org/software/aacl2022/.
We use a 2-task setup for all encodings, except 2p_b, for which we use 3 tasks, as each plane is predicted independently.
Forced setup for rp_h, as PoS tags are needed to decode. 4 UAS results are shown in Figures 3 and 4 in the Appendix.
Acknowledgements
Mark was supported by a UKRI Future Leaders Fellowship (MR/T042001/1). This paper has received funding from ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), and Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union (ERDF - Galicia 2014-2020 Program), by grant ED431G 2019/01.
Ramadan Alfared and Denis Béchet. 2012. POS taggers and dependency parsing. International Journal of Computational Linguistics and Applications, 3(2):107-122.
Mark Anderson, Mathieu Dehouck, and Carlos Gómez-Rodríguez. 2021. A falta de pan, buenas son tortas: The efficacy of predicted UPOS tags for low resource UD parsing. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021). doi:10.18653/v1/2021.iwpt-1.8.
| [] |
[
"A Hierarchical Neural Autoencoder for Paragraphs and Documents",
"A Hierarchical Neural Autoencoder for Paragraphs and Documents"
] | [
"Jiwei Li jiweil@stanford.edu \nComputer Science Department\nStanford University\n94305StanfordCAUSA\n",
"Minh-Thang Luong \nComputer Science Department\nStanford University\n94305StanfordCAUSA\n",
"Dan Jurafsky jurafsky@stanford.edu \nComputer Science Department\nStanford University\n94305StanfordCAUSA\n"
] | [
"Computer Science Department\nStanford University\n94305StanfordCAUSA",
"Computer Science Department\nStanford University\n94305StanfordCAUSA",
"Computer Science Department\nStanford University\n94305StanfordCAUSA"
] | [
"Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing"
] | Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent network models. In this paper, we explore an important step toward this generation task: training an LSTM (Long short-term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization 1 . | 10.3115/v1/p15-1107 | [
"https://www.aclweb.org/anthology/P15-1107.pdf"
] | 207,468 | 1506.01057 | 3d376896a9aa01a71455880e033a27633d88bd6d |
A Hierarchical Neural Autoencoder for Paragraphs and Documents
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 26-31, 2015. 2015
Jiwei Li jiweil@stanford.edu
Computer Science Department
Stanford University
94305StanfordCAUSA
Minh-Thang Luong
Computer Science Department
Stanford University
94305StanfordCAUSA
Dan Jurafsky jurafsky@stanford.edu
Computer Science Department
Stanford University
94305StanfordCAUSA
A Hierarchical Neural Autoencoder for Paragraphs and Documents
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaAssociation for Computational LinguisticsJuly 26-31, 2015. 2015
Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent network models. In this paper, we explore an important step toward this generation task: training an LSTM (Long short-term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization 1 .
Introduction
Generating coherent text is a central task in natural language processing. A wide variety of theories exist for representing relationships between text units, such as Rhetorical Structure Theory (Mann and Thompson, 1988) or Discourse Representation Theory (Lascarides and Asher, 1991), for extracting these relations from text units (Marcu, 2000; LeThanh et al., 2004; Hernault et al., 2010; Feng and Hirst, 2012, inter alia), and for extracting other coherence properties characterizing the role each text unit plays with others in a discourse (Barzilay and Lapata, 2008; Barzilay and Lee, 2004; Elsner and Charniak, 2008; Li and Hovy, 2014, inter alia). [Footnote 1: Code for models described in this paper are available at www.stanford.edu/~jiweil/.]
However, applying these to text generation remains difficult. To understand how discourse units are connected, one has to understand the communicative function of each unit, and the role it plays within the context that encapsulates it, recursively all the way up for the entire text. Identifying increasingly sophisticated human-developed features may be insufficient for capturing these patterns. But developing neural-based alternatives has also been difficult. Although neural representations for sentences can capture aspects of coherent sentence structure (Ji and Eisenstein, 2014), it's not clear how they could help in generating more broadly coherent text.
Recent LSTM models (Hochreiter and Schmidhuber, 1997) have shown powerful results on generating meaningful and grammatical sentences in sequence generation tasks like machine translation (Bahdanau et al., 2014; Luong et al., 2015) or parsing. This performance is at least partially attributable to the ability of these systems to capture local compositionality: the way neighboring words are combined semantically and syntactically to form meanings that they wish to express.
Could these models be extended to deal with generation of larger structures like paragraphs or even entire documents? In standard sequence-to-sequence generation tasks, an input sequence is mapped to a vector embedding that represents the sequence, and then to an output string of words. Multi-text generation tasks like summarization could work in a similar way: the system reads a collection of input sentences, and is then asked to generate meaningful texts with certain properties (such as, for summarization, being succinct and conclusive). Just as the local semantic and syntactic compositionality of words can be captured by LSTM models, can the compositionality of discourse relations of higher-level text units (e.g., clauses, sentences, paragraphs, and documents) be captured in a similar way, with clues about how text units connect with one another stored in the neural compositional matrices?
In this paper we explore a first step toward this task of neural natural language generation. We focus on the component task of training a paragraph (document)-to-paragraph (document) autoencoder to reconstruct the input text sequence from a compressed vector representation from a deep learning model. We develop hierarchical LSTM models that arrange tokens, sentences and paragraphs in a hierarchical structure, with different levels of LSTMs capturing compositionality at the token-to-token and sentence-to-sentence levels.
We offer in the following section a brief description of sequence-to-sequence LSTM models. The proposed hierarchical LSTM models are then described in Section 3, followed by experimental results in Section 4, and then a brief conclusion.
Long-Short Term Memory (LSTM)
In this section we give a quick overview of LSTM models. LSTM models (Hochreiter and Schmidhuber, 1997) are defined as follows: given a sequence of inputs X = {x_1, x_2, ..., x_{n_X}}, an LSTM associates each timestep with an input, memory and output gate, respectively denoted as i_t, f_t and o_t. For notation, we disambiguate e and h: e_t denotes the vector for an individual text unit (e.g., word or sentence) at time step t, while h_t denotes the vector computed by the LSTM model at time t by combining e_t and h_{t−1}. σ denotes the sigmoid function. The vector representation h_t for each time-step t is given by:
[i_t; f_t; o_t; l_t] = [σ; σ; σ; tanh] W · [h_{t−1}; e_t]    (1)
c_t = f_t · c_{t−1} + i_t · l_t    (2)
h_t = o_t · c_t    (3)
where W ∈ R^{4K×2K}. In sequence-to-sequence generation tasks, each input X is paired with a sequence of outputs to predict: Y = {y_1, y_2, ..., y_{n_Y}}. An LSTM defines a distribution over outputs and sequentially predicts tokens using a softmax function:

P(Y|X) = ∏_{t∈[1,n_Y]} p(y_t | x_1, x_2, ..., x_t, y_1, y_2, ..., y_{t−1}) = ∏_{t∈[1,n_Y]} exp(f(h_{t−1}, e_{y_t})) / Σ_{y'} exp(f(h_{t−1}, e_{y'}))    (4)

f(h_{t−1}, e_{y_t}) denotes the activation function between h_{t−1} and e_{y_t}, where h_{t−1} is the representation output from the LSTM at time t−1. Note that each sentence ends with a special end-of-sentence symbol <end>. Commonly, the input and output use two different LSTMs with different sets of parameters for capturing different compositional patterns.
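For concreteness, a small NumPy sketch of the LSTM update in Equations 1-3 is shown below; it uses toy random parameters, follows the equations as written (including h_t = o_t · c_t without an extra nonlinearity on c_t), and is not the authors' implementation.

```python
import numpy as np

def lstm_step(W, h_prev, c_prev, e_t):
    """One step of Eqs. (1)-(3): W has shape (4K, 2K); h_prev, c_prev, e_t have shape (K,).
    Minimal NumPy sketch for illustration, not an optimized implementation."""
    K = h_prev.shape[0]
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = W @ np.concatenate([h_prev, e_t])                 # pre-activations for the four gates
    i_t, f_t, o_t = sigmoid(z[:K]), sigmoid(z[K:2 * K]), sigmoid(z[2 * K:3 * K])
    l_t = np.tanh(z[3 * K:])
    c_t = f_t * c_prev + i_t * l_t                        # Eq. (2)
    h_t = o_t * c_t                                       # Eq. (3), as written in the text
    return h_t, c_t

K = 4
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(4 * K, 2 * K))                 # toy parameters
h, c = np.zeros(K), np.zeros(K)
for e_t in rng.normal(size=(3, K)):                       # a toy 3-token input sequence
    h, c = lstm_step(W, h, c, e_t)
print(h)
```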
In the decoding procedure, the algorithm terminates when an <end> token is predicted. At each timestep, either a greedy approach or beam search can be adopted for word prediction. Greedy search selects the token with the largest conditional probability, the embedding of which is then combined with the preceding output for next-step token prediction. For beam search, prior work has found that a beam size of 2 suffices to provide most of the benefits of beam search.
Paragraph Autoencoder
In this section, we introduce our proposed hierarchical LSTM model for the autoencoder.
Notation
Let D denote a paragraph or a document, which is comprised of a sequence of N_D sentences, D = {s_1, s_2, ..., s_{N_D}, end_D}. An additional "end_D" token is appended to each document. Each sentence s is comprised of a sequence of tokens s = {w_1, w_2, ..., w_{N_s}}, where N_s denotes the length of the sentence, each sentence ending with an "end_s" token. The word w is associated with a K-dimensional embedding e_w, e_w = {e_w^1, e_w^2, ..., e_w^K}. Let V denote vocabulary size. Each sentence s is associated with a K-dimensional representation e_s.
An autoencoder is a neural model where output units are directly connected with or identical to input units. Typically, inputs are compressed into a representation using neural models (encoding), which is then used to reconstruct it back (decoding). For a paragraph autoencoder, both the input X and output Y are the same document D. The autoencoder first compresses D into a vector representation e D and then reconstructs D based on e D .
For simplicity, we define LSTM(h_{t−1}, e_t) to be the LSTM operation on vectors h_{t−1} and e_t to achieve h_t as in Eqs. 1 and 2. For clarification, we first describe the following notations used in the encoder and decoder:

• h_t^w and h_t^s denote hidden vectors from the LSTM models, the subscripts of which indicate timestep t and the superscripts of which indicate operations at the word level (w) or sequence level (s). h_t^s(enc) specifies the encoding stage and h_t^s(dec) specifies the decoding stage.

• e_t^w and e_t^s denote the word-level and sentence-level embeddings for the word and sentence at position t in terms of its residing sentence or document.
Model 1: Standard LSTM
The whole input and output are treated as one sequence of tokens. Following Bahdanau et al. (2014), we trained an autoencoder that first maps input documents into vector representations with an LSTM_encode and then reconstructs inputs by predicting tokens within the document sequentially from an LSTM_decode. Two separate LSTMs are implemented for encoding and decoding, with no sentence structures considered. An illustration is shown in Figure 1.
Model 2: Hierarchical LSTM
The hierarchical model draws on the intuition that just as the juxtaposition of words creates a joint meaning of a sentence, the juxtaposition of sentences also creates a joint meaning of a paragraph or a document.
Encoder We first obtain representation vectors at the sentence level by putting one layer of LSTM (denoted as LSTM_encode^word) on top of its containing words:

h_t^w(enc) = LSTM_encode^word(e_t^w, h_{t−1}^w(enc))    (5)

The vector output at the ending time-step is used to represent the entire sentence as

e_s = h_{end_s}^w

To build representation e_D for the current document/paragraph D, another layer of LSTM (denoted as LSTM_encode^sentence) is placed on top of all sentences, computing representations sequentially for each timestep:

h_t^s(enc) = LSTM_encode^sentence(e_t^s, h_{t−1}^s(enc))    (6)

The representation computed at the final time step is used to represent the entire document: e_D = h_{end_D}^s. Thus one LSTM operates at the token level, leading to the acquisition of sentence-level representations that are then used as inputs into the second LSTM that acquires document-level representations, in a hierarchical structure.
Decoder As with encoding, the decoding algorithm operates on a hierarchical structure with two layers of LSTMs. LSTM outputs at the sentence level for time step t are obtained by:

h_t^s(dec) = LSTM_decode^sentence(e_t^s, h_{t−1}^s(dec))    (7)

The initial time step h_0^s(dec) = e_D, the end-to-end output from the encoding procedure. h_t^s(dec) is used as the original input into LSTM_decode^word for subsequently predicting tokens within sentence t + 1. LSTM_decode^word predicts tokens at each position sequentially, the embedding of which is then combined with earlier hidden vectors for the next timestep prediction until the end_s token is predicted. The procedure can be summarized as follows:

h_t^w(dec) = LSTM_decode^word(e_t^w, h_{t−1}^w(dec))    (8)
p(w|·) = softmax(e_w, h_{t−1}^w(dec))    (9)

During decoding, LSTM_decode^word generates each word token w sequentially and combines it with earlier LSTM-outputted hidden vectors. The LSTM hidden vector computed at the final time step is used to represent the current sentence. This is passed to LSTM_decode^sentence, combined with h_t^s for the acquisition of h_{t+1}^s, and outputted to the next time step in sentence decoding.

For each timestep t, LSTM_decode^sentence has to first decide whether decoding should proceed or come to a full stop: we add an additional token end_D to the vocabulary. Decoding terminates when token end_D is predicted. Details are shown in Figure 2.
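The two-level greedy decoding loop can be sketched as follows with toy stand-in recurrent steps replacing the actual LSTMs; everything here (vocabulary, parameters, the stopping test for end_D) is a simplified assumption meant only to show the control flow.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<end_s>", "the", "hotel", "was", "great"]
K = 8

def step(W, h, x):                        # toy recurrent step standing in for an LSTM update
    return np.tanh(W @ np.concatenate([h, x]))

# Toy parameters standing in for LSTM_decode^sentence, LSTM_decode^word and the softmax layer.
W_sent = 0.1 * rng.normal(size=(K, 2 * K))
W_word = 0.1 * rng.normal(size=(K, 2 * K))
W_out = 0.1 * rng.normal(size=(len(VOCAB), K))
E_word = 0.1 * rng.normal(size=(len(VOCAB), K))
stop_vec = rng.normal(size=K)             # stand-in detector for the end_D token

def decode(e_D, max_sents=3, max_words=6):
    """Greedy two-level decoding as described above: a sentence-level step provides the context
    for a word-level loop; the final word-level hidden state is fed back to the sentence level."""
    doc, h_s, prev_sent = [], e_D, np.zeros(K)
    for _ in range(max_sents):
        h_s = step(W_sent, h_s, prev_sent)
        if h_s @ stop_vec > 2.0:          # stand-in for predicting end_D
            break
        sent, h_w, prev_word = [], h_s, np.zeros(K)
        for _ in range(max_words):
            h_w = step(W_word, h_w, prev_word)
            w = int(np.argmax(W_out @ h_w))          # greedy word choice
            if VOCAB[w] == "<end_s>":
                break
            sent.append(VOCAB[w])
            prev_word = E_word[w]
        doc.append(sent)
        prev_sent = h_w                   # last word-level hidden vector summarizes the sentence
    return doc

print(decode(rng.normal(size=K)))
```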
Model 3: Hierarchical LSTM with Attention
Attention models adopt a look-back strategy by linking the current decoding stage with input sentences in an attempt to consider which part of the input is most responsible for the current decoding state. This attention version of hierarchical model is inspired by similar work in image caption generation and machine translation (Xu et al., 2015;Bahdanau et al., 2014).
Let H = {h_1^s(e), h_2^s(e), ..., h_N^s(e)} be the collection of sentence-level hidden vectors for each sentence from the inputs, outputted from LSTM_encode^sentence. Each element in H contains information about the input sequences with a strong focus on the parts surrounding each specific sentence (time-step). During decoding, suppose that e_t^s denotes the sentence-level embedding at the current step and that h_{t−1}^s(dec) denotes the hidden vector outputted from LSTM_decode^sentence at the previous time step t−1. Attention models would first link the current-step decoding information, i.e., h_{t−1}^s(dec) outputted from LSTM_decode^sentence, with each of the input sentences i ∈ [1, N], characterized by a strength indicator v_i:

v_i = U^T f(W_1 · h_{t−1}^s(dec) + W_2 · h_i^s(enc))    (10)

where W_1, W_2 ∈ R^{K×K} and U ∈ R^{K×1}. v_i is then normalized:

a_i = exp(v_i) / Σ_{i'} exp(v_{i'})    (11)

The attention vector is then created by averaging weights over all input sentences:

m_t = Σ_{i∈[1,N_D]} a_i h_i^s(enc)    (12)

The LSTM hidden vector for the current step is then achieved by combining m_t, e_t^s and h_{t−1}^s(dec):

[i_t; f_t; o_t; l_t] = [σ; σ; σ; tanh] W · [h_{t−1}^s(dec); e_t^s; m_t]    (13)
c_t = f_t · c_{t−1} + i_t · l_t    (14)
h_t^s = o_t · c_t    (15)

where W ∈ R^{4K×3K}. h_t^s is then used for word prediction as in the vanilla version of the hierarchical model.
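A NumPy sketch of the sentence-level attention in Equations 10-12 is given below; the activation f is not specified in this excerpt, so tanh is assumed, and all shapes and parameter values are illustrative.

```python
import numpy as np

def sentence_attention(H_enc, h_dec_prev, W1, W2, U):
    """Sentence-level attention as in Eqs. (10)-(12): score every encoder sentence vector
    against the previous decoder state, normalize, and take a weighted average.
    NumPy sketch for illustration only; f is assumed to be tanh."""
    v = np.array([U @ np.tanh(W1 @ h_dec_prev + W2 @ h_i) for h_i in H_enc])  # Eq. (10)
    a = np.exp(v - v.max())
    a = a / a.sum()                                                            # Eq. (11)
    m_t = (a[:, None] * H_enc).sum(axis=0)                                     # Eq. (12)
    return m_t, a

K, N = 6, 4
rng = np.random.default_rng(1)
m, a = sentence_attention(rng.normal(size=(N, K)), rng.normal(size=K),
                          rng.normal(size=(K, K)), rng.normal(size=(K, K)), rng.normal(size=K))
print(m.shape, a.round(2))   # attention weights sum to 1 over the N input sentences
```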
Training and Testing
Parameters are estimated by maximizing the likelihood of outputs given inputs, similar to standard sequence-to-sequence models. A softmax function is adopted for predicting each token within output documents, the error of which is first back-propagated through LSTM_decode^word to sentences, then through LSTM_decode^sentence to the document representation e_D, and last through LSTM_encode^sentence and LSTM_encode^word to the inputs. Stochastic gradient descent with minibatches is adopted. For testing, we adopt a greedy strategy with no beam search. For a given document D, e_D is first obtained given the already learned LSTM_encode parameters and word embeddings. Then in decoding, LSTM_decode^sentence computes embeddings at each sentence-level time-step, which are first fed into the binary classifier to decide whether sentence decoding terminates and then into LSTM_decode^word for word decoding.
Experiments
Dataset
We implement the proposed autoencoder on two datasets, a highly domain-specific dataset consisting of hotel reviews and a general dataset extracted from Wikipedia.
Hotel Reviews
We use a subset of hotel reviews crawled from TripAdvisor. We consider only reviews consisting of sentences ranging from 50 to 250 words; the model has problems dealing with extremely long sentences, as we will discuss later. We keep a vocabulary set consisting of the 25,000 most frequent words. A special "<unk>" token is used to denote all the remaining less frequent tokens. Reviews that consist of more than 2 percent unknown words are discarded. Our training dataset is comprised of roughly 340,000 reviews; the testing set is comprised of 40,000 reviews. Dataset details are shown in Table 1.
Wikipedia We extracted paragraphs from the Wikipedia corpus that meet the aforementioned length requirements. We keep a top-frequency vocabulary list of 120,000 words. Paragraphs with more than 4 percent unknown words are discarded. The training dataset is comprised of roughly 500,000 paragraphs and testing contains roughly 50,000.
Training Details and Implementation
Previous research has shown that deep LSTMs work better than shallow ones for sequence-to-sequence tasks. We adopt an LSTM structure with four layers for encoding and four layers for decoding, each of which is comprised of a different set of parameters. Each LSTM layer consists of 1,000 hidden neurons and the dimensionality of word embeddings is set to 1,000. Other training details are given below, some of which follow prior work.
• Input documents are reversed.
• LSTM parameters and word embeddings are initialized from a uniform distribution between [-0.08, 0.08]. • Stochastic gradient descent is implemented without momentum using a fixed learning rate of 0.1. We started halving the learning rate every half epoch after 5 epochs. We trained our models for a total of 7 epochs. • Batch size is set to 32 (32 documents).
• Decoding algorithm allows generating at most 1.5 times the number of words in inputs. • 0.2 dropout rate.
• Gradient clipping is adopted by scaling gradients when the norm exceeds a threshold of 5 (a small sketch of the schedule and clipping is given after this list). Our implementation on a single GPU 2 processes at a speed of approximately 600-1,200 tokens per second. We trained our models for a total of 7 iterations.
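The learning-rate schedule and gradient clipping from the bullets above can be sketched as small helpers; these are illustrative stand-ins rather than the authors' training code, and assume NumPy-array gradients.

```python
import numpy as np

def learning_rate(base_lr, epoch, half_epochs_after_5):
    """LR schedule sketch matching the bullets above: fixed for the first 5 epochs,
    then halved every half epoch (illustrative, not the authors' code)."""
    return base_lr if epoch < 5 else base_lr * (0.5 ** half_epochs_after_5)

def clip_gradients(grads, max_norm=5.0):
    """Rescale gradients when their global L2 norm exceeds the threshold (here 5)."""
    norm = float(np.sqrt(sum(float((g ** 2).sum()) for g in grads)))
    return [g * (max_norm / norm) for g in grads] if norm > max_norm else grads

print(learning_rate(0.1, 6, 2))                    # 0.025
print(clip_gradients([np.ones(100)], 5.0)[0][:3])  # scaled so the global norm is 5
```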
Evaluations
We need to measure the closeness of the output (candidate) to the input (reference). We first adopt two standard evaluation metrics, ROUGE (Lin, 2004;Lin and Hovy, 2003) and BLEU (Papineni et al., 2002).
ROUGE is a recall-oriented measure widely used in the summarization literature. It measures the n-gram recall between the candidate text and the reference text(s). In this work, we only have one reference document (the input document) and the ROUGE score is therefore given by:

ROUGE_n = Σ_{gram_n ∈ input} count_match(gram_n) / Σ_{gram_n ∈ input} count(gram_n)    (16)

where count_match denotes the number of n-grams co-occurring in the input and output. We report ROUGE-1, 2 and W (based on weighted longest common subsequence).
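A minimal sketch of this single-reference n-gram recall is shown below (illustrative, unsmoothed, with clipped match counts; not an official ROUGE implementation).

```python
from collections import Counter

def rouge_n_recall(reference_tokens, candidate_tokens, n=2):
    """n-gram recall against a single reference, as in the formula above
    (matched n-gram count divided by the reference n-gram count)."""
    ngrams = lambda toks: Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    ref, cand = ngrams(reference_tokens), ngrams(candidate_tokens)
    if not ref:
        return 0.0
    overlap = sum(min(c, cand[g]) for g, c in ref.items())   # clipped co-occurrence counts
    return overlap / sum(ref.values())

print(rouge_n_recall("the hotel was great".split(), "the hotel was very great".split(), n=2))
```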
BLEU Purely measuring recall will inappropriately reward long outputs. BLEU is designed to address such an issue by emphasizing precision. n-gram precision scores for our situation are given by:
precision_n = Σ_{gram_n ∈ output} count_match(gram_n) / Σ_{gram_n ∈ output} count(gram_n)    (17)

BLEU then combines the average logarithm of precision scores with exceeded length penalization. For details, see Papineni et al. (2002).
Coherence Evaluation Neither BLEU nor ROUGE attempts to evaluate true coherence. There is no generally accepted and readily available coherence evaluation metric. 3 Because of the difficulty of developing a universal coherence evaluation metric, we proposed here only a tailored metric specific to our case. Based on the assumption that human-generated texts (i.e., input documents in our tasks) are coherent (Barzilay and Lapata, 2008), we compare generated outputs with input documents in terms of how much original text order is preserved.
We develop a grid evaluation metric similar to the entity transition algorithms in (Barzilay and Lee, 2004; Lapata and Barzilay, 2005). The key idea of Barzilay and Lapata's models is to first identify grammatical roles (i.e., object and subject) that entities play and then model the transition probability over entities and roles across sentences. We represent each sentence as a feature vector consisting of verbs and nouns in the sentence. Next we align sentences from output documents to input sentences based on sentence-to-sentence F1 scores (precision and recall are computed similarly to ROUGE and BLEU but at sentence level) using feature vectors. Note that multiple output sentences can be matched to one input sentence. Assume that sentence s_i^output is aligned with sentence s_{i'}^input, where i and i' denote the position index for an output sentence and its aligned input. The penalization score L is then given by:

L = 2 / (N_output · (N_output − 1)) × Σ_{i∈[1,N_output−1]} Σ_{j∈[i+1,N_output]} |(j − i) − (j' − i')|    (18)

Equation 18 can be interpreted as follows: (j − i) denotes the distance in terms of position index between two output sentences indexed by j and i, and (j' − i') denotes the distance between their mirrors in the inputs. As we wish to penalize the degree of permutation in terms of text order, we penalize the absolute difference between the two computed distances. This metric is also relevant to the overall performance of precision and recall: an irrelevant output will be aligned to a random input, thus being heavily penalized. The deficiency of the proposed metric is that it concerns itself only with a semantic perspective on coherence, barely considering syntactical issues.
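A minimal sketch of the coherence penalization L of Equation 18 is given below, assuming the sentence alignment has already been computed and is passed in as a list mapping each output sentence to its aligned input index; the function name is an assumption for illustration.

```python
from itertools import combinations

def coherence_penalty(alignment):
    """Coherence penalization L of Eq. (18): alignment[j] is the index of the input sentence
    that output sentence j was aligned to. Minimal sketch of the metric."""
    n = len(alignment)
    if n < 2:
        return 0.0
    total = sum(abs((j - i) - (alignment[j] - alignment[i]))
                for i, j in combinations(range(n), 2))
    return 2.0 * total / (n * (n - 1))

print(coherence_penalty([0, 1, 2, 3]))   # perfectly preserved order -> 0.0
print(coherence_penalty([0, 2, 1, 3]))   # one local swap -> 1.0
```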
Results
A summary of our experimental results is given in Table 3. We observe better performances for the hotel-review dataset than the open-domain Wikipedia dataset, for the intuitive reason that documents and sentences are written in a more fixed format and are easier to predict for hotel reviews. The hierarchical model that considers sentence-level structure outperforms standard sequence-to-sequence models. Attention models at the sentence level introduce a performance boost over vanilla hierarchical models.
With respect to the coherence evaluation, the original sentence order is mostly preserved: the hierarchical model with attention achieves L = 1.57 on the hotel-review dataset, equivalent to saying that the relative positions of two input sentences are permuted by an average degree of 1.57. Even for the Wikipedia dataset, where more poor-quality sentences are observed, the original text order can still be adequately maintained with L = 2.04.
Discussion and Future Work
In this paper, we extended recent sequence-to-sequence LSTM models to the task of multi-sentence generation. We trained an autoencoder to see how well LSTM models can reconstruct input documents of many sentences. We find that the proposed hierarchical LSTM models can partially preserve the semantic and syntactic integrity of multi-text units and generate meaningful and grammatical sentences in coherent order. Our model performs better than standard sequence-to-sequence models which do not consider the intrinsic hierarchical discourse structure of texts.
While our work on auto-encoding for larger texts is only a preliminary effort toward allowing neural models to deal with discourse, it nonetheless suggests that neural models are capable of encoding complex clues about how coherent texts are connected .
The performance on this autoencoder task could certainly also benefit from more sophisticated neural models. For example one extension might align the sentence currently being generated with the original input sentence (similar to sequence-tosequence translation in (Bahdanau et al., 2014)), and later transform the original task to sentenceto-sentence generation. However our long-term goal here is not on perfecting this basic multi-text generation scenario of reconstructing input documents, but rather on extending it to more important applications.
That is, the autoencoder described in this work, where input sequence X is identical to output Y , is only the most basic instance of the family of document (paragraph)-to-document (paragraph) generation tasks. We hope the ideas proposed in this paper can play some role in enabling such more sophisticated generation tasks like summarization, where the inputs are original documents and outputs are summaries or question answering, where inputs are questions and outputs are the actual wording of answers. Sophisticated generation tasks like summarization or dialogue systems could extend this paradigm, and could themselves benefit from task-specific adaptations. In summarization, sentences to generate at each timestep might be pre-pointed to or pre-aligned to specific aspects, topics, or pieces of texts to be summarized. Dialogue systems could incorporate information about the user or the time course of the
Figure 1: Standard Sequence to Sequence Model.

Figure 2: Hierarchical Sequence to Sequence Model.

Figure 3: Hierarchical Sequence to Sequence Model with Attention.
Table 2: A few examples produced by the hierarchical LSTM alongside the inputs.
Table 3: Results for three models on two datasets. As with the coherence score L, smaller values signify better performance.

Model | Dataset | BLEU | ROUGE-1 | ROUGE-2 | Coherence (L)
Standard | Hotel Review | 0.241 | 0.571 | 0.302 | 1.92
Hierarchical | Hotel Review | 0.267 | 0.590 | 0.330 | 1.71
Hierarchical+Attention | Hotel Review | 0.285 | 0.624 | 0.355 | 1.57
Standard | Wikipedia | 0.178 | 0.502 | 0.228 | 2.75
Hierarchical | Wikipedia | 0.202 | 0.529 | 0.250 | 2.30
Hierarchical+Attention | Wikipedia | 0.220 | 0.544 | 0.291 | 2.04
Wolf and Gibson (2005) and Lin et al. (2011) proposed metrics based on discourse relations, but these are hard to apply widely since identifying discourse relations is a difficult problem. Indeed, sophisticated coherence evaluation metrics are seldom adopted in real-world applications, and summarization researchers tend to use simple approximations like the number of overlapping tokens or topic distribution similarity (e.g., (Yan et al., 2011b; Yan et al., 2011a; Celikyilmaz and Hakkani-Tür, 2011)).
In any case, we look forward to more sophisticated applications of neural models to the important task of natural language generation.
Acknowledgement

The authors want to thank Gabor Angeli, Sam Bowman, Percy Liang and other members of the Stanford NLP group for insightful comments and suggestions. We also thank the three anonymous ACL reviewers for helpful comments. This work is supported by the Enlight Foundation Graduate Fellowship and a gift from Bloomberg L.P., which we gratefully acknowledge.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1-34.
Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. arXiv preprint cs/0405039.
Asli Celikyilmaz and Dilek Hakkani-Tür. 2011. Discovery of topically coherent sentences for extractive summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 491-499. Association for Computational Linguistics.
Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 41-44. Association for Computational Linguistics.
Vanessa Wei Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 60-68. Association for Computational Linguistics.
Hugo Hernault, Helmut Prendinger, Mitsuru Ishizuka, et al. 2010. Hilda: a discourse parser using support vector machine classification. Dialogue & Discourse, 1(3).
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 13-24.
Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In IJCAI, volume 5, pages 1085-1090.
Alex Lascarides and Nicholas Asher. 1991. Discourse relations and defeasible knowledge. In Proceedings of the 29th Annual Meeting on Association for Computational Linguistics, pages 55-62. Association for Computational Linguistics.
Huong LeThanh, Geetha Abeysinghe, and Christian Huyck. 2004. Generating discourse structures for written texts. In Proceedings of the 20th International Conference on Computational Linguistics, page 329. Association for Computational Linguistics.
Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation.
Jiwei Li, Rumeng Li, and Eduard Hovy. 2014. Recursive deep models for discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2061-2069.
Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 71-78. Association for Computational Linguistics.
Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 997-1006. Association for Computational Linguistics.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81.
Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. ACL.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.
Daniel Marcu. 2000. The rhetorical parsing of unrestricted texts: A surface-based approach. Computational Linguistics, 26(3):395-448.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2014. Grammar as a foreign language. arXiv preprint arXiv:1412.7449.
Florian Wolf and Edward Gibson. 2005. Representing discourse coherence: A corpus-based study. Computational Linguistics, 31(2):249-287.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044.
Rui Yan, Liang Kong, Congrui Huang, Xiaojun Wan, Xiaoming Li, and Yan Zhang. 2011a. Timeline generation through evolutionary trans-temporal summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 433-443. Association for Computational Linguistics.
Rui Yan, Xiaojun Wan, Jahna Otterbacher, Liang Kong, Xiaoming Li, and Yan Zhang. 2011b. Evolutionary timeline summarization: a balanced optimization framework via iterative substitution. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 745-754. ACM.
| [] |
[
"MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding",
"MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding"
] | [
"Jia-Chen Gu \nNational Engineering Laboratory for Speech and Language Information Processing\nUniversity of Science and Technology of China\nHefeiChina\n",
"Chongyang Tao \nMicrosoft\nBeijingChina\n",
"Zhen-Hua Ling zhling@ustc.edu.cn \nNational Engineering Laboratory for Speech and Language Information Processing\nUniversity of Science and Technology of China\nHefeiChina\n",
"Can Xu \nMicrosoft\nBeijingChina\n",
"Xiubo Geng \nMicrosoft\nBeijingChina\n",
"Daxin Jiang djiang@microsoft.com \nMicrosoft\nBeijingChina\n"
] | [
"National Engineering Laboratory for Speech and Language Information Processing\nUniversity of Science and Technology of China\nHefeiChina",
"Microsoft\nBeijingChina",
"National Engineering Laboratory for Speech and Language Information Processing\nUniversity of Science and Technology of China\nHefeiChina",
"Microsoft\nBeijingChina",
"Microsoft\nBeijingChina",
"Microsoft\nBeijingChina"
] | [
"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing"
] | Recently, various neural models for multiparty conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction. However, these existing methods on MPC usually represent interlocutors and utterances individually and ignore the inherent complicated structure in MPC which may provide crucial interlocutor and utterance semantics and would enhance the conversation understanding process. To this end, we present MPC-BERT, a pre-trained model for MPC understanding that considers learning who says what to whom in a unified model with several elaborated self-supervised tasks. Particularly, these tasks can be generally categorized into (1) interlocutor structure modeling including reply-to utterance recognition, identical speaker searching and pointer consistency distinction, and (2) utterance semantics modeling including masked shared utterance restoration and shared node detection. We evaluate MPC-BERT on three downstream tasks including addressee recognition, speaker identification and response selection. Experimental results show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on all three downstream tasks at two benchmarks. | 10.18653/v1/2021.acl-long.285 | [
"https://www.aclanthology.org/2021.acl-long.285.pdf"
] | 235,313,361 | 2106.01541 | 02ffd226f2c614bb4fbfa9ab0063f4a482de1ea6 |
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding
August 1-6, 2021
Jia-Chen Gu
National Engineering Laboratory for Speech and Language Information Processing
University of Science and Technology of China
HefeiChina
Chongyang Tao
Microsoft
BeijingChina
Zhen-Hua Ling zhling@ustc.edu.cn
National Engineering Laboratory for Speech and Language Information Processing
University of Science and Technology of China
HefeiChina
Can Xu
Microsoft
BeijingChina
Xiubo Geng
Microsoft
BeijingChina
Daxin Jiang djiang@microsoft.com
Microsoft
BeijingChina
MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingAugust 1-6, 20213682
Recently, various neural models for multiparty conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction. However, these existing methods on MPC usually represent interlocutors and utterances individually and ignore the inherent complicated structure in MPC which may provide crucial interlocutor and utterance semantics and would enhance the conversation understanding process. To this end, we present MPC-BERT, a pre-trained model for MPC understanding that considers learning who says what to whom in a unified model with several elaborated self-supervised tasks. Particularly, these tasks can be generally categorized into (1) interlocutor structure modeling including reply-to utterance recognition, identical speaker searching and pointer consistency distinction, and (2) utterance semantics modeling including masked shared utterance restoration and shared node detection. We evaluate MPC-BERT on three downstream tasks including addressee recognition, speaker identification and response selection. Experimental results show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on all three downstream tasks at two benchmarks.
Introduction
Building a conversational agent with intelligence has drawn significant attention from both academia and industry. Most existing methods have studied understanding conversations between two participants, aiming to return an appropriate response either in a generation-based (Shang et al., 2015; Serban et al., 2016, 2017; Zhang et al., 2018b, 2020) or retrieval-based manner (Lowe et al., 2015; Wu et al., 2017; Zhou et al., 2018; Tao et al., 2019a,b; Gu et al., 2019a,b, 2020). Recently, researchers have paid more attention to a more practical and challenging scenario involving more than two participants, which is well known as multi-party conversation (MPC) (Ouchi and Tsuboi, 2016; Zhang et al., 2018a; Le et al., 2019). Table 1 shows an MPC example in the Ubuntu Internet Relay Chat (IRC) channel, which is composed of a sequence of (speaker, utterance, addressee) triples. In addition to returning an appropriate response, predicting who will be the next speaker (Meng et al., 2018) and who is the addressee of an utterance (Ouchi and Tsuboi, 2016; Zhang et al., 2018a; Le et al., 2019) are unique and important issues in MPC. An instance of MPC always contains complicated interactions between interlocutors, between utterances, and between an interlocutor and an utterance. Therefore, it is challenging to model the conversation flow and fully understand the dialogue content. Existing studies on MPC learn the representations of interlocutors and utterances with neural networks, and their representation spaces are either separate (Ouchi and Tsuboi, 2016) or interactive (Zhang et al., 2018a). However, the semantics contained in the interlocutor and utterance representations may not be effectively captured as they are from two different representation spaces. Recently, to take advantage of the breakthrough in pre-trained language models (PLMs) for natural language understanding, some studies proposed to integrate speaker (Gu et al., 2020) or topic (Wang et al., 2020) information into PLMs. Despite the performance improvement on response selection, these models still overlook the inherent relationships between utterances and interlocutors, such as "address-to". Furthermore, most existing studies design models for each individual task in MPC (e.g., addressee recognition, speaker identification and response prediction) separately. Intuitively, these tasks are complementary to each other. Making use of these tasks simultaneously may produce better contextualized representations of interlocutors and utterances and would enhance conversation understanding, but this is neglected in previous studies.
On account of the above issues, we propose MPC-BERT, which jointly learns who says what to whom in MPC by designing self-supervised tasks for PLMs, so as to improve the ability of PLMs on MPC understanding. Specifically, the five designed tasks include reply-to utterance recognition, identical speaker searching, pointer consistency distinction, masked shared utterance restoration and shared node detection. The first three tasks are designed to model the interlocutor structure in MPC in a semantics-to-structure manner. In the output of MPC-BERT, an interlocutor is described through the encoded representations of the utterances it says. Thus, the representations of utterance semantics are utilized to construct the conversation structure in these three tasks. On the other hand, the last two tasks are designed to model the utterance semantics in a structure-to-semantics manner. Intuitively, the conversation structure influences the information flow in MPC. Thus, the structure information can also be used to strengthen the representations of utterance semantics in return. In general, these five self-supervised tasks are employed to jointly train MPC-BERT in a multi-task learning framework, which helps the model to learn the complementary information among interlocutors and utterances, and that between structure and semantics. By this means, MPC-BERT can produce better interlocutor and utterance representations which can be effectively generalized to multiple downstream tasks of MPC.
To measure the effectiveness of these self-supervised tasks and to test the generalization ability of MPC-BERT, we evaluate it on three downstream tasks including addressee recognition, speaker identification and response selection, which are three core research issues of MPC. Two benchmarks based on the Ubuntu IRC channel are employed for evaluation. One was released by Hu et al. (2019). The other was released by Ouchi and Tsuboi (2016) and has three experimental settings according to session lengths. Experimental results show that MPC-BERT outperforms the current state-of-the-art models by margins of 3.51%, 2.86%, 3.28% and 5.36% on the test sets of these two benchmarks respectively in terms of the session accuracy of addressee recognition, by margins of 7.66%, 2.60%, 3.38% and 4.24% respectively in terms of the utterance precision of speaker identification, and by margins of 3.82%, 2.71%, 2.55% and 3.22% respectively in terms of the response recall of response selection.
In summary, our contributions in this paper are three-fold: (1) MPC-BERT, a PLM for MPC understanding, is proposed by designing five self-supervised tasks based on the interactions among utterances and interlocutors. (2) Three downstream tasks are employed to comprehensively evaluate the effectiveness of our designed self-supervised tasks and the generalization ability of MPC-BERT. (3) Our proposed MPC-BERT achieves new state-of-the-art performance on all three downstream tasks at two benchmarks.
Related Work
Existing methods on building dialogue systems can be generally categorized into studying two-party conversations and multi-party conversations (MPC). In this paper, we study MPC. In addition to predicting utterances, identifying the speaker and recognizing the addressee of an utterance are also important tasks for MPC. Ouchi and Tsuboi (2016) first proposed the task of addressee and response selection and created an MPC corpus for studying this task. Zhang et al. (2018a) proposed SI-RNN, which updated speaker embeddings role-sensitively for addressee and response selection. Meng et al. (2018) proposed a task of speaker classification as a surrogate task for speaker modeling. Le et al. (2019) proposed a who-to-whom (W2W) model to recognize the addressees of all utterances. Hu et al. (2019) proposed a graph-structured network (GSN) to model the graphical information flow for response generation. Wang et al. (2020) proposed to track the dynamic topic for response selection.
Generally speaking, previous studies on MPC cannot unify the representations of interlocutors and utterances effectively. Also, they are limited to each individual task, ignoring the complementary information among different tasks. To the best of our knowledge, this paper makes the first attempt to design various self-supervised tasks for building PLMs aiming at MPC understanding, and to evaluate the performance of PLMs on three downstream tasks as comprehensively as possible.
MPC-BERT and Self-Supervised Tasks
An MPC instance is composed of a sequence of (speaker, utterance, addressee) triples, denoted as \{(s_n, u_n, a_n)\}_{n=1}^{N}, where N is the number of turns in the conversation. Our goal is to build a pre-trained language model for universal MPC understanding. Given a conversation, this model is expected to produce embedding vectors for all utterances which contain not only the semantic information of each utterance, but also the speaker and addressee structure of the whole conversation. Thus, it can be effectively adapted to various downstream tasks by fine-tuning model parameters.
Model Overview
In this paper, BERT (Devlin et al., 2019) is chosen as the backbone of our PLM for MPC. Thus, we name it MPC-BERT. It is worth noting that our proposed self-supervised tasks for training MPC-BERT can also be applied to other types of PLMs.
We first give an overview of the input representations and the overall architectures of MPC-BERT. When constructing the input representations, in order to consider the speaker information of each utterance, speaker embeddings (Gu et al., 2020) are introduced as shown in Figure 1. Considering that the set of interlocutors are inconsistent in different conversations, a position-based interlocutor embedding table is initialized randomly at first and updated during pre-training, which means each interlocutor in a conversation is assigned with an embedding vector according to the order it appears in the conversation. Then, the speaker embeddings for each utterance can be derived by looking up this embedding table. The speaker embeddings are combined with standard token, position and segmentation embeddings and are then encoded by BERT. The output embeddings of BERT corresponding to different input tokens are utilized by different self-supervised tasks for further calculation.
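A minimal sketch of how such an input layer could be assembled is given below (PyTorch-style; the vocabulary size, maximum number of interlocutors, and the exact way speaker ids are mapped to table positions are illustrative assumptions rather than the configuration used by the authors).

```python
import torch
import torch.nn as nn

class MPCInputEmbeddings(nn.Module):
    """Token + position + segment + speaker embeddings, summed as in Figure 1."""

    def __init__(self, vocab_size=30522, hidden=768, max_pos=512,
                 max_speakers=10, n_segments=2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_pos, hidden)
        self.seg = nn.Embedding(n_segments, hidden)
        # Position-based interlocutor table: index 0 is reserved here for the
        # special [Mask] interlocutor used by the ISS task.
        self.spk = nn.Embedding(max_speakers + 1, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, token_ids, segment_ids, speaker_ids):
        # speaker_ids[b, t] is the order-of-appearance index of the speaker of
        # the utterance that token t belongs to (0 = masked speaker).
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        e = (self.tok(token_ids)
             + self.pos(positions)[None, :, :]
             + self.seg(segment_ids)
             + self.spk(speaker_ids))
        return self.norm(e)
```

The position-based speaker table means the same slot can denote different people in different conversations; what matters is only who said which utterance within the current conversation.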
Tasks of Interlocutor Structure Modeling
The first three tasks follow the semantics-to-structure manner. In MPC-BERT, each interlocutor is described through the encoded representations of the utterances it says. Thus, the representations of utterance semantics are utilized to construct the conversation structure. Figure 1 shows the input representations and the model architectures of these three tasks. A [CLS] token is inserted at the start of each utterance, denoting its utterance-level representation. Then, all utterances in a conversation are concatenated and a [SEP] token is inserted at the end of the whole sequence. It is notable that these three tasks share the same form of input data. Thus, the input only needs to be encoded once by BERT while the output can be fed into three tasks, which is computation-efficient. As shown in Figure 1, a task-dependent non-linear transformation layer is placed on top of BERT in order to adapt the output of BERT to different tasks. We will describe the details of these tasks as follows.
Reply-to Utterance Recognition
To enable the model to recognize the addressee of each utterance, a self-supervised task named reply-to utterance recognition (RUR) is proposed to learn which preceding utterance the current utterance replies to. After being encoded by BERT, we extract the contextualized representation of each [CLS] token representing an individual utterance. Next, a non-linear transformation followed by a layer normalization is performed to derive the utterance representations for this specific task, \{u_i^{rur}\}_{i=1}^{N}, where u_i^{rur} ∈ R^d and d = 768. Then, for a specific utterance U_i, its matching scores with all its preceding utterances are calculated as
m_{ij} = \mathrm{softmax}(u_i^{rur} \cdot A_{rur} \cdot u_j^{rur}),    (1)
where A_{rur} ∈ R^{d×d} is a linear transformation, m_{ij} denotes the matching degree of U_j being the reply-to utterance of U_i, and 1 ≤ j < i. We construct a set S by sampling a certain number of utterances in a conversation and this recognition operation is performed for each utterance in S. Meanwhile, a dynamic sampling strategy is adopted so that models can see more samples. Finally, the pre-training objective of this self-supervised task is to minimize the cross-entropy loss as
\mathcal{L}_{rur} = -\sum_{i \in S} \sum_{j=1}^{i-1} y_{ij} \log(m_{ij}),    (2)
where y_{ij} = 1 if U_j is the reply-to utterance of U_i and y_{ij} = 0 otherwise.
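The following sketch illustrates Eqs. (1)-(2) for a single conversation. The task-specific utterance representations, the sampled set S and the reply-to labels are passed in as arguments; the sketch only mirrors the formulas above and is not the authors' released TensorFlow implementation.

```python
import torch
import torch.nn.functional as F

def rur_loss(utterance_reps, sampled_ids, reply_to, A_rur):
    """Reply-to utterance recognition loss for one conversation.

    utterance_reps: [N, d] task-specific utterance representations u_i^rur
    sampled_ids:    indices i in the sampled set S (each i >= 1)
    reply_to:       reply_to[i] = index j (< i) of the utterance U_i replies to
    A_rur:          [d, d] bilinear matching matrix
    """
    losses = []
    for i in sampled_ids:
        # Matching scores of U_i against all preceding utterances U_1..U_{i-1}
        scores = utterance_reps[i] @ A_rur @ utterance_reps[:i].T   # shape [i]
        log_probs = F.log_softmax(scores, dim=-1)                   # softmax over j < i
        losses.append(-log_probs[reply_to[i]])                      # cross-entropy term of Eq. (2)
    return torch.stack(losses).mean()
```

Note that the softmax for U_i runs only over its preceding utterances, so the model can never "reply to" the future.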
Identical Speaker Searching
Having knowledge of who is the speaker of an utterance is also important for MPC. The task of identical speaker searching (ISS) is designed by masking the speaker embedding of a specific utterance in the input representation, and aims to predict its speaker given the conversation. Since the set of interlocutors vary across conversations, the task of predicting the speaker of an utterance is reformulated as searching for the utterances sharing the identical speaker.
First, for a specific utterance, its speaker embedding is masked with a special [Mask] interlocutor embedding to avoid information leakage. Given the utterance representations for this specific task, \{u_i^{iss}\}_{i=1}^{N}, where u_i^{iss} ∈ R^d, the matching scores of U_i with all its preceding utterances are calculated similarly to Eq. (1). Here, m_{ij} denotes the matching degree of U_j sharing the same speaker with U_i. For each instance in the dynamic sampling set S, there must be an utterance in previous turns sharing the same speaker; otherwise, it is removed from the set. Finally, the pre-training objective of this task is to minimize the cross-entropy loss similarly to Eq. (2). Here, y_{ij} = 1 if U_j shares the same speaker with U_i and y_{ij} = 0 otherwise.
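How the speaker embedding of the queried utterance might be hidden is sketched below; utterance_spans and the reserved mask index are assumptions tied to the illustrative input layer above. The subsequent scoring and loss then follow the same bilinear form as in the RUR sketch, with the target being a preceding utterance by the same speaker.

```python
import torch

def mask_speaker_ids(speaker_ids, utterance_spans, i, mask_index=0):
    """Replace the speaker input of utterance U_i with the special [Mask]
    interlocutor (index 0 in the earlier speaker table), so the model cannot
    trivially read off who spoke U_i.

    speaker_ids:     [T] per-token speaker indices of the flattened conversation
    utterance_spans: list of (start, end) token spans, one per utterance
    i:               index of the utterance whose speaker is hidden
    """
    start, end = utterance_spans[i]
    masked = speaker_ids.clone()
    masked[start:end] = mask_index
    return masked
```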
Pointer Consistency Distinction
We design a task named pointer consistency distinction (PCD) to jointly model speakers and addressees in MPC. In this task, a pair of utterances representing the "reply-to" relationship is defined as a speaker-to-addressee pointer. Here, we assume that the representations of two pointers directing from the same speaker to the same addressee should be consistent. As illustrated in Figure 2(a), speaker S_m speaks U_i and U_j, which reply to U_{i'} and U_{j'} from speaker S_n, respectively. Thus, the utterance tuples (U_i, U_{i'}) and (U_j, U_{j'}) both represent the pointer of S_m-to-S_n, and their pointer representations should be consistent.
Figure 2: Illustrations of the self-supervised tasks of (a) pointer consistency distinction and (b) shared node detection. Rectangles denote utterances, circles denote interlocutors, a solid line denotes an utterance replying to an utterance, and a dashed line denotes an utterance from an interlocutor.

Given the utterance representations for this specific task, \{u_i^{pcd}\}_{i=1}^{N}, where u_i^{pcd} ∈ R^d, we first capture the pointer information contained in each utterance tuple. The element-wise difference and multiplication between an utterance tuple (U_i, U_{i'}) are computed and concatenated as
p_{ii'} = [u_i^{pcd} - u_{i'}^{pcd} ; u_i^{pcd} \odot u_{i'}^{pcd}],    (3)
where p_{ii'} ∈ R^{2d}. Then, we compress p_{ii'} and obtain the pointer representation \bar{p}_{ii'} as
\bar{p}_{ii'} = \mathrm{ReLU}(p_{ii'} \cdot W_{pcd} + b_{pcd}),    (4)
where W_{pcd} ∈ R^{2d×d} and b_{pcd} ∈ R^d are parameters. Identically, a consistent pointer representation \bar{p}_{jj'} and an inconsistent one \bar{p}_{kk'} sampled from this conversation are obtained. The similarities between every two pointers are calculated as
m_{ij} = \mathrm{sigmoid}(\bar{p}_{ii'} \cdot A_{pcd} \cdot \bar{p}_{jj'}),    (5)
where m_{ij} denotes the matching degree of pointer \bar{p}_{ii'} being consistent with pointer \bar{p}_{jj'}; m_{ik} can be derived accordingly. Finally, the pre-training objective of this task is to minimize the hinge loss, which enforces m_{ij} to be larger than m_{ik} by at least a margin Δ, as
\mathcal{L}_{pcd} = \max\{0, Δ - m_{ij} + m_{ik}\}.    (6)
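A compact sketch of Eqs. (3)-(6) follows; the three utterance tuples (consistent positive, consistent partner, inconsistent negative) are assumed to have been sampled from the conversation's reply-to structure beforehand, and the parameter shapes mirror the definitions above.

```python
import torch
import torch.nn.functional as F

def pointer_rep(u_i, u_i_prime, W_pcd, b_pcd):
    """Pointer representation of an utterance tuple (U_i, U_i'), Eqs. (3)-(4)."""
    p = torch.cat([u_i - u_i_prime, u_i * u_i_prime], dim=-1)   # [2d]
    return F.relu(p @ W_pcd + b_pcd)                            # [d]

def pcd_loss(pair_pos, pair_con, pair_neg, W_pcd, b_pcd, A_pcd, margin=0.4):
    """Hinge loss of Eq. (6) for one (consistent, inconsistent) pointer triple.

    Each pair_* argument is a tuple (u_i, u_i_prime) of [d] utterance vectors.
    """
    p_ii = pointer_rep(*pair_pos, W_pcd, b_pcd)
    p_jj = pointer_rep(*pair_con, W_pcd, b_pcd)   # same speaker-to-addressee pointer
    p_kk = pointer_rep(*pair_neg, W_pcd, b_pcd)   # a different pointer from the conversation
    m_ij = torch.sigmoid(p_ii @ A_pcd @ p_jj)     # Eq. (5)
    m_ik = torch.sigmoid(p_ii @ A_pcd @ p_kk)
    return F.relu(margin - m_ij + m_ik)           # max{0, Δ - m_ij + m_ik}
```

The default margin of 0.4 mirrors the value reported in the implementation details later in the paper.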
Tasks of Utterance Semantics Modeling
Intuitively, the conversation structure might influence the information flow, so that it can be used to strengthen the representations of utterance semantics. Thus, two self-supervised tasks following the structure-to-semantics manner are designed.
Masked Shared Utterance Restoration
There are usually several utterances replying to a shared utterance in MPC. Intuitively, a shared utterance is semantically relevant to more utterances in the context than non-shared ones. Based on this characteristic, we design a task named masked shared utterance restoration (MSUR). We first randomly sample an utterance from all shared utterances in a conversation, and all tokens in this sampled utterance are masked with a [MASK] token. Then the model is enforced to restore the masked utterance given the rest of the conversation. Formally, assume U_i is the masked shared utterance and l_i is the number of tokens in U_i. Given the token representations for this task, \{u_{i,t}^{msur}\}_{t=1}^{l_i}, where u_{i,t}^{msur} ∈ R^d, the probability distribution of each masked token can be calculated as
p_{i,t}^{u} = \mathrm{softmax}(u_{i,t}^{msur} \cdot W_{msur} + b_{msur}),    (7)
where W_{msur} ∈ R^{d×V} is the token embedding table, V denotes the vocabulary size, and b_{msur} ∈ R^V is a bias vector. Finally, the pre-training objective of this self-supervised task is to minimize the negative log-likelihood loss as
\mathcal{L}_{msur} = -\frac{1}{l_i} \sum_{t=1}^{l_i} \log p_{i,t}^{u},    (8)
where p_{i,t}^{u} is the element of the distribution p_{i,t}^{u} corresponding to the original token.
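In code, Eqs. (7)-(8) reduce to a per-token cross-entropy averaged over the masked utterance, as in the hedged sketch below (the weight tying with the token embedding table is taken from Eq. (7); everything else is illustrative).

```python
import torch.nn.functional as F

def msur_loss(token_reps, W_msur, b_msur, target_ids):
    """Masked shared utterance restoration loss, Eqs. (7)-(8).

    token_reps: [l_i, d] representations of the masked utterance's positions
    W_msur:     [d, V] projection (the token embedding table, per Eq. (7))
    b_msur:     [V] bias vector
    target_ids: [l_i] original token ids of the masked shared utterance
    """
    logits = token_reps @ W_msur + b_msur            # [l_i, V]
    # Negative log-likelihood of the original tokens, averaged over the utterance
    return F.cross_entropy(logits, target_ids)
```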
Shared Node Detection
A full MPC instance can be divided into several sub-conversations and we assume that the representations of sub-conversations under the same parent node tend to be similar. As illustrated in Figure 2(b), two sub-conversations \{U_3, U_5, U_7, U_8\} and \{U_4, U_6, U_9\} share the same parent node U_2. Thus, they should be semantically relevant. Under this assumption, we design a self-supervised task named shared node detection (SND), which utilizes the conversation structure to strengthen the capability of models on measuring the semantic relevance of two sub-conversations. We first construct the pre-training samples for this task. Empirically, only the sub-conversations under the top shared node in a conversation are collected in order to filter out the sub-conversations with few utterances. Given a full MPC, the two sub-conversations with the most utterances form a positive pair. For each positive pair, we replace one of its elements with another sub-conversation randomly sampled from the training corpus to form a negative pair.
Formally, given two sub-conversations c_i and c_j, the utterances in each sub-conversation are first concatenated respectively to form two segments. Then, the two segments are concatenated with a [SEP] token, and a [CLS] token is inserted at the beginning of the whole sequence. This sequence is encoded by BERT to derive the contextualized representation of the [CLS] token. A non-linear transformation with sigmoid activation is further applied to this representation to calculate the matching score m_{ij}, i.e., the probability of c_i and c_j sharing the same parent node. Finally, the pre-training objective of this task is to minimize the cross-entropy loss as
\mathcal{L}_{snd} = -[y_{ij} \log(m_{ij}) + (1 - y_{ij}) \log(1 - m_{ij})],    (9)
where y_{ij} = 1 if c_i and c_j share the same parent node and y_{ij} = 0 otherwise.
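The SND head itself is just a binary classifier on the [CLS] vector of the paired-segment input; a minimal sketch of Eq. (9) follows (the scoring-head shapes are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def snd_loss(cls_rep, w_snd, b_snd, label):
    """Shared node detection loss, Eq. (9), on the [CLS] vector of the
    sequence "[CLS] sub-conversation c_i [SEP] sub-conversation c_j".

    cls_rep: [d] contextualized representation of the first [CLS] token
    w_snd:   [d] weight of the scoring head, b_snd: scalar bias
    label:   1.0 if c_i and c_j share the same parent node, else 0.0
    """
    m_ij = torch.sigmoid(cls_rep @ w_snd + b_snd)          # matching score
    label = torch.as_tensor(label, dtype=m_ij.dtype)
    return F.binary_cross_entropy(m_ij, label)
```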
Multi-task Learning
In addition, we also adopt the tasks of masked language model (MLM) and next sentence prediction (NSP) from the original BERT pre-training (Devlin et al., 2019), which have been proven effective for incorporating domain knowledge (Gu et al., 2020; Gururangan et al., 2020). Finally, MPC-BERT is trained by performing multi-task learning that minimizes the sum of all loss functions as
\mathcal{L} = \mathcal{L}_{rur} + \mathcal{L}_{iss} + \mathcal{L}_{pcd} + \mathcal{L}_{msur} + \mathcal{L}_{snd} + \mathcal{L}_{mlm} + \mathcal{L}_{nsp}.    (10)
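Operationally, Eq. (10) amounts to summing the seven per-task losses for each batch and back-propagating the total; a hedged sketch of one such update is given below (the list of losses is assumed to be produced by the task-specific sketches above plus standard MLM/NSP heads).

```python
def pretraining_step(losses, optimizer):
    """One multi-task update for Eq. (10).

    losses: list of the seven per-task scalar losses (RUR, ISS, PCD, MSUR,
            SND, MLM, NSP) computed for the current batch.
    """
    total = sum(losses)          # Eq. (10): unweighted sum of all objectives
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```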
Downstream Tasks
Addressee Recognition
Given a multi-party conversation where part of the addressees are unknown, Ouchi and Tsuboi (2016) and Zhang et al. (2018a) recognized the addressee of the last utterance. Le et al. (2019) recognized the addressees of all utterances in a conversation. In this paper, we follow the more challenging setting of Le et al. (2019). Formally, models are asked to predict \{\hat{a}_n\}_{n=1}^{N} given \{(s_n, u_n, a_n)\}_{n=1}^{N} \setminus \{a_n\}_{n=1}^{N}, where \hat{a}_n is selected from the interlocutor set in this conversation and \setminus denotes exclusion. When applying MPC-BERT, this task is reformulated as finding the preceding utterance whose speaker is the addressee. Its RUR matching scores with all preceding utterances are calculated following Eq. (1). Then, the utterance with the highest score is selected and the speaker of the selected utterance is considered as the recognized addressee. Finally, the fine-tuning objective of this task is to minimize the cross-entropy loss as
\mathcal{L}_{ar} = -\sum_{i=2}^{N} \sum_{j=1}^{i-1} y_{ij} \log(m_{ij}),    (11)
where m_{ij} is defined in Eq. (1), y_{ij} = 1 if the speaker of U_j is the addressee of U_i, and y_{ij} = 0 otherwise.
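At inference time, this reduces to an argmax over preceding utterances followed by a lookup of their speakers; a small sketch is shown below, reusing the RUR scoring form from earlier (again an illustration, not the released code).

```python
import torch

def recognize_addressees(utterance_reps, speakers, A_rur):
    """Addressee recognition as reply-to retrieval: for every utterance U_i
    (i >= 1), pick the preceding utterance with the highest RUR score and
    return its speaker as the predicted addressee of U_i.

    utterance_reps: [N, d] fine-tuned utterance representations
    speakers:       list of length N with the speaker id of each utterance
    """
    predictions = {}
    for i in range(1, utterance_reps.size(0)):
        scores = utterance_reps[i] @ A_rur @ utterance_reps[:i].T   # scores over j < i
        j = int(torch.argmax(scores))
        predictions[i] = speakers[j]          # speaker of the selected utterance
    return predictions
```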
Speaker Identification
This task aims to identify the speaker of the last utterance in a conversation. Formally, models are asked to predict \hat{s}_N given \{(s_n, u_n, a_n)\}_{n=1}^{N} \setminus s_N, where \hat{s}_N is selected from the interlocutor set in this conversation. When applying MPC-BERT, this task is reformulated as identifying the utterances sharing the same speaker. For the last utterance U_N, its speaker embedding is masked and its ISS matching scores m_{Nj} with all preceding utterances are calculated following Section 3.2.2. The fine-tuning objective of this task is to minimize the cross-entropy loss as
\mathcal{L}_{si} = -\sum_{j=1}^{N-1} y_{Nj} \log(m_{Nj}),    (12)
where y_{Nj} = 1 if U_j shares the same speaker with U_N and y_{Nj} = 0 otherwise.
Response Selection
This task asks models to select \hat{u}_N from a set of response candidates given the conversation context \{(s_n, u_n, a_n)\}_{n=1}^{N} \setminus u_N. The key is to measure the similarity between the two segments of context and response. We concatenate each response candidate with the context and extract the contextualized representation e_{[CLS]} of the first [CLS] token using MPC-BERT. Then, e_{[CLS]} is fed into a non-linear transformation with sigmoid activation to obtain the matching score between the context and the response. Finally, the fine-tuning objective of this task is to minimize the cross-entropy loss according to the true/false labels of responses in the training set as
\mathcal{L}_{rs} = -[y \log(m_{cr}) + (1 - y) \log(1 - m_{cr})],    (13)
where y = 1 if the response r is a proper one for the context c; otherwise y = 0.
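A hedged sketch of how candidates could be ranked with this head at test time is given below; the encoder call signature (token, segment and speaker inputs returning [1, T, d] hidden states) and the name score_head are assumptions tied to the illustrative input layer above, not the authors' interface.

```python
import torch

def rank_responses(model, token_ids_list, segment_ids_list, speaker_ids_list, score_head):
    """Score each (context, candidate) pair with the sigmoid head on the first
    [CLS] token and rank the candidates.

    model:      fine-tuned MPC-BERT encoder returning [1, T, d] hidden states
    score_head: non-linear transformation mapping R^d -> R (the matching score)
    Each *_list argument holds one pre-built input sequence per candidate.
    """
    scores = []
    with torch.no_grad():
        for tok, seg, spk in zip(token_ids_list, segment_ids_list, speaker_ids_list):
            hidden = model(tok.unsqueeze(0), seg.unsqueeze(0), spk.unsqueeze(0))
            cls_vec = hidden[:, 0]                                   # e_[CLS]
            scores.append(torch.sigmoid(score_head(cls_vec)).item()) # matching score m_cr
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores
```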
Experiments
Datasets
We evaluated our proposed methods on two Ubuntu IRC benchmarks. One was released by Hu et al. (2019), in which both speaker and addressee labels were provided for each utterance. The other benchmark was released by Ouchi and Tsuboi (2016). Here, we adopted the version shared in Le et al. (2019) for fair comparison. The conversation sessions were separated into three categories according to the session length (Len-5, Len-10 and Len-15) following the splitting strategy of previous studies (Ouchi and Tsuboi, 2016; Zhang et al., 2018a; Le et al., 2019). Table 2 presents the statistics of the two benchmarks evaluated in our experiments.
Baseline Models
Non-pre-training-based models Ouchi and Tsuboi (2016) proposed a dynamic model DRNN which updated speaker embeddings with the conversation flow. Zhang et al. (2018a) improved DRNN to SI-RNN which updated speaker embeddings role-sensitively. Le et al. (2019) proposed W2W which jointly modeled interlocutors and utterances in a uniform framework, and predicted all addressees.
Pre-training-based models BERT (Devlin et al., 2019) was pre-trained to learn general language representations with MLM and NSP tasks. SA-BERT (Gu et al., 2020) added speaker embeddings and further pre-trained BERT on a domain-specific corpus to incorporate domain knowledge. We re-implemented SA-BERT with the pre-training corpus used in this paper to ensure fair comparison.
Implementation Details
The version of BERT-base-uncased was adopted for all our experiments. For pre-training, GELU (Hendrycks and Gimpel, 2016) was employed as the activation for all non-linear transformations. The Adam method (Kingma and Ba, 2015) was employed for optimization. The learning rate was initialized as 0.00005 and the warmup proportion was set to 0.1. We pre-trained BERT for 10 epochs. The training set of the dataset used in Hu et al. (2019) was employed for pre-training. The maximum utterance number was set to 7. The maximum sequence length was set to 230. The maximum sampling numbers for each example were set to 4 for RUR, 2 for ISS and 2 for PCD. Δ in Eq. (6) was set to 0.4, achieving the best performance out of {0.2, 0.4, 0.6, 0.8} on the validation set. The pre-training was performed using a GeForce RTX 2080 Ti GPU and the batch size was set to 4. For fine-tuning, some configurations were different according to the characteristics of these datasets. For Hu et al. (2019), the maximum utterance number was set to 7 and the maximum sequence length was set to 230. For the three experimental settings in Ouchi and Tsuboi (2016), the maximum utterance numbers were set to 5, 10 and 15, and the maximum sequence lengths were set to 120, 220 and 320. All parameters in PLMs were updated. The learning rate was initialized as 0.00002 and the warmup proportion was set to 0.1. For Hu et al. (2019), the fine-tuning process was performed for 10 epochs for addressee recognition, 10 epochs for speaker identification, and 5 epochs for response selection. For Ouchi and Tsuboi (2016), the fine-tuning epochs were set to 5, 5 and 3 respectively. The fine-tuning was also performed using a GeForce RTX 2080 Ti GPU. The batch sizes were set to 16 for Hu et al. (2019), and 40, 20, and 12 for the three experimental settings in Ouchi and Tsuboi (2016) respectively. The validation set was used to select the best model for testing.
All code was implemented in the TensorFlow framework (Abadi et al., 2016) and is published to help replicate our results. 1
Metrics and Results
Addressee recognition We followed the metrics of previous work (Le et al., 2019) by employing precision@1 (P@1) to evaluate each utterance against the ground truth. Also, a session is marked as positive if the addressees of all its utterances are correctly recognized, which is calculated as accuracy (Acc.). Table 3 presents the results of addressee recognition. It shows that MPC-BERT outperforms the best performing model, i.e., SA-BERT, by margins of 3.51%, 2.86%, 3.28% and 5.36% on these test sets respectively in terms of Acc., verifying the effectiveness of the proposed five self-supervised tasks as a whole. To further illustrate the effectiveness of each task, ablation tests were performed as shown in the last five rows of Table 3. We can observe that all self-supervised tasks are useful, as removing any of them causes a performance drop. Among the five tasks, RUR plays the most important role, and the tasks focusing on modeling interlocutor structure contribute more than those for utterance semantics.
Speaker identification Similarly, P@1 was employed as the evaluation metric of speaker identification for the last utterance of a conversation, and the results are shown in Table 4. It shows that MPC-BERT outperforms SA-BERT by margins of 7.66%, 2.60%, 3.38% and 4.24% respectively in terms of P@1. Besides, from the ablation results we find that all tasks are useful for improving the performance of speaker identification, and that ISS and RUR contribute the most. In particular, removing PCD, MSUR or SND leads to only a slight performance drop. The reason might be that the information conveyed by these tasks is redundant.
Response selection The R_n@k metrics adopted by previous studies (Ouchi and Tsuboi, 2016; Zhang et al., 2018a) were used here. Each model was tasked with selecting the k best-matched responses from n available candidates, and we calculated the recall as R_n@k. Two settings were followed in which k was set to 1 and n was set to 2 or 10. Table 5 presents the results of response selection. It shows that MPC-BERT outperforms SA-BERT by margins of 3.82%, 2.71%, 2.55% and 3.22% respectively in terms of R_10@1. Ablation tests show that SND is the most useful task for response selection, and the two tasks focusing on utterance semantics contribute more than those focusing on the interlocutor structures.
Discussions

Figure 3 illustrates how the performance of BERT, SA-BERT and MPC-BERT changed with respect to different session lengths on the test sets of Ouchi and Tsuboi (2016). It can be seen that the performance of addressee recognition and speaker identification dropped as the session length increased. The reason might be that longer sessions always contain more interlocutors, which increases the difficulty of predicting interlocutors. Meanwhile, the performance of response selection was significantly improved as the session length increased. This can be attributed to the fact that longer sessions enrich the representations of contexts with more details, which benefits response selection. Furthermore, as the session length increased, the performance of MPC-BERT dropped more slightly than that of SA-BERT on addressee recognition and speaker identification, and the R_10@1 gap between MPC-BERT and SA-BERT on response selection enlarged from 2.71% to 3.22%. These results imply the superiority of MPC-BERT over SA-BERT in modeling long MPCs with complicated structures.
Conclusion
In this paper, we present MPC-BERT, a pre-trained language model with five self-supervised tasks for MPC understanding. These tasks jointly learn who says what to whom in MPCs. Experimental results on three downstream tasks show that MPC-BERT outperforms previous methods by large margins and achieves new state-of-the-art performance on two benchmarks.
Figure 1: Input representations and model architectures of the three self-supervised tasks for interlocutor structure modeling, including (a) reply-to utterance recognition, (b) identical speaker searching and (c) pointer consistency distinction.

Figure 3: Performance of models under different session lengths on the test sets of Ouchi and Tsuboi (2016) on the tasks of (a) addressee recognition, (b) speaker identification and (c) response selection.
* Work done during the internship at Microsoft. † Corresponding author.

Table 1: An MPC example in the Ubuntu IRC channel. Here, "I." is the abbreviation of "interlocutor".

Speaker | Utterance | Addressee
I.1 | How can I setup if I want add new server at xchat? | -
I.2 | From places, network servers, work group, his computer, and then I clicked on the shared folder. | I.1
I.3 | It did not allow you to see the files? | I.2
I.2 | It prompts for authentication and I don't know what to put. I tried guest with no password. | I.3
I.4 | Put proper authentication in, then? | I.2
I.3 | I think you had kde on suse? | I.2
Table 2: Statistics of the two benchmarks evaluated in this paper.
Table 3: Evaluation results of addressee recognition on the test sets. Results except ours are cited from Le et al. (2019). Numbers in bold denote that the improvement over the best performing baseline is statistically significant (t-test with p-value < 0.05).
Table 4: Evaluation results of speaker identification on the test sets in terms of P@1. Numbers in bold denote that the improvement over the best performing baseline is statistically significant (t-test with p-value < 0.05).

Model | Hu et al. (2019) | Ouchi and Tsuboi (2016): Len-5 | Len-10 | Len-15
BERT (Devlin et al., 2019) | 71.81 | 62.24 | 53.17 | 51.58
SA-BERT (Gu et al., 2020) | 75.88 | 64.96 | 57.62 | 54.28
MPC-BERT | 83.54 | 67.56 | 61.00 | 58.52
MPC-BERT w/o. RUR | 82.48 | 66.88 | 60.12 | 57.33
MPC-BERT w/o. ISS | 77.95 | 66.77 | 60.03 | 56.73
MPC-BERT w/o. PCD | 83.39 | 67.12 | 60.62 | 58.00
MPC-BERT w/o. MSUR | 83.51 | 67.21 | 60.76 | 58.03
MPC-BERT w/o. SND | 83.47 | 67.04 | 60.44 | 58.12
Table 5: Evaluation results of response selection on the test sets. Results except ours are cited from Ouchi and Tsuboi (2016) and Zhang et al. (2018a). Numbers in bold denote that the improvement over the best performing baseline is statistically significant (t-test with p-value < 0.05).

Model | Hu et al. (2019) R_2@1 / R_10@1 | Ouchi and Tsuboi (2016) Len-5 R_2@1 / R_10@1 | Len-10 R_2@1 / R_10@1 | Len-15 R_2@1 / R_10@1
DRNN (Ouchi and Tsuboi, 2016) | - / - | 76.07 / 33.62 | 78.16 / 36.14 | 78.64 / 36.93
SIRNN (Zhang et al., 2018a) | - / - | 78.14 / 36.45 | 80.34 / 39.20 | 80.91 / 40.83
BERT (Devlin et al., 2019) | 92.48 / 73.42 | 85.52 / 53.95 | 86.93 / 57.41 | 87.19 / 58.92
SA-BERT (Gu et al., 2020) | 92.98 / 75.16 | 86.53 / 55.24 | 87.98 / 59.27 | 88.34 / 60.42
MPC-BERT | 94.90 / 78.98 | 87.63 / 57.95 | 89.14 / 61.82 | 89.70 / 63.64
MPC-BERT w/o. RUR | 94.48 / 78.16 | 87.20 / 57.56 | 88.96 / 61.47 | 89.07 / 63.24
MPC-BERT w/o. ISS | 94.58 / 78.82 | 87.54 / 57.77 | 88.98 / 61.76 | 89.58 / 63.51
MPC-BERT w/o. PCD | 94.66 / 78.70 | 87.50 / 57.51 | 88.75 / 61.62 | 89.45 / 63.46
MPC-BERT w/o. MSUR | 94.36 / 78.22 | 87.11 / 57.58 | 88.59 / 61.05 | 89.25 / 63.20
MPC-BERT w/o. SND | 93.92 / 76.96 | 87.30 / 57.54 | 88.77 / 61.54 | 89.27 / 63.34
https://github.com/JasonForJoy/MPC-BERT
Acknowledgments

We thank anonymous reviewers for their valuable comments.
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2016, Savannah, GA, USA, November 2-4, 2016, pages 265-283.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.
Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 2041-2044.
Jia-Chen Gu, Zhen-Hua Ling, and Quan Liu. 2019a. Interactive matching network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 2321-2324.
Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, and Quan Liu. 2019b. Dually interactive matching network for personalized response selection in retrieval-based chatbots. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1845-1854. Association for Computational Linguistics.
Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8342-8360.
Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415.
Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, and Rui Yan. 2019. GSN: A graph-structured network for multi-party dialogues. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5010-5016.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, and Rui Yan. 2019. Who is speaking to whom? learning to identify utterance addressee in multi-party conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1909-1919.
Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the SIGDIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2-4 September 2015, Prague, Czech Republic, pages 285-294.
Zhao Meng, Lili Mou, and Zhi Jin. 2018. Towards neural speaker modeling in multi-party conversation: The task, dataset, and models. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. European Language Resources Association (ELRA).
Hiroki Ouchi and Yuta Tsuboi. 2016. Addressee and response selection for multi-party conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2133-2143.
Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3776-3784.
A hierarchical latent variable encoder-decoder model for generating dialogues. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, Yoshua Bengio, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. the Thirty-First AAAI Conference on Artificial IntelligenceSan Francisco, California, USAAAAI PressIulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3295- 3301. AAAI Press.
Neural responding machine for short-text conversation. Lifeng Shang, Zhengdong Lu, Hang Li, 10.3115/v1/p15-1152Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language ProcessingBeijing, ChinaLong Papers1Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26- 31, 2015, Beijing, China, Volume 1: Long Papers, pages 1577-1586.
Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan, 10.1145/3289600.3290985Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019. the Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019Melbourne, VIC, AustraliaACMChongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019a. Multi- representation fusion network for multi-turn re- sponse selection in retrieval-based chatbots. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019, Melbourne, VIC, Australia, February 11-15, 2019, pages 267-275. ACM.
One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan, 10.18653/v1/p19-1001Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyLong Papers1Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019b. One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1- 11.
Response selection for multi-party conversations with dynamic topic tracking. Weishi Wang, C H Steven, Shafiq R Hoi, Joty, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingOnline2020Weishi Wang, Steven C. H. Hoi, and Shafiq R. Joty. 2020. Response selection for multi-party conversations with dynamic topic tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6581- 6591.
Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, Zhoujun Li, 10.18653/v1/P17-1046Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaLong Papers1Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Volume 1: Long Papers, pages 496-505.
Addressee and response selection in multi-party conversations with speaker interaction rnns. Rui Zhang, Honglak Lee, Lazaros Polymenakos, Dragomir R Radev, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)New Orleans, Louisiana, USARui Zhang, Honglak Lee, Lazaros Polymenakos, and Dragomir R. Radev. 2018a. Addressee and response selection in multi-party conversations with speaker interaction rnns. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5690-5697.
Generating informative and diverse conversational responses via adversarial information maximization. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, Bill Dolan, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems. NeurIPS; Montréal, CanadaYizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Sys- tems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 1815-1825.
DIALOGPT : Large-scale generative pre-training for conversational response generation. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan, 10.18653/v1/2020.acl-demos.30Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020. the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020OnlineAssociation for Computational LinguisticsYizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversa- tional response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 270-278. Association for Computational Linguistics.
Multi-turn response selection for chatbots with deep attention matching network. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, Hua Wu, 10.18653/v1/P18-1103Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaLong Papers1Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1118-1127.
| [
"https://github.com/JasonForJoy/MPC-BERT"
] |
[
"Diversifying Dialogue Generation with Non-Conversational Text",
"Diversifying Dialogue Generation with Non-Conversational Text"
] | [
"Hui Su \nPattern Recognition Center\nWechat AITencent IncChina\n",
"Xiaoyu Shen xshen@mpi-inf.mpg.de \nSaarland Informatics Campus\nMPI Informatics & Spoken Language Systems (LSV)\n\n",
"Sanqiang Zhao \nUniversity of Pittsburgh\n\n",
"Xiao Zhou \nPattern Recognition Center\nWechat AITencent IncChina\n",
"Pengwei Hu \nThe Hong Kong Polytechnic University\nHong Kong\n",
"Randy Zhong \nPattern Recognition Center\nWechat AITencent IncChina\n",
"Cheng Niu \nPattern Recognition Center\nWechat AITencent IncChina\n",
"Jie Zhou \nPattern Recognition Center\nWechat AITencent IncChina\n"
] | [
"Pattern Recognition Center\nWechat AITencent IncChina",
"Saarland Informatics Campus\nMPI Informatics & Spoken Language Systems (LSV)\n",
"University of Pittsburgh\n",
"Pattern Recognition Center\nWechat AITencent IncChina",
"The Hong Kong Polytechnic University\nHong Kong",
"Pattern Recognition Center\nWechat AITencent IncChina",
"Pattern Recognition Center\nWechat AITencent IncChina",
"Pattern Recognition Center\nWechat AITencent IncChina"
] | [] | Neural network-based sequence-to-sequence (seq2seq) models strongly suffer from the lowdiversity problem when it comes to opendomain dialogue generation. As bland and generic utterances usually dominate the frequency distribution in our daily chitchat, avoiding them to generate more interesting responses requires complex data filtering, sampling techniques or modifying the training objective.In this paper, we propose a new perspective to diversify dialogue generation by leveraging non-conversational text. Compared with bilateral conversations, nonconversational text are easier to obtain, more diverse and cover a much broader range of topics. We collect a large-scale nonconversational corpus from multi sources including forum comments, idioms and book snippets.We further present a training paradigm to effectively incorporate these text via iterative back translation. The resulting model is tested on two conversational datasets and is shown to produce significantly more diverse responses without sacrificing the relevance with context. | 10.18653/v1/2020.acl-main.634 | [
"https://arxiv.org/pdf/2005.04346v2.pdf"
] | 218,581,884 | 2005.04346 | 70262f997138216f29372cff89d264fc258e77b2 |
Diversifying Dialogue Generation with Non-Conversational Text
Hui Su
Pattern Recognition Center
Wechat AITencent IncChina
Xiaoyu Shen xshen@mpi-inf.mpg.de
Saarland Informatics Campus
MPI Informatics & Spoken Language Systems (LSV)
Sanqiang Zhao
University of Pittsburgh
Xiao Zhou
Pattern Recognition Center
Wechat AITencent IncChina
Pengwei Hu
The Hong Kong Polytechnic University
Hong Kong
Randy Zhong
Pattern Recognition Center
Wechat AITencent IncChina
Cheng Niu
Pattern Recognition Center
Wechat AITencent IncChina
Jie Zhou
Pattern Recognition Center
Wechat AITencent IncChina
Diversifying Dialogue Generation with Non-Conversational Text
Neural network-based sequence-to-sequence (seq2seq) models strongly suffer from the low-diversity problem when it comes to open-domain dialogue generation. As bland and generic utterances usually dominate the frequency distribution in our daily chitchat, avoiding them in order to generate more interesting responses requires complex data filtering, sampling techniques or modifications to the training objective. In this paper, we propose a new perspective to diversify dialogue generation by leveraging non-conversational text. Compared with bilateral conversations, non-conversational text is easier to obtain, more diverse and covers a much broader range of topics. We collect a large-scale non-conversational corpus from multiple sources including forum comments, idioms and book snippets. We further present a training paradigm to effectively incorporate this text via iterative back translation. The resulting model is tested on two conversational datasets and is shown to produce significantly more diverse responses without sacrificing relevance with the context.
Introduction
Seq2seq models have achieved impressive success in a wide range of text generation tasks. In open-domain chitchat, however, people have found that the model tends to strongly favor short, generic responses like "I don't know" or "OK" (Vinyals and Le, 2015; Shen et al., 2017a). The reason lies in the extreme one-to-many mapping relation between every context and its potential responses (Zhao et al., 2017; Su et al., 2018). Generic utterances, which can in theory be paired with most contexts, usually dominate the frequency distribution in the dialogue training corpus and thereby push the model to blindly produce these safe, dull responses (Csáky et al., 2019). Current solutions can be roughly categorized into two classes: (1) Modify the seq2seq itself to bias toward diverse responses (Li et al., 2016a; Shen et al., 2019a). However, the model is still trained on the limited dialogue corpus, which restricts its power to cover broad topics in open-domain chitchat. (2) Augment the training corpus with extra information like structured world knowledge, personality or emotions (Li et al., 2016b; Dinan et al., 2019), which requires costly human annotation.

* Equal contribution.

Table 1:
Conversational Text
Context: 暗恋的人却不喜欢我 (The one I have a crush on doesn't like me.)
Response: 摸摸头 (Head pat.)
Non-Conversational Text
Forum Comments: 暗恋这碗酒,谁喝都会醉啊 (Crush is an alcoholic drink, whoever drinks it will get intoxicated.)
Idiom: 何必等待一个没有结果的等待 (Why wait for a result without hope)
Book Snippet: 真诚的爱情之路永不会是平坦的 (The course of true love never did run smooth, from A Midsummer Night's Dream)
In this work, we argue that training only on a conversational corpus can greatly constrain the usability of an open-domain chatbot system, since many topics are not easily available in dialogue format. With this in mind, we explore a cheap way to diversify dialogue generation by utilizing large amounts of non-conversational text. Compared with bilateral conversations, non-conversational text covers a much broader range of topics, and can be easily obtained without further human annotation from multiple sources like forum comments, idioms and book snippets. More importantly, non-conversational text is usually more interesting and contentful, as it is written to convey specific personal opinions or introduce a new topic, unlike daily conversations where people often passively reply to the last utterance. As can be seen in Table 1, the response from the daily conversation is a simple comfort of "Head pat". Non-conversational text, on the contrary, exhibits diverse styles ranging from casual wording to poetic statements, which we believe can be utilized to enrich the response generation.
To do so, we collect a large-scale corpus containing over 1M non-conversational utterances from multiple sources. To effectively integrate these utterances, we borrow the back translation idea from unsupervised neural machine translation (Sennrich et al., 2016; Lample et al., 2018b) and treat the collected utterances as unpaired responses. We first pre-train the forward and backward transduction models on the parallel conversational corpus. The forward and backward models are then iteratively tuned to find the optimal mapping relation between conversational contexts and non-conversational utterances (Cotterell and Kreutzer, 2018). By this means, the content of non-conversational utterances is gradually distilled into the dialogue generation model (Kim and Rush, 2016), enlarging the space of generated responses to cover not only the original dialogue corpus, but also the wide topics reflected in the non-conversational utterances. We test our model on two popular Chinese conversational datasets, Weibo (Shang et al., 2015a) and Douban (Wu et al., 2017). We compare our model against retrieval-based systems, style-transfer methods and several seq2seq variants which also target the diversity of dialogue generation. Automatic and human evaluation show that our model significantly improves the responses' diversity both semantically and syntactically without sacrificing relevance with the context, and is considered the most favorable by human evaluators.¹
Related Work
The tendency to produce generic responses has been a long-standing problem in seq2seq-based open-domain dialogue generation (Vinyals and Le, 2015;Li et al., 2016a). Previous approaches to alleviate this issue can be grouped into two classes.
The first class resorts to modifying the seq2seq architecture itself. For example, Shen et al. (2018a) and Zhang et al. (2018b) change the training objective to mutual information maximization and rely on continuous approximations or policy gradient to circumvent the non-differentiability issue for text. Li et al. (2016d) and Serban et al. (2017a) treat open-domain chitchat as a reinforcement learning problem and manually define rewards to encourage long-term conversations. There is also research that utilizes latent variable sampling (Serban et al., 2017b; Shen et al., 2018b) or adversarial learning (Su et al., 2018), replaces beam search decoding with a more diverse sampling strategy (Li et al., 2016c; Holtzman et al., 2019), or applies reranking to filter generic responses (Li et al., 2016a; Wang et al., 2017). All of the above are still trained on the original dialogue corpus and thereby cannot generate out-of-scope topics.
The second class seeks to bring extra information into the existing corpus, like structured knowledge (Zhao et al., 2018; Ghazvininejad et al., 2018; Dinan et al., 2019), personal information (Li et al., 2016b; Zhang et al., 2018a) or emotions (Shen et al., 2017b; Zhou et al., 2018). However, corpora with such annotations can be extremely costly to obtain and are usually limited to a specific domain with a small data size. Some recent research has started to perform dialogue style transfer based on personal speeches or TV scripts (Niu and Bansal, 2018; Gao et al., 2019). Our motivation differs from theirs in that we aim at enriching general dialogue generation with abundant non-conversational text instead of being constrained to one specific type of style.
Back translation is widely used in unsupervised machine translation (Sennrich et al., 2016; Lample et al., 2018a; Artetxe et al., 2018) and has recently been extended to related areas like style transfer (Subramanian et al., 2019), summarization (Zhao et al., 2019) and data-to-text (Chang et al., 2020). To the best of our knowledge, it has never been applied to dialogue generation. Our work treats the context and non-conversational text as unpaired source-target data, and the back-translation idea is naturally adopted to learn the mapping between them. The contents of the non-conversational text can then be effectively utilized to enrich the dialogue generation.
Dataset
We would like to collect non-conversational utterances that stay close to daily-life topics and can potentially be used to augment the response space. The utterances should be neither too long nor too short, similar to our daily chitchat. Therefore, we collect data from the following three sources:
1. Forum comments. We collect comments from zhihu², a popular Chinese forum. Selected comments are restricted to have more than 10 likes and fewer than 30 words³.
2. Idioms. We crawl idioms, famous quotes, proverbs and locutions from several websites. These phrases are normally highly-refined and graceful, which we believe might provide a useful augmentation for responses.
3. Book snippets. We select the top 1,000 favorite novels or prose collections from wechat read⁴. Snippets highlighted by readers, which are usually quintessential passages, are kept if their length is in the range of 10-30 words.
We further filter out sentences with offensive or discriminatory language by phrase matching against a large blocklist. The resulting corpus contains over 1M utterances. The statistics for each source are listed in Table 2.
Approach
Let $D = \{(X_1, Y_1), (X_2, Y_2), \dots, (X_N, Y_N)\}$ denote the parallel conversational corpus, where $X_i$ is the context and $Y_i$ the corresponding response. $D_T = \{T_1, T_2, \dots, T_M\}$ denotes our collected corpus, where $T_i$ is a non-conversational utterance.
As the standard seq2seq model trained only on D tends to generate over-generic responses, our purpose is to diversify the generated responses by leveraging the non-conversational corpus D_T, whose utterances are semantically and syntactically much richer than the responses contained in D. In the following section, we first go through several baseline systems, then introduce our proposed method based on back translation.
Retrieval-based System
The first approach we consider is a retrieval-based system that treats all sentences contained in D_T as candidate responses. As the proportion of generic utterances in D_T is much lower than in D, diversity is largely improved. Standard retrieval algorithms based on context matching (Wu et al., 2017; Bartl and Spanakis, 2017) do not apply here, since non-conversational text does not come with a corresponding context. Therefore, we train a backward seq2seq model on the parallel conversational corpus D to maximize p(X_i | Y_i). The score assigned by the backward model, which can be seen as an estimate of the point-wise mutual information, is used to rank the responses (Li et al., 2016a).⁵
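The ranking step can be sketched as follows. This is a minimal illustration in which a generic `backward_logprob(candidate, context)` callable stands in for the trained backward seq2seq; it is not the authors' implementation.

```python
from typing import Callable, List, Tuple


def rank_candidates(
    context: str,
    candidates: List[str],
    backward_logprob: Callable[[str, str], float],
) -> List[Tuple[float, str]]:
    """Rank non-conversational candidates T by the backward score log P_b(context | T).

    `backward_logprob(src, tgt)` is assumed to return the log-likelihood that the
    backward seq2seq assigns to generating `tgt` (the context) from `src` (the
    candidate response). Higher is better: it rewards candidates that "explain"
    the context, approximating point-wise mutual information and penalizing
    generic utterances.
    """
    scored = [(backward_logprob(cand, context), cand) for cand in candidates]
    return sorted(scored, key=lambda x: x[0], reverse=True)


if __name__ == "__main__":
    # Toy stand-in for a trained backward model: favor character overlap with the context.
    def toy_backward_logprob(candidate: str, context: str) -> float:
        return len(set(candidate) & set(context)) / max(len(set(context)), 1)

    print(rank_candidates("暗恋的人却不喜欢我",
                          ["摸摸头", "暗恋这碗酒,谁喝都会醉啊"],
                          toy_backward_logprob))
```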
The major limitation of the retrieval-based system is that it can only produce responses from a finite set of candidates. The model can work well only if an appropriate response already exists in the candidate bank. Nonetheless, due to the large size of the non-conversational corpus, this approach is a very strong baseline.
Weighted Average
The second approach is to take a weighted average score of a seq2seq model trained on D and a language model trained on D_T when decoding responses. The idea has been widely utilized in domain adaptation for text generation tasks (Koehn and Schroeder, 2007; Wang et al., 2017; Niu and Bansal, 2018). In our scenario, we hope the generated responses share the diverse topics and styles of the non-conversational text, yet stay relevant to the dialogue context. The seq2seq model S2S is trained on D as an indicator of how relevant each response is to the context. A language model L is trained on D_T to measure how well the response matches the domain of D_T. The decoding probability for generating word w at time step t is assigned by

$$p_t(w) = \alpha\,\mathrm{S2S}_t(w) + (1 - \alpha)\,L_t(w) \tag{1}$$

where α is a hyperparameter that adjusts the balance between the two. Setting α = 1 degenerates into the standard seq2seq model, while α = 0 totally ignores the dialogue context.
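A minimal sketch of this interpolation at a single decoding step is shown below; `p_s2s` and `p_lm` stand for the next-token distributions produced by the two models, and the commented decoding loop with `s2s_step` / `lm_step` is hypothetical rather than the paper's code.

```python
import numpy as np


def mixed_next_token_distribution(p_s2s: np.ndarray, p_lm: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Eq. (1): p_t(w) = alpha * S2S_t(w) + (1 - alpha) * L_t(w).

    p_s2s: next-token distribution from the seq2seq model (relevance to context).
    p_lm:  next-token distribution from the LM trained on non-conversational text.
    """
    assert p_s2s.shape == p_lm.shape
    p = alpha * p_s2s + (1.0 - alpha) * p_lm
    return p / p.sum()  # renormalize against numerical drift


# Greedy decoding loop sketch (hypothetical step functions):
# while not done:
#     p = mixed_next_token_distribution(s2s_step(context, prefix), lm_step(prefix), alpha=0.5)
#     prefix.append(int(p.argmax()))
```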
Multi-task
The third approach is based on multi-task learning. A seq2seq model is trained on the parallel conversational corpus D while an autoencoder model is trained on the non-parallel monologue data D T . Both models share the decoder parameters to facilitate each other. The idea was first experimented on machine translation in order to leverage large amounts of target-side monolingual text (Luong et al., 2016;Sennrich et al., 2016). Luan et al. (2017) extended it to conversational models for speaker-role adaptation. The intuition is that by tying the decoder parameters, the seq2seq and autoencoder model can learn a shared latent space between the dialogue corpus and non-conversational text. When decoding, the model can generate responses with features from both sides.
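A minimal PyTorch sketch of this shared-decoder setup follows. Single-layer GRUs, a padding id of 0, and teacher forcing are simplifying assumptions made here for brevity; the paper itself uses two-layer LSTM encoder/decoders, so this is an illustration of the idea rather than the authors' architecture.

```python
import torch
import torch.nn as nn


class SharedDecoderMultiTask(nn.Module):
    """Seq2seq (context -> response) and autoencoder (text -> text) sharing one decoder."""

    def __init__(self, vocab_size: int, emb: int = 300, hid: int = 500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.ctx_encoder = nn.GRU(emb, hid, batch_first=True)   # encodes dialogue contexts (D)
        self.txt_encoder = nn.GRU(emb, hid, batch_first=True)   # encodes non-conversational text (D_T)
        self.decoder = nn.GRU(emb, hid, batch_first=True)       # shared between both tasks
        self.out = nn.Linear(hid, vocab_size)
        self.loss = nn.CrossEntropyLoss(ignore_index=0)         # 0 assumed to be the padding id

    def _decode(self, h0: torch.Tensor, tgt_in: torch.Tensor) -> torch.Tensor:
        dec, _ = self.decoder(self.embed(tgt_in), h0)
        return self.out(dec)

    def forward(self, ctx, resp_in, resp_out, mono_in, mono_out):
        # Task 1: response generation conditioned on the dialogue context.
        _, h_ctx = self.ctx_encoder(self.embed(ctx))
        seq2seq_logits = self._decode(h_ctx, resp_in)
        # Task 2: autoencoding of non-conversational text through the same decoder.
        _, h_txt = self.txt_encoder(self.embed(mono_in))
        ae_logits = self._decode(h_txt, mono_in)
        l1 = self.loss(seq2seq_logits.reshape(-1, seq2seq_logits.size(-1)), resp_out.reshape(-1))
        l2 = self.loss(ae_logits.reshape(-1, ae_logits.size(-1)), mono_out.reshape(-1))
        return l1 + l2
```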
Back Translation
Finally, we consider the back translation technique commonly used for unsupervised machine translation (Artetxe et al., 2018;Lample et al., 2018a). The basic idea is to first initialize the model properly to provide a good starting point, then iteratively perform backward and forward translation to learn the correspondence between context and unpaired non-conversational utterances.
Initialization Unlike unsupervised machine translation, the source and target side in our case come from the same language, and we already have a parallel conversational corpus D, so we can get rid of the careful embedding alignment and autoencoding steps as in Lample et al. (2018b). For the initialization, we simply train a forward and backward seq2seq model on D. The loss function is:
$$\mathbb{E}_{X_i, Y_i \sim D}\big[-\log P_f(Y_i \mid X_i) - \log P_b(X_i \mid Y_i)\big] \tag{2}$$
where P_f and P_b are the decoding likelihoods defined by the forward and backward seq2seq models respectively. We optimize Eq. 2 until convergence. Afterwards, the forward and backward seq2seq models have learned the backbone mapping relation between a context and its response in a conversational structure.
Backward After the initialization, we use the backward seq2seq to create pseudo parallel training examples from the non-conversational text D T . The forward seq2seq is then trained on the pseudo pairs. The objective is to minimize:
$$\mathbb{E}_{T_i \sim D_T}\big[-\log P_f(T_i \mid b(T_i))\big], \qquad b(T_i) = \operatorname*{arg\,max}_u P_b(u \mid T_i) \tag{3}$$
where we approximate the arg max function with a beam search decoder applied to the backward model P_b(u | T_i). Because of the non-differentiability of the arg max operator, the gradient is only passed through P_f but not P_b.⁶ As P_b is already well initialized by training on the parallel corpus D, the back-translated pseudo pairs are of reasonable quality.

Forward The forward translation follows a similar step as back translation. The forward seq2seq P_f translates a context into a response, which in return forms a pseudo pair to train the backward model P_b. The objective is to minimize:
$$\mathbb{E}_{X_i \sim D}\big[-\log P_b(X_i \mid f(X_i))\big], \qquad f(X_i) = \operatorname*{arg\,max}_v P_f(v \mid X_i) \tag{4}$$
where the arg max function is again approximated with a beam search decoder and the gradient is only backpropagated through P b . Though X i has its corresponding Y i in D, we drop Y i and instead train on forward translated pseudo pairs {X i , f (X i )}.
As P f is trained by leveraging data from D T , f (X i ) can have superior diversity compared with Y i . The encoder parameters are shared between the forward and backward models while decoders are separate. The backward and forward translation are iteratively performed to close the gap between P f and P b (Hoang et al., 2018;Cotterell and Kreutzer, 2018). The effects of non-conversational text are strengthened after each iteration. Eventually, the forward model will be able to produce diverse responses covering the wide topics in D T . Algorithm 1 depicts the training process.
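The overall control flow of Algorithm 1 can be sketched as below. All callables (`train_forward`, `train_backward`, `decode_forward`, `decode_backward`) are placeholders for seq2seq training and beam-search decoding routines; only the loop structure mirrors the paper.

```python
from typing import Callable, List, Tuple


def iterative_back_translation(
    parallel: List[Tuple[str, str]],          # D: (context, response) pairs
    non_conversational: List[str],            # D_T: unpaired utterances
    train_forward: Callable[[List[Tuple[str, str]]], None],
    train_backward: Callable[[List[Tuple[str, str]]], None],
    decode_forward: Callable[[str], str],     # beam search with P_f
    decode_backward: Callable[[str], str],    # beam search with P_b
    iterations: int = 4,
) -> None:
    """Sketch of the training paradigm in Section 4.4 / Algorithm 1 (control flow only)."""
    # Initialization: forward and backward seq2seq trained on the parallel corpus D (Eq. 2).
    train_forward(parallel)
    train_backward([(y, x) for x, y in parallel])

    for _ in range(iterations):
        # Backward step (Eq. 3): synthesize pseudo contexts for D_T, then update P_f.
        pseudo_fwd = [(decode_backward(t), t) for t in non_conversational]
        train_forward(pseudo_fwd)
        # Forward step (Eq. 4): synthesize diverse pseudo responses for D's contexts, then update P_b.
        pseudo_bwd = [(decode_forward(x), x) for x, _ in parallel]
        train_backward(pseudo_bwd)
```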
Experiments
Datasets
We conduct our experiments on two Chinese dialogue corpora, Weibo (Shang et al., 2015b) and Douban (Wu et al., 2017). Weibo⁷ is a popular Twitter-like microblogging service in China, on which a user can post short messages and other users can comment on a published post. The post-comment pairs are crawled as short-text conversations. Each utterance has 15.4 words on average, and the data is split into train/valid/test subsets with 4M/40k/10k utterance pairs. Douban⁸ is a Chinese social network service where people can chat about different topics online. The original data contains 1.1M multi-turn conversations. We split them into two-turn context-response pairs, resulting in 10M train, 500k valid and 100k test samples.

⁷ http://www.weibo.com/
General Setup
For all models, we use a two-layer LSTM (Hochreiter and Schmidhuber, 1997) encoder/decoder structure with hidden size 500 and word embedding size 300. Models are trained with the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.15. We set the batch size to 256 and clip gradients at 5. We build our vocabulary with character-based segmentation for Chinese. For non-Chinese tokens, we simply split by space and keep all unique tokens that appear at least 5 times. Utterances are cut down to at most 50 tokens and fed to every batch. We implement our models based on the OpenNMT toolkit (Klein et al., 2017); other hyperparameters are set to their default values.
Compared Models
We compare our model with the standard seq2seq and four popular variants which were proposed to improve the diversity of generated utterances. All of them are trained only on the parallel conversational corpus:
Standard The standard seq2seq with beam search decoding (size 5).
MMI The maximum mutual information decoding which reranks the decoded responses with a backward seq2seq model (Li et al., 2016a). The hyperparameter λ is set to 0.5 as suggested, and 200 candidates per context are sampled for re-ranking.

Diverse Sampling The diverse beam search strategy proposed in Vijayakumar et al. (2018) which explicitly controls for the exploration and exploitation of the search space. We set the number of groups as 5, λ = 0.3 and use the Hamming diversity as the penalty function as in the paper.
Nucleus Sampling Proposed in Holtzman et al. (2019), it allows for diverse sequence generations. Instead of decoding with a fixed beam size, it samples text from the dynamic nucleus. We use the default configuration and set p = 0.9 (a minimal sketch of top-p sampling is given after this list of compared models).
CVAE The conditional variational autoencoder (Serban et al., 2017b;Zhao et al., 2017) which injects diversity by imposing stochastical latent variables. We use a latent variable with dimension 100 and utilize the KL-annealing strategy with step 350k and a word drop-out rate of 0.3 to alleviate the posterior collapse problem (Bowman et al., 2016).
Furthermore, we compare the four approaches mentioned in §4 which incorporate the collected non-conversational text:

Retrieval-based (§4.1) Due to the large size of the non-conversational corpus, exact ranking is extremely slow. Therefore, we first retrieve the top 200 matched texts with Elasticsearch based on the similarity of BERT embeddings (Devlin et al., 2019). Specifically, we pass sentences through BERT and derive a fixed-sized vector by averaging the outputs from the second-to-last layer (May et al., 2019).⁹ The 200 candidates are then ranked with the backward score.¹⁰
Weighted Average (§4.2) We set α = 0.5 in Eq. 1, which weights context relevance and diversity equally.
Multi-task (§4.3) We concatenate each context-response pair with a non-conversational utterance and train with a mixed objective of seq2seq and autoencoding (by sharing the decoder).
Back Translation ( §4.4) We perform the iterative backward and forward translation 4 times for both datasets. We observe the forward cross entropy loss converges after 4 iterations.
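For reference, a minimal top-p (nucleus) sampling sketch, as mentioned under the Nucleus Sampling baseline above, is given here. It operates on a single next-token distribution with NumPy and is an illustration of the technique rather than the decoding code used in our experiments.

```python
import numpy as np


def nucleus_sample(probs: np.ndarray, p: float = 0.9, rng=None) -> int:
    """Top-p (nucleus) sampling over a next-token distribution (Holtzman et al., 2019).

    Keeps the smallest set of most-probable tokens whose cumulative mass reaches p,
    renormalizes within that set, and samples from it.
    """
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                  # token ids sorted by probability, descending
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1        # smallest prefix with cumulative mass >= p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))


# Example: a peaked distribution mostly samples from its top two or three tokens.
print(nucleus_sample(np.array([0.5, 0.3, 0.1, 0.05, 0.05]), p=0.9))
```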
Results
As for the experimental results, we report the automatic and human evaluation in §6.1 and §6.2 respectively. Detailed analyses are presented in §6.3 to elaborate on the differences among model performances, together with some case studies.
Automatic Evaluation
Evaluating dialogue generation is extremely difficult. Metrics which measure word-level overlap, like BLEU (Papineni et al., 2002), have been widely used for dialogue evaluation. However, these metrics do not fit our setting well: since we diversify the response generation with an external corpus, the generations will inevitably differ greatly from the ground-truth references in the original conversational corpus. Though we report the BLEU score anyway and list all the results in Table 3, it is worth mentioning that the BLEU score itself is by no means a reliable metric for measuring the quality of dialogue generations.
Diversity Diversity is a major concern for dialogue generation. As in Li et al. (2016a), we measure diversity by the ratio of distinct unigrams (Dist-1) and bigrams (Dist-2) in all generated responses. As the ratio itself ignores the frequency distribution of n-grams, we further calculate the entropy of the empirical n-gram distribution (Zhang et al., 2018b); a larger entropy indicates a more diverse distribution. We report the entropy of four-grams (Ent-4) in Table 3. Among models trained only on the conversational corpus, the standard seq2seq performs worst as expected. All variants improve the diversity more or less. Nucleus sampling and CVAE generate the most diverse responses, with nucleus sampling winning on 6 out of the 8 metrics. By incorporating the non-conversational corpus, the diversity of generated responses improves dramatically. The retrieval-based system and our model perform best, in most cases even better than the human references. This can happen because we enrich the response generation with external resources, so the diversity can exceed that of the original conversational corpus. The weighted-average and multi-task models are relatively worse, though they still greatly outperform models trained only on the conversational corpus. We also observe that our model improves over the standard seq2seq only slightly after one iteration; as more iterations are added, the diversity improves gradually.
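The diversity statistics above can be computed as in the following sketch, which follows the stated definitions rather than the authors' evaluation script; the toy usage assumes whitespace tokenization, whereas the paper segments Chinese by character.

```python
import math
from collections import Counter
from typing import List, Tuple


def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def distinct_n(responses: List[List[str]], n: int) -> float:
    """Dist-n: ratio of distinct n-grams over all generated n-grams (Li et al., 2016a)."""
    all_ngrams = [g for r in responses for g in ngrams(r, n)]
    return len(set(all_ngrams)) / max(len(all_ngrams), 1)


def entropy_n(responses: List[List[str]], n: int = 4) -> float:
    """Ent-n: entropy of the empirical n-gram distribution (Zhang et al., 2018b)."""
    counts = Counter(g for r in responses for g in ngrams(r, n))
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts.values())


# Toy usage on whitespace-tokenized responses.
resps = [r.split() for r in ["i do not know", "i do not know", "every rose has its thorn"]]
print(distinct_n(resps, 1), distinct_n(resps, 2), entropy_n(resps, 4))
```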
Relevance Measuring the context-response relevance automatically is tricky in our case. The typical way of using scores from forward or backward models is not suitable here, as our model borrows information from extra resources: the generated responses are out of scope for a seq2seq model trained only on the conversational corpus and would thus be assigned very low scores. Apart from the BLEU-2 score, we therefore further evaluate relevance by leveraging an adversarial discriminator. As has been shown in previous research, discriminative models are generally less biased toward high-frequency utterances and more robust than their generative counterparts (Lu et al., 2017; Luo et al., 2018). The discriminator is trained on the parallel conversational corpus to distinguish correct responses from randomly sampled ones. We encode the context and response separately with two different LSTM networks and output a binary signal indicating relevant or not.¹¹ The relevance score is defined as the success rate with which the model fools the adversarial classifier into believing its generations (Adver in Table 3). The retrieval-based model, which generates the most diverse responses, achieves the lowest relevance score; the restriction that it can only select from a set of fixed utterances does affect the relevance a lot.¹² Note that the discriminator is also trained on the same bilateral conversational corpus, putting our model in a naturally disadvantageous position due to the incorporation of out-of-scope non-conversational text. Nonetheless, our model still achieves a competitive relevance score even compared with models trained only on the conversational corpus. This suggests our model does learn the proper patterns in human conversations instead of randomly synthesizing diverse generations.

¹¹ In our experiment, the discriminator performs reasonably well in the four scenarios outlined in prior work and can thus be considered a fair evaluation metric. ¹² The fact that we only rank the 200 most similar utterances might also matter. We tried increasing the size to 1,000 but observed no tangible improvement; the candidate size required for a decent relevance score can be unbearably large.
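A minimal sketch of the adversarial relevance discriminator described above is given below (PyTorch). The architecture details and hyperparameters here are assumptions for illustration, not the authors' exact classifier.

```python
import torch
import torch.nn as nn


class RelevanceDiscriminator(nn.Module):
    """Binary classifier over (context, response) pairs, as used for the Adver metric.

    The context and response are encoded by two separate LSTMs; their final hidden
    states are concatenated and mapped to a relevant / not-relevant logit. It is
    trained on the conversational corpus with randomly sampled responses as negatives.
    """

    def __init__(self, vocab_size: int, emb: int = 300, hid: int = 500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.ctx_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.rsp_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.clf = nn.Linear(2 * hid, 1)

    def forward(self, ctx: torch.Tensor, rsp: torch.Tensor) -> torch.Tensor:
        _, (h_ctx, _) = self.ctx_lstm(self.embed(ctx))
        _, (h_rsp, _) = self.rsp_lstm(self.embed(rsp))
        return self.clf(torch.cat([h_ctx[-1], h_rsp[-1]], dim=-1)).squeeze(-1)


# The Adver score is then the fraction of generated responses the discriminator
# classifies as relevant, e.g.:
# adver = (torch.sigmoid(disc(ctx_batch, generated_batch)) > 0.5).float().mean()
```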
Human Evaluation
Apart from automatic evaluations, we also employed crowdsourced judges to evaluate the quality of generations for 500 contexts from each dataset. We focus on evaluating the generated responses regarding (1) relevance: whether they coincide with the context (Rel), (2) interestingness: whether they are interesting for people to continue the conversation (Inter), and (3) fluency: whether they are grammatically fluent (Flu).¹³ Each sample gets one point if judged as yes and zero otherwise. Each pair is judged by three participants and the score supported by most people is adopted. The averaged scores are summarized in Table 4. We compare the standard seq2seq model, nucleus sampling, which performs best among all seq2seq variants, and the four approaches leveraging the non-conversational text. All models perform decently well as for fluency except the weighted-average one. The scores for diversity and relevance generally correlate well with the automatic evaluations. Overall the back-translation model is competitive with respect to fluency and relevance, while generating much more interesting responses for human evaluators. It also significantly outperforms the other three baseline approaches in its capability to properly make use of the non-conversational corpus.

Analysis

Effect of Iterative Training To show the importance of the iterative training paradigm, we visualize the change of the validation loss in Figure 2.¹⁴ The forward validation loss is computed as the perplexity of the forward seq2seq on the pseudo context-response pairs obtained from the backward model, and vice versa for the backward loss. It approximately quantifies the KL divergence between the two (Kim and Rush, 2016; Cotterell and Kreutzer, 2018). As the iterations proceed, the knowledge from the backward model is gradually distilled into the forward model. The divergence between them reaches the lowest point at iteration 4, where we stop our model. Table 5 further displays examples for different iterations. Iteration 0 generates mostly generic responses. Iteration 1 starts to become more diverse but still struggles with fluency and relevance. In the final iteration, the model learns to incorporate novel topics from the non-conversational text while maintaining relevance with the context.

Diversity of Generation

We find the back translation model can generate both semantically and syntactically novel responses; some examples are shown in Table 6. To find semantically novel responses, we segment them into phrases and find those containing novel phrases that do not exist in the conversational corpus. As in the first example of Table 6, the word 胖若两人 only exists in the non-conversational corpus. The model successfully learnt its semantic meaning and adopted it to generate novel responses. It is also common that the model learns frequent syntactic structures from the non-conversational corpus. In the second example, it learnt the pattern "To ... is easy, to ... is hard", which appears frequently in the non-conversational corpus, and used it to produce novel responses with the same structure. Note that neither generation from the BT model appears exactly in the non-conversational corpus; the model must produce them by correctly understanding the meaning of the phrase components instead of memorizing the utterances verbatim.
Conclusion and Future Work
We propose a novel way of diversifying dialogue generation by leveraging non-conversational text.
To do so, we collect a large-scale corpus from forum comments, idioms and book snippets. By training the model through iterative back translation, it is able to significantly improve the diversity of generated responses both semantically and syntactically. We compare it with several strong baselines and find it achieved the best overall performance. The model can be potentially improved by filtering the corpus according to different domains, or augmenting with a retrieve-and-rewrite mechanism, which we leave for future work.
Figure 1: Comparison of four approaches leveraging the non-conversational text. S2S_fw, S2S_bw and LM indicate the forward seq2seq, backward seq2seq and language model respectively. (d) visualizes the process of one iteration of the back translation approach. Striped components are not updated in each iteration.

Figure 2: Change of validation loss across iterations.
Table 1: A daily dialogue and non-conversational text from three sources. The contents of non-conversational text can be potentially utilized to enrich the response generation.
Table 2: Statistics of Non-Conversational Text.
Table 3: Automatic evaluation on the Weibo and Douban datasets. Upper areas are models trained only on the conversational corpus. Middle areas are baseline models incorporating the non-conversational corpus. Bottom areas are our model with different numbers of iterations. Best results in every area are bolded.
Table 4: Human Evaluation Results.
Context: 一直单身怎么办 (Being always single, what should I do?)
Response: 勇敢一点多去加好友啊 (Be brave and add more people to friends.)
Generation:
[Iteration 0]: 不知道该怎么办 (I don't know what to do.)
[Iteration 1]: 单身不可怕,单身不可怕 (Being single is nothing, being single is nothing.)
[Iteration 4]: 斯人若彩虹,遇上方知有 (Every once in a while you find someone who's iridescent, and when you do, nothing will ever compare.)

Table 5: Example of response generation in different iterations.
¹⁴ Iteration 0 means before the iteration starts but after the initialization stage, i.e., equal to a standard seq2seq.

CXT: 最近又长胖了 (Fleshing out again recently.)
NS: 我也是这样的 (Me too.)
BT: 哈哈哈莫非已经胖若两人了 (hahaha already as fat as two people?)

CXT: 爱一个人真的不能跟她表白吗? (Why loving someone but cannot confess?)
NS: 不一定的 (Not necessarily.)
BT: 爱一个人不难,难的是放下一个人。 (To love is easy, to give up is hard.)

Table 6: Context (CXT), example generations from nucleus sampling (NS) and back-translation (BT). Novel words and syntax patterns are highlighted.
Code and dataset available at https://github.com/chin-gyou/Div-Non-Conv
https://www.zhihu.com 3 The posts are usually very long, describing a specific social phenomenon or news event, so building parallel conversational corpus from post-comment pairs is difficult. Nonetheless, these high-liked comments are normally high-quality themselves and can be used to augment the response space. 4 https://weread.qq.com/
The backward seq2seq model measures the context relevance better than forward models, since the latter are highly biased toward generic utterances (Li et al., 2016a; Zhang et al., 2018b).
As also noted in Lample et al. (2018b), backpropagating further through P_b brings no improvement.
https://www.douban.com/group
https://github.com/hanxiao/bert-as-service 10 This makes it similar to MMI reranking, whose 200 candidates are from seq2seq decodings instead of top-matched non-conversational utterances.
We do not evaluate the retrieval-based model for the fluency score as the retrieved utterances are fluent by construct.
Acknowledgments
We thank the anonymous reviewers for valuable comments. Xiaoyu Shen is supported by an IMPRS-CS fellowship. The work is partially funded by the DFG collaborative research center SFB 1102.
Unsupervised neural machine translation. Mikel Artetxe, Gorka Labaka, Eneko Agirre, Kyunghyun Cho, ICLRMikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- chine translation. ICLR.
A retrieval-based dialogue system utilizing utterance and context embeddings. Alexander Bartl, Gerasimos Spanakis, 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEEAlexander Bartl and Gerasimos Spanakis. 2017. A retrieval-based dialogue system utilizing utterance and context embeddings. In 2017 16th IEEE Inter- national Conference on Machine Learning and Ap- plications (ICMLA), pages 1120-1125. IEEE.
Generating sentences from a continuous space. Luke Samuel R Bowman, Oriol Vilnis, Andrew Vinyals, Rafal Dai, Samy Jozefowicz, Bengio, Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. The 20th SIGNLL Conference on Computational Natural Language LearningSamuel R Bowman, Luke Vilnis, Oriol Vinyals, An- drew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Confer- ence on Computational Natural Language Learning, pages 10-21.
Unsupervised pidgin text generation by pivoting english data and self-training. Ernie Chang, David Ifeoluwa Adelani, Xiaoyu Shen, Vera Demberg, arXiv:2003.08272arXiv preprintErnie Chang, David Ifeoluwa Adelani, Xiaoyu Shen, and Vera Demberg. 2020. Unsupervised pidgin text generation by pivoting english data and self-training. arXiv preprint arXiv:2003.08272.
Explaining and generalizing back-translation through wakesleep. Ryan Cotterell, Julia Kreutzer, arXiv:1806.04402arXiv preprintRyan Cotterell and Julia Kreutzer. 2018. Explain- ing and generalizing back-translation through wake- sleep. arXiv preprint arXiv:1806.04402.
Improving neural conversational models with entropy-based data filtering. Richárd Csáky, Patrik Purgai, Gábor Recski, 10.18653/v1/P19-1567Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsRichárd Csáky, Patrik Purgai, and Gábor Recski. 2019. Improving neural conversational models with entropy-based data filtering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5650-5669, Florence, Italy. Association for Computational Linguistics.
Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Wizard of wikipedia: Knowledge-powered conversational agents. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, Jason Weston, ICLREmily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. ICLR.
Structuring latent spaces for stylized response generation. Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2019. Structuring latent spaces for stylized response gen- eration. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 1814-1823.
A knowledge-grounded neural conversation model. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-Tau Yih, Michel Galley, Thirty-Second AAAI Conference on Artificial Intelligence. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Confer- ence on Artificial Intelligence.
Iterative backtranslation for neural machine translation. Duy Vu Cong, Philipp Hoang, Gholamreza Koehn, Trevor Haffari, Cohn, Proceedings of the 2nd Workshop on Neural Machine Translation and Generation. the 2nd Workshop on Neural Machine Translation and GenerationVu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.
Long short-term memory. Sepp Hochreiter, Jurgen Schmidhuber, Neural Computation. 98Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Ari Holtzman, Jan Buys, Maxwell Forbes, Yejin Choi, arXiv:1904.09751The curious case of neural text degeneration. arXiv preprintAri Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degener- ation. arXiv preprint arXiv:1904.09751.
Sequencelevel knowledge distillation. Yoon Kim, Alexander M Rush, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingYoon Kim and Alexander M Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327.
Adam: A method for stochastic optimization. Diederik Kingma, Jimmy Ba, ICLRDiederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR.
Opennmt: Opensource toolkit for neural machine translation. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, Alexander Rush, Proceedings of ACL 2017, System Demonstrations. ACL 2017, System DemonstrationsGuillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. Opennmt: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72.
Experiments in domain adaptation for statistical machine translation. Philipp Koehn, Josh Schroeder, Proceedings of the second workshop on statistical machine translation. the second workshop on statistical machine translationPhilipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine transla- tion. In Proceedings of the second workshop on sta- tistical machine translation, pages 224-227.
Unsupervised machine translation using monolingual corpora only. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, Marc'aurelio Ranzato, ICLRGuillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. ICLR.
| [
"https://github.com/hanxiao/"
] |
[
"Syntactic Search by Example",
"Syntactic Search by Example"
] | [
"Micah Shlain \nAllen Institute for AI\nTel AvivIsrael\n\nBar Ilan University\nRamat-GanIsrael\n",
"Hillel Taub-Tabib \nAllen Institute for AI\nTel AvivIsrael\n",
"Shoval Sadde \nAllen Institute for AI\nTel AvivIsrael\n",
"Yoav Goldberg yoavg@allenai.orgyogo@cs.biu.ac.il \nAllen Institute for AI\nTel AvivIsrael\n\nBar Ilan University\nRamat-GanIsrael\n"
] | [
"Allen Institute for AI\nTel AvivIsrael",
"Bar Ilan University\nRamat-GanIsrael",
"Allen Institute for AI\nTel AvivIsrael",
"Allen Institute for AI\nTel AvivIsrael",
"Allen Institute for AI\nTel AvivIsrael",
"Bar Ilan University\nRamat-GanIsrael"
] | [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"
] | We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we introduce a light-weight query language that does not require the user to know the details of the underlying syntactic representations, and instead to query the corpus by providing an example sentence coupled with simple markup. Search is performed at an interactive speed due to an efficient linguistic graphindexing and retrieval engine. This allows for rapid exploration, development and refinement of syntax-based queries. We demonstrate the system using queries over two corpora: the English wikipedia, and a collection of English pubmed abstracts. A demo of the wikipedia system is avilable at: https: //allenai.github.io/spike/ . | 10.18653/v1/2020.acl-demos.3 | [
"https://www.aclweb.org/anthology/2020.acl-demos.3.pdf"
] | 219,303,477 | 2006.03010 | 96d152fa6706b034d91cb286a2e191c84a9af725 |
Syntactic Search by Example
Association for Computational Linguistics. Copyright Association for Computational Linguistics. July 5 - July 10, 2020.
Micah Shlain
Allen Institute for AI
Tel Aviv, Israel
Bar Ilan University
Ramat-Gan, Israel
Hillel Taub-Tabib
Allen Institute for AI
Tel Aviv, Israel
Shoval Sadde
Allen Institute for AI
Tel Aviv, Israel
Yoav Goldberg yoavg@allenai.org yogo@cs.biu.ac.il
Allen Institute for AI
Tel Aviv, Israel
Bar Ilan University
Ramat-Gan, Israel
Syntactic Search by Example
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. July 5 - July 10, 2020, page 17
We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we introduce a light-weight query language that does not require the user to know the details of the underlying syntactic representations, and instead to query the corpus by providing an example sentence coupled with simple markup. Search is performed at an interactive speed due to an efficient linguistic graph indexing and retrieval engine. This allows for rapid exploration, development and refinement of syntax-based queries. We demonstrate the system using queries over two corpora: the English wikipedia, and a collection of English pubmed abstracts. A demo of the wikipedia system is available at: https://allenai.github.io/spike/ .
Introduction
The introduction of neural-network based models into NLP brought with it a substantial increase in syntactic parsing accuracy. We can now produce accurate syntactically annotated corpora at scale. However, the produced representations themselves remain opaque to most users, and require substantial linguistic expertise to use. Patterns over syntactic dependency graphs1 can be very effective for interacting with linguistically-annotated corpora, either for linguistic retrieval or for information and relation extraction (Fader et al., 2011; Akbik et al., 2014; Valenzuela-Escárcega et al., 2015, 2018). However, their use in mainstream NLP as represented in ACL and affiliated venues remains limited. We argue that this is due to the high barrier of entry associated with the application of such patterns. Our aim is to lower this barrier and allow also linguistically-naïve users to effectively experiment with and develop syntactic patterns.
1 In this paper, we very loosely use the term "syntactic" to refer to a linguistically motivated graph-based annotation over a piece of text, where the graph is directed and there is a path between any two nodes. While this usually implies syntactic dependency trees or graphs (and indeed, our system currently indexes Enhanced English Universal Dependency graphs (Nivre et al., 2016; Schuster and Manning, 2016)), the system can work also with more semantic annotation schemes, e.g. (Oepen et al., 2015), given the availability of an accurate enough parser for them.
Our proposal rests on two components:
(1) A light-weight query language that does not require in-depth familiarity with the underlying syntactic representation scheme, and instead lets the user specify their intent via a natural language example and lightweight markup.
(2) A fast, near-real-time response time due to efficient indexing, allowing for rapid experimentation.
Figure 1 (next page) shows the interface of our web-based system. The user issued the query:
founder:[e]Paul was a t:[w]founder of entity:[e]Microsoft
The query specifies a sentence (Paul was a founder of Microsoft) and three named captures: founder, t and entity. The founder and entity captures should have the same entity-type as the corresponding sentence words (PERSON for Paul and ORGANIZATION for Microsoft, indicated by [e]), and the t capture should have the same word form as the one in the sentence (founder) (indicated by [w]). The syntactic relation between the captures should be the same as the one in the sentence, and the founder and entity captures should be expanded (indicated by the expansion diamond). The query is translated into a graph-based query, which is shown below the query, each graph-node associated with the query word that triggered it. The system also returned a list of matched sentences. The matched tokens for each capture group (founder, t and entity) are highlighted. The user can then issue another query, browse the results, or download all the results as a tab-separated file.
Existing syntactic-query languages
While several rich query languages over linguistic tree and graph structure exist, they require a substantial amount of expertise to use.2 The user needs to be familiar not only with the syntax of the query language itself, but to also be intimately familiar with the specific syntactic scheme used in the underlying linguistic annotations. For example, in Odin (Valenzuela-Escárcega et al., 2015), a dedicated language for pattern-based information extraction, the same rule as above is expressed as: The Spacy NLP toolkit3 also includes a pattern matcher over dependency trees, using a JSON-based syntax:
[{"PATTERN": {"ORTH": "founder"}, "SPEC": {"NODE_NAME": "t"}},
 {"PATTERN": {"ENT_TYPE": "PERSON"}, "SPEC": {"NODE_NAME": "founder", "NBOR_RELOP": ">nsubj", "NBOR_NAME": "t"}},
 {"PATTERN": {"ENT_TYPE": "ORGANIZATION"}, "SPEC": {"NODE_NAME": "entity", "NBOR_RELOP": ">nmod", "NBOR_NAME": "t"}}]
2 We focus here on systems that are based on dependency syntax, but note that many systems and query languages exist also for constituency-trees, e.g., TGREP/TGREP2, TigerSearch (Lezius et al., 2002), the linguists search engine (Resnik and Elkiss, 2005), Fangorn (Ghodke and Bird, 2012).
3 https://spacy.io/
Stanford's Core-NLP package (Manning et al., 2014) includes a dependency matcher called SEMGREX,4 which uses a more concise syntax: The dep search system5 from Turku university (Luotolahti et al., 2017) is designed to provide a rich and expressive syntactic search over large parsebanks. They use a lightweight syntax and support working against pre-indexed data, though they do not support named captures of specific nodes.
PERSON <nsubj founder >nmod ORG
While the different systems vary in the verboseness and complexity of their own syntax (indeed, the Turku system's syntax is rather minimal), they all require the user to explicitly specify the dependency relations between the tokens, making it challenging and error-prone to write, read or edit. The challenge grows substantially as the complexity of the pattern increases beyond the very simple example we show here.
Closest in spirit to our proposal is the PROPMINER system of Akbik et al. (2013), which lets the user enter a natural language sentence, mark spans as subject, predicate and object, and have a rule be generated automatically. However, the system is restricted to ternary subject-predicate-object patterns. Furthermore, the generated pattern is written in a path-expression SQL variant (SerQL, (Broekstra and Kampman, 2003)), which the user then needs to manually edit. For example, our query above will be translated to:
SELECT subject, predicate, object
FROM predicate.3 nsubj subject, predicate.3 nmod object,
WHERE subject POS NNP AND predicate.3 POS NN AND object POS NNP AND subject TEXT PAUL AND predicate.3 TEXT founder AND object TEXT Microsoft AND subject FULL_ENTITY AND object FULL_ENTITY
All these systems require the user to closely interact with linguistic concepts and explicitly specify graph-structures, posing a high barrier of entry for non-expert users. They also slow down expert users: formulating a complex query may require a few minutes. Furthermore, many of these query languages are designed to match against a provided sentence, and are not indexable. This requires iterating over all sentences in the corpus attempting to match each one, requiring substantial time to obtain matches from large corpora.
Augustinus et al. (2012) describe a system for syntactic search by example, which retrieves tree fragments and which is completely UI based. Our system takes a similar approach, but replaces the UI-only interface with an expressive textual query language, allowing for richer queries. We also return node matches rather than tree fragments.
Syntactic Search by Example
We propose a substantially simplified language, that has the minimal syntax and that does not require the user to know the underlying syntactic schema upfront (though it does not completely hide it from the user, allowing for exposure over time, and allowing control for expert users who understand the underlying syntactic annotation scheme).
The query language is designed to be linguistically expressive, simple to use and amenable to efficient indexing and query. The simplicity and indexing requirements do come at a cost, though: we purposefully do not support some of the features available in existing languages. We expect these features to correlate with expertise.6 At the same time, we also seamlessly support expressing arbitrary sub-graphs, a task which is either challenging or impossible with many of the other systems.
6 Example of a query feature we do not support is quantification, i.e., "nodes a and b should be connected via a path that includes one or more 'conj' edges".
The language is based on the following principles:
(1) The core of the query is a natural language sentence.
(2) A user can specify the tokens of interest and constraints on them via lightweight markup.
(3) While expert users can specify complex token constraints, effective constraints can be specified by pulling values from the query words.
The required syntactic knowledge from the user, both in terms of the syntax of the query language itself and in terms of the underlying linguistic formalism, remains minimal.
Graph Query Formalism
The language is structured around between-token relations and within-token constraints, where tokens can be captured.
Formally, our query G = (V, E) is a labeled directed graph, where each node v_i ∈ V corresponds to a token, and a labeled edge e = (v_i, v_j, ℓ) ∈ E between the nodes corresponds to a between-token syntactic constraint. This query graph is then matched against parsed target sentences, looking for a correspondence between query nodes and target-sentence nodes that adhere to the token and edge constraints.
For example, the following graph specifies three tokens, where the first and second are connected via an 'xcomp' relation, and the second and third via a 'dobj' relation. The first token is unconstrained, while the second token must have the POS-tag of VB, and the third token must be the word home.
Sentences whose syntactic graph has a subgraph that aligns to the query graph and adheres to the constraints will be considered as matches. Examples of such matching sentences are:
-John wanted w to go v home h after lunch.
-It was a place she decided w to call v her home h . The <w>, <v> and <h> marks on the nodes denote named captures. When matching a sentence, the sentence tokens corresponding to the graph-nodes will be bound to variables named 'w', 'v' and 'h', in our case {w=wanted, v=go, h=home} for the first sentence and {w=decided, v=call, h=home} for the second. Graph nodes can also be unnamed, in which case they must match sentence tokens but will not bind to any variable. The graph structure is not meant to be specified by hand,7 but rather to be inferred from the example-based query language described in the next section (an example query resulting in this graph is "They w:wanted to v:[tag]go h:[word]home").
Between-token constraints correspond to labeled directed edges in the sentence's syntactic graph.
Within-token constraints correspond to properties of individual sentence tokens. 8 For each property we specify a list of possible values (a disjunction) and if lists for several properties are provided, we require all of them to hold (a conjunction). For example, in the constraint tag=VBD|VBZ&lemma=buy we look for tokens with POS-tag of either VBD or VBZ, and the lemma buy. The list of possible values for a property can be specified as a pipe-separated list (tag=VBD|VBZ|VBN) or as a regular expression (tag=/VB[DZN]/).
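To make the matching semantics concrete, the following Python sketch is our own illustration, not the system's implementation: it brute-forces all assignments of query nodes to sentence tokens instead of using an index, and it assumes each token is a dict with word, lemma, tag and entity fields.
import re
from itertools import permutations

def token_matches(token, constraints):
    # constraints: e.g. {"tag": "VBD|VBZ", "lemma": "buy"}; values are
    # pipe-separated alternatives or /regex/; all listed properties must hold.
    for prop, spec in constraints.items():
        value = token[prop]
        if spec.startswith("/") and spec.endswith("/"):
            if not re.fullmatch(spec[1:-1], value):
                return False
        elif value not in spec.split("|"):
            return False
    return True

def match(query_nodes, query_edges, sent_tokens, sent_edges):
    # query_nodes: {name: constraints}; query_edges: {(name_a, name_b): label}
    # sent_edges: set of (head_index, dependent_index, label) triples
    names = list(query_nodes)
    for assignment in permutations(range(len(sent_tokens)), len(names)):
        binding = dict(zip(names, assignment))
        if not all(token_matches(sent_tokens[binding[n]], query_nodes[n]) for n in names):
            continue
        if all((binding[a], binding[b], lbl) in sent_edges for (a, b), lbl in query_edges.items()):
            yield binding
For instance, query_nodes = {"v": {"tag": "VB"}, "h": {"word": "home"}} together with query_edges = {("v", "h"): "dobj"} captures the go/home pair in the example sentences above.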
Example-based User-friendly Query Language
The graph language described above is expressive enough to support many interesting queries, but it is also very tedious to specify query graphs G, especially for non-expert users. We propose a simple syntax that allows to easily specify a graph query G (constrained nodes connected by labeled edges) using a textual query q that takes the form of an example sentence and lightweight markup. Let s = w_1, ..., w_n be a proper English sentence. Let D be its dependency graph, with nodes w_i and labeled edges (w_i, w_j, ℓ). A corresponding textual query q takes the form q = q_1, ..., q_n, where each q_i is either a word q_i = w_i, or a marked word q_i = m(w_i). A marking of a word takes the form :word (unnamed capture), name:word (named capture), or name:[constraints]word, :[constraints]word. Consider the query: John w:wanted to v:[tag=VB]go h:[word=home]home, corresponding to the above graph query. The marked words are: q_2 = w:wanted (unconstrained, name:w); q_4 = v:[tag=VB]go (cnstr:tag=VB, name:v); q_5 = h:[word=home]home (cnstr:word=home, name:h). Each of these corresponds to a node v_qi in the query graph above.
7 Indeed, we currently do not even expose a textual representation of the graph.
8 Currently supported properties are word-form (word), lemma (lemma), pos-tag (tag) or entity type (entity). Additional types can be easily added, provided that we have suitable linguistic annotators for them.
Let m be the set of marked query words, and m+ be a minimal connected subgraph of D that includes all the words in m. When translating q to G, each marked word w_i ∈ m is translated to a named query graph node v_qi with the appropriate restriction. The additional words w_j ∈ m+ \ m are translated to unrestricted, unnamed nodes v_qj. We add a query graph edge (v_qi, v_qj, ℓ) for each pair in V for which (w_i, w_j, ℓ) ∈ D.
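As a rough illustration of this construction (our own sketch, not the system's code; networkx is used as a stand-in dependency-graph container, and the union of pairwise shortest paths is only an approximation of a minimal connected subgraph such as a Steiner tree):
import networkx as nx

def build_query_graph(dep_edges, marked):
    # dep_edges: iterable of (head_index, dependent_index, label) over the sentence
    # marked: {word_index: constraints} for the user-marked words
    dep_edges = list(dep_edges)
    D = nx.Graph()
    for h, d, lbl in dep_edges:
        D.add_edge(h, d, label=lbl)
    m_plus = set(marked)
    nodes = list(marked)
    for i in range(len(nodes) - 1):
        # grow m+ with the words on a path between consecutive marked words
        m_plus.update(nx.shortest_path(D, nodes[i], nodes[i + 1]))
    query_nodes = {w: marked.get(w, {}) for w in m_plus}   # unmarked words become unconstrained nodes
    query_edges = {(h, d): lbl for h, d, lbl in dep_edges if h in m_plus and d in m_plus}
    return query_nodes, query_edges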
Further query simplifications. Consider the marked word h:[word=home]home. The constraint is redundant with the word. In such cases we allow the user to drop the value, which is then taken from the corresponding property of the query word. This allows us to replace the query:
John w:wanted to v:[tag=VB]go h:[word=home]home
with:
John w:wanted to v:[tag]go h:[word]home
This further drives the "by example" agenda, as the user does not need to know what the lemma, entity-type or POS-tag of a word are in order to specify them as a constraint. Full property names can be replaced with their shorthands w,l,t,e:
John w:wanted to v:[t]go h:[w]home
Finally, capture names can be omitted, in which case an automatic name is generated based on the corresponding word:
John :wanted to :[t]go :[w]home
Anchors. In some cases we want to add a node to the graph, without an explicit capture. In such cases we can use the anchor $ ($John). These are interpreted as having a default constraint of [w], which can be overridden by providing an alternative constraint ($[e]John), or an empty one ($[]John).
Expansions When matching a query against a sentence the graph nodes bind to sentence words. Sometimes, we may want the match to be expanded to a larger span of the sentence. For example, when matching a word which is part of a entity, we often wish to capture the entire entity rather than the word. This is achieved by prefixing the term with the "expansion diamond" . The default behavior is to expand the match from the current word to the named entity boundary or NP-chunk that surrounds it, if it exists. We are currently investigating the option of providing additional expansion strategies.
Summary To summarize the query language from the point of view of the user: the user starts with a sentence w 1 , ..., w n , and marks some of the words for inclusion in the query graph. For each marked word, the user may specify a name, and optional constraints. The user query is then translated to a graph query as described above. The results list highlights the words corresponding to the marked query words. The user can choose for the results to highlight entire entities rather than single words.
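The per-token markup described above is simple enough to decode with a regular expression. The following sketch is our own illustration of how a single marked token of the form name:[constraints]word could be parsed, including the property shorthands and the convention of copying a dropped value from the word itself; it is not the system's parser.
import re

SHORTHANDS = {"w": "word", "l": "lemma", "t": "tag", "e": "entity"}
MARK = re.compile(r"^(?P<name>\w*):(?:\[(?P<constraints>[^\]]*)\])?(?P<word>.+)$")

def parse_marked_word(token):
    m = MARK.match(token)
    if not m:                       # plain word or $anchor: no capture here
        return None
    constraints = {}
    for part in filter(None, (m.group("constraints") or "").split("&")):
        key, _, value = part.partition("=")
        key = SHORTHANDS.get(key, key)
        constraints[key] = value or m.group("word")   # empty value copied from the query word
    return {"name": m.group("name") or None, "word": m.group("word"), "constraints": constraints}

# e.g. parse_marked_word("h:[word]home") -> {'name': 'h', 'word': 'home', 'constraints': {'word': 'home'}}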
Interactive Pattern Authoring
An important aspect of the system is its interactivity. Users enter queries by writing a sentence and adding markup on some words, and can then refine them following feedback from the environment, as we demonstrate with a walk-through example.
A user interested in people who obtained degrees from higher education institutions may issue the following query:
subj:John obtained his d:[w]degree from inst:Harvard
Here, the person in the "subj" capture and the institution in the "inst" capture are placeholders for items to be captured, so the user uses generic names and leaves them unconstrained. The "degree" ("d") capture should match exactly, as the user specified the "w" constraint (exact word match). When pressing Enter, the user is then shown the resulting query-graph and a result list. The user can then refine their queries based on either the query graph, the result list, or both. For the above query, the graph is:
Note that the query graph associates each graph node with the query word that triggered it. The word "obtained" resulted in a graph node even though it was not marked by the user as a capture. The user makes note to themselves to go back to this word later. The user also notices that the word "from" is not part of the query.
Looking at the result list, things look weird:
Maybe this is because the word from is not in the graph? Indeed, adding a non-capturing exact-word anchor on "from" solves this issue. These are the kind of results the user expected, but now they are curious about degrees obtained by females, and their representation in the Wikipedia corpus. Adding the pronoun to the query, the user then issues the following two queries, saving the result-sets from each one as a CSV for further comparative analysis. Our user now worries that they may be missing some results by focusing on the word degree. Maybe other things can be obtained from a university? The user then sets an exact-word constraint on "Harvard", adds a lemma constraint to "obtain" and clears the constraint from "degree":
subj:[e]John :[l]obtained his d:degree $from inst:[w]Harvard
Browsing the results, the d capture includes words such as "BA, PhD, MBA, certificate". But the result list is rather short, suggesting that either Harvard or obtain are too restrictive. The user seeks to expand the "obtain" node's vocabulary, adding back the exact word constraint on "degree" while removing the one from "obtain":
subj:[e]John :[]obtained his d:[w]degree $from inst:[w]Harvard
Looking at the result list in the o capture, the user chooses the lemmas "receive, complete, earn, obtain, get", adds them to the o constraint, and removes the degree constraint.
subj:[e]John o:[l=receive|complete|earn|obtain|get]obtained his d:degree $from inst:Harvard
The returned result-set is now much longer, and we select additional terms for the degree slot and remove the institution word constraint, resulting in the final query:
subj:[e]John o:[l=receive|complete|earn|obtain|get]obtained his d:[w=degree|MA|BA|MBA|doctorate|masters|PhD]degree $from inst:Harvard
The result is a list of person names earning degrees from institutions, and the entire list can be downloaded as a tab-separated file which includes the named captures as well as the source sentences (over Wikipedia, this list has 6197 rows).9
The query can also be further refined to capture which degree was obtained, e.g.:
subj:[e]John o:[l=...]obtained his kind:law d:[w=...]degree $from inst:Harvard
capturing under kind words like law, chemistry, engineering and DLitt, but also bachelors, masters and graduate. This concludes our walk-through.
9 The list can be even more comprehensive had we selected additional degree words and obtain words, and considered also additional re-phrasings.
Additional Query Examples
To whet the reader's appetite, here are a sample of additional queries, showing different potential use-cases.
Over wikipedia:
p:[e]Sam $[l=win|receive]won an $Oscar.
p:[e]Sam $[l=win|receive]won an $Oscar $for thing:something
$fish $such $as fish:salmon
hero:[t]Spiderman $is a $superhero
I like kind:coconut $oil
kind:coconut $oil is $used for purpose:eating
Over a pubmed corpus, annotated with the SciSpacy (Neumann et al., 2019) pipeline:
x:[e]aspirin $inhibits y:thing
a $combination of d1:[e]aspirin and d2:[e]alcohol $:[l]causes t:thing
patients:[t]rats were $injected $with what:drugs
Implementation Details
The indexing is handled by Lucene. 10 We currently use Odinson (Valenzuela-Escárcega et al., 2020), 11 an open-source Lucene-based query engine developed at Lum.ai, as a successor of Odin (Valenzuela-Escárcega et al., 2015), that allows to index syntactic graphs and issue efficient path queries on them. We translate our queries into an Odinson path query that corresponds to a longest path in our query graph. We then iterate over the returned Odinson matches and verify the constraints that were not on the path. Conceptually, the Odinson system works by first using Lucene's reverse-index for retrieving sentences for which there is a token matching each of the specified token-constraints, and then verifying the syntactic between-token constraints. To improve the Lucene-query selectivity, tokens are indexed with incoming and outgoing syntactic edge label information, which is incorporated as additional token-constraints to the Lucene engine. The system easily supports millions of sentences, returning results at interactive speeds.
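Schematically, this two-stage strategy can be sketched as follows. The code is our own illustration; run_path_query, bindings and has_edge are invented placeholders rather than Odinson's actual API, and the longest path through the query graph is assumed to be computed beforehand.
def search(query_nodes, query_edges, path, index):
    # path: ordered list of query-node names covering a longest path in the query graph
    # query_edges: {(node_a, node_b): dependency label}
    on_path = set(zip(path, path[1:])) | set(zip(path[1:], path))
    remaining = {e: lbl for e, lbl in query_edges.items() if e not in on_path}
    for candidate in index.run_path_query(path, query_nodes):      # hypothetical index call
        bindings = candidate.bindings                               # query node -> sentence token
        if all(candidate.has_edge(bindings[a], bindings[b], lbl)   # verify off-path constraints
               for (a, b), lbl in remaining.items()):
            yield candidate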
Conclusions
We introduce a simple query language that allows to pose complex syntax-based queries, and obtain results in an interactive speed.
A search interface over Wikipedia sentences is available at https://allenai.github.io/spike/. We intend to release the code as open source, as well as providing hosted open access to a PubMed-based corpus.
Figure 1: Syntactic Search System.
4 https://nlp.stanford.edu/software/tregex.shtml
5 http://bionlp-www.utu.fi/dep_search/
10 https://lucene.apache.org
11 https://github.com/lum-ai/odinson/
Acknowledgments
We thank the team at LUM.ai and the University of Arizona, in particular Mihai Surdeanu, Marco Valenzuela-Escárcega, Gus Hahn-Powell and Dane Bell, for fruitful discussion and their work on the Odinson system. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
Alan Akbik, Oresti Konomi, and Michail Melnikov. 2013. Propminer: A workflow for interactive information extraction and exploration using dependency trees. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 157-162, Sofia, Bulgaria. Association for Computational Linguistics.
Alan Akbik, Thilo Michael, and Christoph Boden. 2014. Exploratory relation extraction in large text corpora. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2087-2096.
Liesbeth Augustinus, Vincent Vandeghinste, and Frank Van Eynde. 2012. Example-based treebank querying. In LREC.
Jeen Broekstra and Arjohn Kampman. 2003. Serql: A second generation rdf query language. In Proc. SWAD-Europe Workshop on Semantic Web Storage and Retrieval, pages 13-14.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535-1545. Association for Computational Linguistics.
Sumukh Ghodke and Steven Bird. 2012. Fangorn: A system for querying very large treebanks. In Proceedings of COLING 2012: Demonstration Papers, pages 175-182, Mumbai, India. The COLING 2012 Organizing Committee.
Wolfgang Lezius, Hannes Biesinger, and Ciprian-Virgil Gerstenberger. 2002. Tigersearch manual.
Juhani Luotolahti, Jenna Kanerva, and Filip Ginter. 2017. Dep search: Efficient search tool for large dependency parsebanks. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 255-258, Gothenburg, Sweden. Association for Computational Linguistics.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.
Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing.
Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.
Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinková, Dan Flickinger, Jan Hajič, and Zdeňka Urešová. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915-926, Denver, Colorado. Association for Computational Linguistics.
Philip Resnik and Aaron Elkiss. 2005. The Linguist's Search Engine: An overview. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 33-36, Ann Arbor, Michigan. Association for Computational Linguistics.
Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English universal dependencies: An improved representation for natural language understanding tasks. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2371-2378.
Marco A. Valenzuela-Escárcega, Özgün Babur, Gus Hahn-Powell, Dane Bell, Thomas Hicks, Enrique Noriega-Atala, Xia Wang, Mihai Surdeanu, Emek Demir, and Clayton T. Morrison. 2018. Large-scale automated machine reading discovers new cancer-driving mechanisms. Database, 2018.
Marco A. Valenzuela-Escárcega, Gus Hahn-Powell, and Dane Bell. 2020. Odinson: A fast rule-based information extraction framework. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Marseille, France. European Language Resources Association (ELRA).
Marco A. Valenzuela-Escárcega, Gus Hahn-Powell, Mihai Surdeanu, and Thomas Hicks. 2015. A domain-independent rule-based framework for event extraction. In Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 127-132.
| [
"https://github.com/lum-ai/odinson/"
] |
[
"IMPROVING THE OUT-OF-DISTRIBUTION GENERALIZATION CAPABILITY OF LANGUAGE MODELS: COUNTERFACTUALLY-AUGMENTED DATA IS NOT ENOUGH",
"IMPROVING THE OUT-OF-DISTRIBUTION GENERALIZATION CAPABILITY OF LANGUAGE MODELS: COUNTERFACTUALLY-AUGMENTED DATA IS NOT ENOUGH"
] | [
"Caoyun Fan \nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina\n",
"Wenqing Chen \nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina\n\nSchool of Software Engineering\nSun Yat-sen University\nChina\n",
"Jidong Tian \nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina\n",
"Yitian Li \nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina\n",
"Hao He ",
"Yaohui Jin \nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina\n"
] | [
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina",
"School of Software Engineering\nSun Yat-sen University\nChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina",
"MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\nChina"
] | [] | Counterfactually-Augmented Data (CAD) has the potential to improve language models' Out-Of-Distribution (OOD) generalization capability, as CAD induces language models to exploit causal features and exclude spurious correlations. However, the empirical results of OOD generalization on CAD are not as efficient as expected. In this paper, we attribute the inefficiency to Myopia Phenomenon caused by CAD: language models only focus on causal features that are edited in the augmentation and exclude other non-edited causal features. As a result, the potential of CAD is not fully exploited. Based on the structural properties of CAD, we design two additional constraints to help language models extract more complete causal features contained in CAD, thus improving the OOD generalization capability. We evaluate our method on two tasks: Sentiment Analysis and Natural Language Inference, and the experimental results demonstrate that our method could unlock CAD's potential and improve language models' OOD generalization capability. | 10.1109/icassp49357.2023.10095209 | [
"https://export.arxiv.org/pdf/2302.09345v1.pdf"
] | 257,038,706 | 2302.09345 | 466d8dc7a9995f806fc6fa98e4d3611cfe3a2265 |
IMPROVING THE OUT-OF-DISTRIBUTION GENERALIZATION CAPABILITY OF LANGUAGE MODELS: COUNTERFACTUALLY-AUGMENTED DATA IS NOT ENOUGH
Caoyun Fan
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
China
Wenqing Chen
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
China
School of Software Engineering
Sun Yat-sen University
China
Jidong Tian
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
China
Yitian Li
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
China
Hao He
Yaohui Jin
MoE Key Lab of Artificial Intelligence
AI Institute
Shanghai Jiao Tong University
China
IMPROVING THE OUT-OF-DISTRIBUTION GENERALIZATION CAPABILITY OF LANGUAGE MODELS: COUNTERFACTUALLY-AUGMENTED DATA IS NOT ENOUGH
Index Terms: Counterfactually-Augmented Data, Out-Of-Distribution Generalization, Language Models
Counterfactually-Augmented Data (CAD) has the potential to improve language models' Out-Of-Distribution (OOD) generalization capability, as CAD induces language models to exploit causal features and exclude spurious correlations. However, the empirical results of OOD generalization on CAD are not as efficient as expected. In this paper, we attribute the inefficiency to Myopia Phenomenon caused by CAD: language models only focus on causal features that are edited in the augmentation and exclude other non-edited causal features. As a result, the potential of CAD is not fully exploited. Based on the structural properties of CAD, we design two additional constraints to help language models extract more complete causal features contained in CAD, thus improving the OOD generalization capability. We evaluate our method on two tasks: Sentiment Analysis and Natural Language Inference, and the experimental results demonstrate that our method could unlock CAD's potential and improve language models' OOD generalization capability.
INTRODUCTION
Despite the remarkable performance of language models in Natural Language Processing (NLP) [1,2], the Out-Of-Distribution (OOD) generalization capability of language models is often disappointing [3,4]. Many studies [5,6] have pointed out that such limited generalization capability partly arises from the language models' exploitation of spurious correlations [7,8,9,10] in the dataset. Specifically, the language models tend to exploit dataset-specific correlation bias [5,11] rather than the intrinsic properties of tasks to make predictions during the training process, while the spurious correlations can not be generalized to OOD data.
Figure 1: (a) An example of counterfactual sentence pairs. Original: Nolan is an excellent film director. (Positive) Edited: Nolan is a terrible film director. (Negative) We expect the language model to exploit the causal features (in gray) and exclude the possible spurious correlations (e.g., Nolan in the sentence). (b) Original Dataset. (c) CAD. Counterfactual augmentation (Fig. 1(a)) changes the data distribution of the dataset (from Fig. 1(b) to Fig. 1(c)), which helps the model Φ to exploit causal features h_c and exclude correlated features h_r.
To solve the problem of spurious correlations, a recent promising direction is Counterfactually-Augmented Data (CAD) [12,13,14]: minimally editing a sentence to flip the corresponding label Y, where the edited part is considered to be the intrinsic property of the task and to have a causal effect on the label (Fig. 1(a)). Unlike the Independent Identically Distributed (IID) principle of most data augmentation methods, CAD aims to change the data distribution of the dataset so that the language models can alleviate reliance on dataset-specific bias and exclude spurious correlations.
Under the ideal conditions assumed by [6] 1 , CAD keeps correlated features h r in the counterfactual sentence pairs constant while the causal features h c change. Therefore, the classifier Φ can make predictions based on causal features and then exclude the interference of spurious correlations as:
Φ(h_c, h_r) = Y,    Φ(h_c*, h_r) = Y*        (1)
where h * c and Y * are the causal features and the label of the edited sentence, respectively. Intuitively, the classifier Φ no longer focuses on h r because different labels correspond to the same h r , as shown in Fig. 1(b) & 1(c). However, some experiments [15,16] have demonstrated that CAD is not efficient in improving the generalization capability of language models, especially in more complex tasks. This is not in line with our expectations for CAD.
In this work, we attribute the inefficiency of CAD in generalization to the CAD-imposed myopia phenomenon: language models focus only on causal features edited by counterfactual augmentation, which means correlated features along with other non-edited causal features are excluded. However, all causal features are beneficial for OOD generalization [17]. To Extract more complete Causal Features and unlock the potential of CAD for language models, we design the ECF algorithm: introducing additional constraints in the training process based on the structural properties of CAD. Specifically, we extract invariant causal features over both distributions of CAD and the original dataset by the Invariant Risk Minimization [18] method (dataset level) and constrain the correlated feature similarity of counterfactual sentence pairs (sentence level). Through extensive experiments across multiple language models and NLP tasks, we conclude that the proposed ECF algorithm could help language models to extract more complete causal features, and then improve the OOD generalization capability in multiple NLP tasks.
MYOPIA PHENOMENON IN CAD
As mentioned before, the essence of CAD is to change the data distribution through data augmentation, thereby reducing the dataset-specific bias implied in the data distribution. Intuitively, by comparing the differences in counterfactual sentence pairs, language models could capture the features that have a causal effect on the labels. However, the results of counterfactual augmentation are diverse for a particular sentence, as illustrated in Fig. 2. Specifically, the causal components and the perturbation types [6] (e.g., negation, quantifier, lexical, delete) that can flip labels are diverse, so the different counterfactual sentence can be obtained by making a specific perturbation for a particular causal component, while the other causal components remain unchanged.
Therefore, compared to Eq. 1, a more reasonable assumption is that only part of h_c in the counterfactual sentence pairs would change with the counterfactual augmentation, as:
\Phi(h_e, h_u, h_r) = Y, \qquad \Phi(h_e^*, h_u, h_r) = Y^* \tag{2}
where h_c is divided into edited features h_e that change with augmentation and non-edited features h_u that do not change. This assumption is empirically convincing because of the analysis and experiments in [6,12]. Similar to the analysis of Eq. 1, Eq. 2 gives us an important insight: language models trained on original data and on CAD focus on different features in the sentence. On the one hand, CAD eliminates the interference of correlated features; on the other hand, language models inevitably ignore non-edited causal features. In this paper, we refer to this as the Myopia Phenomenon.
PROPOSED METHOD
To solve the myopia phenomenon and extract more complete causal features, we propose two insights on the structural properties of CAD at the dataset level and at the sentence level, and design additional constraints based on these insights, to further exploit the generalization potential of CAD.
Dataset-Level Constraint
Insight: the data distribution of the original dataset can alleviate the Myopia Phenomenon of CAD.
Due to the change in data distribution, the features that language models focus on are different: models trained with CAD only focus on the edited causal features h_e (Myopia Phenomenon), while models trained with the original dataset confuse h_c and h_r (but show no Myopia Phenomenon). Different data distributions lead to different problems, which indicates that the original data distribution carries information that is missing in CAD. Therefore, there are potential complementary effects of the original dataset and CAD on causal feature extraction.
[Table 1: Accuracy of different language models and datasets in SA. The best performance is in bold. CAD_h and CAD_a represent manually annotated CAD and automatically generated CAD, respectively.]
Inspired by [18], we adopt the Invariant Risk Minimization (IRM) method to extract more complete causal features in CAD. The role of IRM is to estimate invariant causal features from multiple training environments. As mentioned before, counterfactual augmentation does not follow the IID principle, which allows us to consider the original dataset and CAD as two different training environments E_tr = {e_ori, e_CAD}, and then adopt the IRM method to fuse the advantages of both environments. Specifically, to induce the language model M to learn invariant causal features across environments, the additional constraint L_IRM is designed as:
L_{IRM} = \sum_{e \in E_{tr}} \left\| \nabla_{\omega \mid \omega = 1.0} \, R^{e}(\omega \cdot M) \right\|^{2} \tag{3}
where R^e(·) is the risk [18] under environment e, and ω = 1.0 is a fixed scalar 'dummy' classifier. The essence of L_IRM is a gradient-norm penalty that measures the optimality of the 'dummy' classifier in each environment, in order to find invariant causal features that match all environments.
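For illustration, a minimal PyTorch-style sketch of this dataset-level constraint is given below. It assumes a generic sentence `encoder` (e.g., a BERT-style model exposing a pooled output) and a linear `classifier`, and uses the standard IRMv1-style scalar 'dummy' classifier trick; all names are illustrative and not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # Gradient-norm penalty of the risk w.r.t. a fixed scalar "dummy"
    # classifier w = 1.0, as in Eq. 3 (following the IRM formulation of [18]).
    dummy_w = torch.tensor(1.0, requires_grad=True, device=logits.device)
    risk = F.cross_entropy(logits * dummy_w, labels)
    grad = torch.autograd.grad(risk, [dummy_w], create_graph=True)[0]
    return (grad ** 2).sum()

def dataset_level_constraint(encoder, classifier, batch_orig, batch_cad):
    # Treat the original data and CAD as two training environments and
    # sum the per-environment penalties.
    penalty = 0.0
    for inputs, labels in (batch_orig, batch_cad):
        feats = encoder(**inputs).pooler_output   # sentence features (illustrative)
        logits = classifier(feats)
        penalty = penalty + irm_penalty(logits, labels)
    return penalty
```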
Sentence-Level Constraint
[Table 2: Accuracy of different language models and datasets in NLI. The best performance is in bold.]
Insight: the correlated features h_r of counterfactual sentence pairs are not guaranteed to be aligned. In our assumptions, the correlated features h_r of counterfactual sentence pairs are similar, because the augmentation operation only affects part of h_c. However, this property is not guaranteed for language models trained directly on CAD, and this potential dissimilarity gives language models the convenience to utilize h_r. Therefore, it is reasonable to design an explicit constraint on h_r for counterfactual sentence pairs. However, h_r and h_c are hard to decouple inside language models, so a sensible proxy for h_r is needed. Noting that h_r has little effect on the prediction in CAD, we construct a proxy for h_r using the mechanism of the feature classifier. Most feature classifiers are fully-connected layers, where each row of the weight matrix can be interpreted as a label vector h_Y [19], and the label probability can be obtained by the dot product of the sentence vector h and each label vector h_Y as:
p(y_k) = \frac{\exp(h_Y^{k} \cdot h)}{\sum_{i=1}^{N} \exp(h_Y^{i} \cdot h)} \tag{4}
In this way, h can be decomposed along h_Y, where the component parallel to h_Y determines the prediction and the orthogonal component h_{⊥Y} has no effect on the prediction. The commonality between h_{⊥Y} and h_r makes h_{⊥Y} an ideal proxy for h_r. Specifically, for a counterfactual sentence feature pair (h, h^*), we design L_OCD to penalize their Orthogonal Component Distance as:
L_{OCD} = \left\| h_{\perp Y} - h^{*}_{\perp Y^{*}} \right\|^{2} \tag{5}
This is a positive feedback process: even if the classifier initially has large estimation errors, it will gradually become accurate with the help of the prediction loss and L_OCD.
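A rough sketch of this sentence-level constraint (again not the authors' code) is given below; `W` is the weight matrix of the final fully-connected classifier, whose rows play the role of the label vectors h_Y.

```python
import torch

def orthogonal_component(h, label_vec):
    # Remove the projection of h onto the label vector; the remainder is the
    # component that does not influence the corresponding label's logit.
    label_dir = label_vec / label_vec.norm(dim=-1, keepdim=True)
    parallel = (h * label_dir).sum(-1, keepdim=True) * label_dir
    return h - parallel

def ocd_loss(h, h_star, y, y_star, W):
    # Eq. 5: align the orthogonal components of a counterfactual feature pair
    # with respect to their respective labels.
    h_orth = orthogonal_component(h, W[y])
    h_star_orth = orthogonal_component(h_star, W[y_star])
    return ((h_orth - h_star_orth) ** 2).sum(-1).mean()
```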
Training Process
In addition to the original prediction loss L_P, the proposed ECF algorithm combines the dataset-level constraint L_IRM and the sentence-level constraint L_OCD as:
L = L_P + \alpha \cdot L_{IRM} + \beta \cdot L_{OCD} \tag{6}
where α and β are weighting coefficients that balance the language model's in-distribution predictive power against the additional constraints introduced for OOD generalization.
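A hedged sketch of one training step combining the three terms of Eq. 6, reusing the helper functions sketched above, might look as follows; the default α = β = 0.1 mirrors the setting reported below for the pre-trained backbones, and the batches are assumed to be index-aligned counterfactual pairs.

```python
import torch
import torch.nn.functional as F

def ecf_training_step(encoder, classifier, batch_orig, batch_cad,
                      optimizer, alpha=0.1, beta=0.1):
    # batch_orig and batch_cad: (inputs, labels) pairs where each CAD example
    # is assumed to be the counterfactual of the original at the same index.
    feats, labels, loss_p = [], [], 0.0
    for inputs, y in (batch_orig, batch_cad):
        h = encoder(**inputs).pooler_output        # illustrative encoder output
        loss_p = loss_p + F.cross_entropy(classifier(h), y)
        feats.append(h)
        labels.append(y)

    # Dataset-level (Eq. 3) and sentence-level (Eq. 5) constraints; the IRM
    # helper recomputes the forward pass here purely for clarity.
    loss_irm = dataset_level_constraint(encoder, classifier, batch_orig, batch_cad)
    loss_ocd = ocd_loss(feats[0], feats[1], labels[0], labels[1], classifier.weight)

    loss = loss_p + alpha * loss_irm + beta * loss_ocd   # Eq. 6
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```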
EXPERIMENTS
Datasets
We conducted experiments on two tasks: Sentiment Analysis (SA) and Natural Language Inference (NLI). Sentiment Analysis: The seed dataset for SA was the IMDb dataset [20]. [12] collected a subset of IMDb as the seed dataset and manually annotated the corresponding counterfactual sentences to construct CAD_h, while [14,13] utilized WordNet [21] to automatically generate counterfactual sentences and constructed CAD_a. We evaluated each language model's OOD generalization capability on three OOD datasets: SST-2 [22], Amazon reviews [23], and Yelp reviews [24]. Natural Language Inference: [12] constructed CAD_h by manually editing a seed dataset drawn from SNLI [25]. Because the NLI task is more complex, there is little research on the automatic generation of counterfactual sentences for it. We treated MNLI (split into matched and mismatched parts) [26] as our OOD dataset for evaluation.
Implementation Details
We implemented LSTM [27] and the pre-trained models BERT [1] and RoBERTa [2] as our backbones, and selected the best checkpoint on the training set for testing. For LSTM, the word embedding dimension was set to 300, the batch size to 32, and the learning rate of the Adam optimizer to 1e-3; we set α = 1.6 and β = 0.1 and trained each model for 100 epochs on the SA/NLI tasks. For the pre-trained models, we used the Hugging Face implementation for fine-tuning. The batch size was set to 8/5 for the SA/NLI tasks, respectively, and the learning rate of the Adam optimizer to 1e-5; we set α = 0.1 and β = 0.1 and trained each model for 10 epochs.
Main Results
Results on SA: The results are presented in Table 1, where the ECF algorithm outperforms all compared backbones in terms of average accuracy on the OOD datasets. Specifically, CAD_h was more effective for language models' generalization, and the ECF algorithm improved the average OOD accuracy of LSTM and BERT by 3.6% and 1.1%, respectively. Language models trained on CAD_a were relatively weaker in generalization, and the ECF algorithm also helped LSTM and BERT improve their average accuracy by 1.0%/2.1% and 3.2%/1.0% on the two CAD_a variants, respectively. Results on NLI: The results are presented in Table 2. The ECF algorithm improved the average accuracy of LSTM on the OOD datasets by 1.5%. It also helped the pre-trained models, improving OOD generalization accuracy by 1.2% for BERT and 1.7% for RoBERTa.
Ablation Study
We investigated the independent impact of each constraint in our ECF algorithm. We chose BERT as the backbone, and the results are reported in Fig. 3. When we removed L_IRM and L_OCD, the performance decreased significantly. This illustrates that language models trained directly on CAD do not fully exploit the potential of CAD, and that the two additional constraints we propose further unlock it.
Data Efficiency
Counterfactual augmentation expanded the size of the seed dataset, which also contributed to OOD generalization. To demonstrate the validity of CAD and our ECF algorithm for language models, we compared the generalization capability of multiple language models trained with the same amount of unaugmented data, as shown in Fig. 4. The experimental results illustrated that CAD cannot always outperform the same amount of unaugmented data, while our ECF algorithm could steadily improve the generalization capability.
RELATED WORK
CAD has been an emerging technique in the NLP field since [12]; it aims to help language models extract causal features by changing the data distribution. Some studies [15,16] pointed out the inefficiency of CAD on the basis of empirical results, and [6] attempted to provide a theoretical explanation for this inefficiency. Previous approaches to improving CAD efficiency fall into two categories: (1) improving the generation quality of counterfactual texts [13,14]; (2) debiasing for specific biases (e.g., gender, race) in CAD [17]. To the best of our knowledge, our paper is the first attempt to improve the efficiency of CAD by designing additional constraints, which neither change the dataset nor require additional information, making ours the most general application scenario.
CONCLUSION
In this paper, we attributed the inefficiency of CAD to the Myopia Phenomenon caused by counterfactual augmentation operations, and designed dataset-level and sentence-level constraints based on the structural properties of CAD to help language models extract more complete causal features and thus unlock the potential of CAD.
Fig. 1. The motivation of CAD. (a) An example of a counterfactual sentence pair. Original: "Nolan is an excellent film director." (Positive); Edited: "Nolan is a terrible film director." (Negative). The language model is expected to exploit the causal features and exclude possible spurious correlations (e.g., "Nolan" in the sentence). (b) Original dataset; (c) CAD. Counterfactual augmentation of texts (Fig. 1(a)) changes the data distribution of the dataset (from Fig. 1(b) to Fig. 1(c)), which helps the model Φ to exploit causal features h_c and exclude correlated features h_r.
Fig. 2. An example of multiple counterfactual augmentation results in Natural Language Inference. Premise: "Three children are watching a movie." Hypothesis: "Children are watching a film." (Entailment). Editing 1: "Brothers are watching a film." (Neutral). Editing 2: "Children are watching Batman." (Neutral). Editing 3: "Children are watching TV." (Contradiction). Editing 4: "Six children are watching a film." (Contradiction). Editing different causal components in the Hypothesis can all serve the purpose of flipping the corresponding label.
Fig. 3. Ablation analysis of the two constraints on Sentiment Analysis. \ denotes the removal operation.
Fig. 4. Data efficiency analysis of CAD.
¹ Under the ideal conditions, each sentence consists of causal features h_c, whose joint distribution with the labels is invariant, and correlated features h_r, whose joint distribution can vary.
[1] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL, 2019.
[2] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov, "RoBERTa: A robustly optimized BERT pretraining approach," ArXiv, 2019.
[3] Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, and Tao Qin, "Generalizing to unseen domains: A survey on domain generalization," in IJCAI, 2021.
[4] Zheyan Shen, Jiashuo Liu, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, and Peng Cui, "Towards out-of-distribution generalization: A survey," ArXiv, 2021.
[5] Damien Teney, Ehsan Abbasnejad, and Anton van den Hengel, "Learning what makes a difference from counterfactual examples and gradient supervision," in ECCV, 2020.
[6] Nitish Joshi and He He, "An investigation of the (in)effectiveness of counterfactually augmented data," in ACL, 2022.
[7] Tom Michael Mitchell, "The need for biases in learning generalizations," Department of Computer Science, Laboratory for Computer Science Research, Rutgers Univ., 2007.
[8] Antonio Torralba and Alexei A. Efros, "Unbiased look at dataset bias," in CVPR, 2011.
[9] R. Thomas McCoy, Ellie Pavlick, and Tal Linzen, "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference," in ACL, 2019.
[10] Zhao Wang and Aron Culotta, "Identifying spurious correlations for robust text classification," in Findings of EMNLP, 2020.
[11] Diana F. Spears and Marie desJardins, "Evaluation and selection of biases in machine learning," Machine Learning, 2004.
[12] Divyansh Kaushik, Eduard H. Hovy, and Zachary Chase Lipton, "Learning the difference that makes a difference with counterfactually-augmented data," in ICLR, 2020.
[13] Zhao Wang and Aron Culotta, "Robustness to spurious correlations in text classification via automatically generated counterfactuals," in AAAI, 2021.
[14] Linyi Yang, Jiazheng Li, Pádraig Cunningham, Yue Zhang, Barry Smyth, and Ruihai Dong, "Exploring the efficacy of automatically generated counterfactuals for sentiment analysis," in ACL, 2021.
[15] William Huang, Haokun Liu, and Samuel R. Bowman, "Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data," in INSIGHTS, 2020.
[16] Daniel Khashabi, Tushar Khot, and Ashish Sabharwal, "More bang for your buck: Natural perturbation for robust question answering," in EMNLP, 2020.
[17] Ananth Balashankar, Xuezhi Wang, Ben Packer, Nithum Thain, Ed H. Chi, and Alex Beutel, "Can we improve model robustness through secondary attribute counterfactuals?," in EMNLP, 2021.
[18] Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski, "The risks of invariant risk minimization," in ICLR, 2021.
[19] Cunxiao Du, Zhaozheng Chen, Fuli Feng, Lei Zhu, Tian Gan, and Liqiang Nie, "Explicit interaction model towards text classification," in AAAI, 2019.
[20] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, A. Ng, and Christopher Potts, "Learning word vectors for sentiment analysis," in ACL, 2011.
[21] Christiane D. Fellbaum, "WordNet: an electronic lexical database," Language, 2000.
[22] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, A. Ng, and Christopher Potts, "Recursive deep models for semantic compositionality over a sentiment treebank," in EMNLP, 2013.
[23] Jianmo Ni, Jiacheng Li, and Julian McAuley, "Justifying recommendations using distantly-labeled reviews and fine-grained aspects," in EMNLP, 2019.
[24] Xiang Zhang, Junbo Jake Zhao, and Yann LeCun, "Character-level convolutional networks for text classification," in NIPS, 2015.
[25] Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning, "A large annotated corpus for learning natural language inference," in EMNLP, 2015.
[26] Adina Williams, Nikita Nangia, and Samuel R. Bowman, "A broad-coverage challenge corpus for sentence understanding through inference," in NAACL, 2018.
[27] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, 1997.
| [] |
[
"Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation",
"Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation"
] | [
"Xingdi Yuan \nNational Chung Hsing University ♠ INRIA\n\n",
"♣ \nNational Chung Hsing University ♠ INRIA\n\n",
"Tong Wang tong.wang@microsoft.com \nNational Chung Hsing University ♠ INRIA\n\n",
"♣ \nNational Chung Hsing University ♠ INRIA\n\n",
"Yen-Hsiang Wang \nNational Chung Hsing University ♠ INRIA\n\n",
"♢ \nNational Chung Hsing University ♠ INRIA\n\n",
"Emery Fine \nNational Chung Hsing University ♠ INRIA\n\n",
"Rania Abdelghani \nNational Chung Hsing University ♠ INRIA\n\n",
"Pauline Lucas \nNational Chung Hsing University ♠ INRIA\n\n",
"Hélène Sauzéon \nNational Chung Hsing University ♠ INRIA\n\n",
"Pierre-Yves Oudeyer \nNational Chung Hsing University ♠ INRIA\n\n",
"♠ \nNational Chung Hsing University ♠ INRIA\n\n",
"MontréalMicrosoft Research \nNational Chung Hsing University ♠ INRIA\n\n"
] | [
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n",
"National Chung Hsing University ♠ INRIA\n"
] | [] | Large Language Models (LLMs) have in recent years demonstrated impressive prowess in natural language generation. A common practice to improve generation diversity is to sample multiple outputs from the model. However, there lacks a simple and robust way of selecting the best output from these stochastic samples. As a case study framed in the context of question generation, we propose two prompt-based approaches to selecting highquality questions from a set of LLM-generated candidates. Our method works under the constraints of 1) a black-box (non-modifiable) question generation model and 2) lack of access to human-annotated references -both of which are realistic limitations for real-world deployment of LLMs. With automatic as well as human evaluations, we empirically demonstrate that our approach can effectively select questions of higher qualities than greedy generation. * Equal contribution. 1 We open-source all code and annotated data on github. | 10.48550/arxiv.2209.11000 | [
"https://export.arxiv.org/pdf/2209.11000v1.pdf"
] | 252,439,215 | 2209.11000 | b4fcd453c04dc5312ebb5a33f248c9fbd112cf87 |
Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
Xingdi Yuan
National Chung Hsing University ♠ INRIA
♣
National Chung Hsing University ♠ INRIA
Tong Wang tong.wang@microsoft.com
National Chung Hsing University ♠ INRIA
♣
National Chung Hsing University ♠ INRIA
Yen-Hsiang Wang
National Chung Hsing University ♠ INRIA
♢
National Chung Hsing University ♠ INRIA
Emery Fine
National Chung Hsing University ♠ INRIA
Rania Abdelghani
National Chung Hsing University ♠ INRIA
Pauline Lucas
National Chung Hsing University ♠ INRIA
Hélène Sauzéon
National Chung Hsing University ♠ INRIA
Pierre-Yves Oudeyer
National Chung Hsing University ♠ INRIA
♠
National Chung Hsing University ♠ INRIA
MontréalMicrosoft Research
National Chung Hsing University ♠ INRIA
Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation
1
Large Language Models (LLMs) have in recent years demonstrated impressive prowess in natural language generation. A common practice to improve generation diversity is to sample multiple outputs from the model. However, there is no simple and robust way of selecting the best output from these stochastic samples. As a case study framed in the context of question generation, we propose two prompt-based approaches to selecting high-quality questions from a set of LLM-generated candidates. Our method works under the constraints of 1) a black-box (non-modifiable) question generation model and 2) lack of access to human-annotated references, both of which are realistic limitations for real-world deployment of LLMs. With automatic as well as human evaluations, we empirically demonstrate that our approach can effectively select questions of higher quality than greedy generation. * Equal contribution. 1 We open-source all code and annotated data on github.
Introduction & Related Work
Large Language Models (LLMs) have recently gained tremendous popularity in the NLP community (Devlin et al., 2019;Liu et al., 2019;Bao et al., 2020;Brown et al., 2020). The ever-increasing size in both models and training data renders many traditional learning methods impractical/intractable. As a result, prompt-based learning has emerged as a new paradigm tailored specifically towards leveraging the power of LLMs (Radford et al., 2019;Petroni et al., 2019;Raffel et al., 2020;Brown et al., 2020;Schick and Schütze, 2021b;Gao et al., 2021;Liu et al., 2021). In the zero-shot setting (such as in this study), a data sample is first "verbalized" into an input prompt and a ground-truth response -both often in natural language forms. The prompt is then issued to a pre-trained LLM to obtain a predicted response, which can then be compared to the ground-truth for evaluation. This new technique has been successfully applied to many applications including text classification (Yin et al., 2019;Schick and Schütze, 2021a), QA (Jiang et al., 2021), natural language generation (Li and Liang, 2021) and NLG evaluation (Yuan et al., 2021).
Despite the impressive results on popular NLP benchmarks, however, the back-end LLMs are usually pre-trained with general-domain data, leading to sub-optimal performance in new domains for prompt-based learning. There are two major challenges in successful domain adaptation. Firstly, aside from the many known issues of LLMs (Webson and Pavlick, 2021;Min et al., 2022;Zhao et al., 2021;Lampinen et al., 2022), their sheer size and/or accessibility (e.g., served via API over the internet) makes it prohibitively expensive and impractical for domain adaptation. These limitations have inspired a recent line of work known as prompt editing/tuning (Gao et al., 2021;Li and Liang, 2021;Madaan et al., 2022). The general idea is to systematically study the correlation between prompt construction and the performance on a specific task. Prompt construction comes in a wide variety of flavours ranging from adapting real-valued prompt embeddings to the order/wording/etc. of few-shot in-context learning examples. Meanwhile, it also introduces a second challenge: prompt-tuning often relies on the availability of ground-truth labels of the data, which imposes much uncertainty in applications where labeled data are scarce.
Given the ubiquity of the aforementioned challenges, we focus our study on alleviating the constraints on both annotation availability and access to model parameters, and consequently making LLMs more accessible for deployment and use in real-world applications. We take a mainstream NLG task, namely question generation, as a case study (Du et al., 2017;Yuan et al., 2017;Du and Cardie, 2018;Pan et al., 2019;Liu et al., 2020;Pyatkin et al., 2021). In this task, a model is trained to generate a natural language question conditioned on a context and an answer, such that the generated question can be answered by the provided answer using the context as supporting evidence. Question generation is the cornerstone of many NLP applications including education (Kurdi et al., 2020;Abdelghani et al., 2022), automatic FAQ generation (Mass et al., 2020), information seeking (Qi et al., 2020), etc. In an educational setting, for example, a question generation system can generate demonstrations that inspire students' curiosity and thinking (teaching), or help assess students' proficiency on certain knowledge or skills (examining). These use cases would benefit greatly from reduced dependency on computing resources, data availability, and the expertise required for fine-tuning an LM.
To align with these real-world scenarios, our goal is to obtain better outputs from an inference-only LLM (i.e., treated as a "black-box", which is relatively more accessible, e.g., through online APIs). In particular, given the common practice of sampling multiple outputs for improved generation diversity, we propose a method that aims at selecting the best candidate based on multiple aspects of question quality in a zero-shot manner, notably without model adaptation or human annotations. Our method can be seen as a post-hoc selection process within a larger NLG pipeline, and is thus orthogonal and applicable to zero-shot and in-context learning methods (Rubin et al., 2021;Lu et al., 2022;Liu et al., 2022).
Problem Setting
Notations: Formally, we consider a dataset of context-answer pairs (c, a), both as strings. The task of question generation is to generate a question q that can be answered by a using c as supporting evidence. We use an off-the-shelf pre-trained LLM-based question generator in a zero-shot setting (prompt construction detailed in Appendix A). To simulate the black-box generator scenario, we refrain from any form of model tuning. We do, however, assume access to a set of output sequences stochastically sampled from the question generator. We thus ground our study to this application scenario by sampling k questions Q = {q_i : i = 1, ..., k}. For comparison as a baseline, we also denote q_g as the question generated with a greedy algorithm (i.e., generating the most probable token at each time step).
Our goal is to devise an algorithm S which selects the best candidate q_{i*} that maximizes some evaluation metric M : Q ↦ R, i.e., S(Q) = i* = argmax_i M(q_i). We use M_s, \underline{M}_s, and \overline{M}_s to denote the mean, min, and max of {M(q) : q ∈ Q}, respectively, and M_g for the greedy output M(q_g). Semantically, \underline{M}_s ≤ M_s ≤ \overline{M}_s is tautologically true, and a positive result on the design of S would translate to M(q_{S(Q)}) outperforming both M_s and M_g.
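As a sketch of this selection framing (all names here are illustrative), the selector simply applies a scoring function to each sampled candidate and returns the argmax:

```python
from typing import Callable, List

def select_best(questions: List[str], score_fn: Callable[[str], float]) -> str:
    # S(Q) = argmax_i M(q_i): return the candidate with the highest score.
    return max(questions, key=score_fn)
```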
Datasets: In this work, we adopt two question generation datasets with distinctive characteristics, namely SQuAD (Rajpurkar et al., 2016) and Fairytale QA (Xu et al., 2022). SQuAD was originally proposed as an extractive question answering (QA) dataset. In the question generation literature (Du and Cardie, 2018;Yuan et al., 2017;Bao et al., 2020), it has been used as a sentence-level question generation task, i.e., a context c is a single sentence that contains the corresponding answer a as a substring. Fairytale QA has also been used for both question answering and question generation. It features paragraph-level question generation (with c being one or more paragraphs), and the answer a is not necessarily a sub-string of c. Since we do not perform any form of model/prompt tuning, we use the testing splits of the datasets, which consist of 11,877 data points for SQuAD and 1,007 for Fairytale QA.
Model We leverage a pre-trained GPT-3 model (Brown et al., 2020) for both question generation and selection (detailed in §3). In all our experiments, we prompt the GPT-3 model in a 0-shot manner. Details on all our prompts are provided in Appendix A.
Evaluation Metrics
We use two quantitative methods to evaluate the selected question q' = q_{S(Q)}:
• Reference-based evaluation: Following prior works, we use BLEU-4 for SQuAD (Du and Cardie, 2018;Bao et al., 2020) and ROUGE-L for Fairytale QA (Xu et al., 2022). These metrics compare q' against the reference question q̂ (a.k.a. the "ground-truth" question in the existing literature).
• Human evaluation: we solicit human annotations on a subset of the data. We postulate that an over-all score given holistically to rate a question would be highly subjective and thus less likely to induce agreement among annotators. Accordingly, we decompose the quality of a question into seven dimensions², and ask human annotators to rate a question on each dimension followed by an overall rating of the question. We collect three annotations from different annotators for each data point. We provide details of the human study in Appendix B.
Method
In this section we propose three question selection methods. As described in §2, each method is used to score k sampled questions in Q and the candidate with the highest score is proposed as the final output.
n-gram similarity: We use the n-gram similarity between a question and its corresponding context to measure their relevance. This method reflects the intuitive assumption that a favorable question should be closely related to the information provided by the context. Specifically, we extract all unique n-grams³ s_n(c) from a given context c and s_n(q) from a question q. The n-gram similarity score is then defined as:
\mathrm{sim}_n = \frac{|s_n(c) \cap s_n(q)|}{|s_n(q)|} \tag{1}
where |s| indicates the size of set s.
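A minimal sketch of this score is shown below; whitespace tokenization and averaging the per-n scores over n = 1..5 are assumptions, since the paper does not specify the tokenizer or the aggregation.

```python
def ngram_set(text: str, n: int) -> set:
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_similarity(context: str, question: str, max_n: int = 5) -> float:
    # sim_n (Eq. 1) computed for n = 1..max_n and averaged.
    scores = []
    for n in range(1, max_n + 1):
        sq = ngram_set(question, n)
        if not sq:
            continue
        sc = ngram_set(context, n)
        scores.append(len(sc & sq) / len(sq))
    return sum(scores) / len(scores) if scores else 0.0
```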
Round-trip
Intuitively, the answer to a generated question should be semantically equivalent to the answer that was used to generate the question. Formally, a question generation model QG and a QA model (both with reasonable performance) should satisfy the following:
q' = QG(c, a); \quad a' = QA(c, q'); \quad a' = a \tag{2}
² Namely, grammatical correctness, offensiveness, clarity, relevance, importance, specificity, and answerability.
³ In all our experiments, n ranges from 1 to 5.
[Table (reference-based evaluation results; only partially recoverable from the extraction): columns SQuAD (BLEU-4) and Fairytale QA (ROUGE-L); prior works (models trained/fine-tuned on these datasets): (Du and Cardie, 2018) 0.152; (Zhang and Bansal, 2019) 0.184; UniLM Large (Bao et al., 2020) 0.228; UniLM v2 Base (Bao et al., 2020) 0.244; ERNIE-GEN Large (Xiao et al., 2021) 0.254; BART (Xu et al., 2022) value truncated; remaining rows not recoverable.]
This idea is closely related to cycle consistency in the existing literature on image generation (Zhu et al., 2017), machine translation (Artetxe et al., 2018), and QA (Alberti et al., 2019;Shah et al., 2019). Here, we use GPT-3 as an off-the-shelf QA model to obtain a' for each pair of c and q', resulting in k answers A = {a'_1, ..., a'_k} for the k sampled questions in Q. We then measure the similarity between each a'_i and the ground-truth answer a using F1 score for SQuAD and ROUGE-L for Fairytale QA (in accordance with the evaluation setup of the original papers for the two datasets). Finally, we select the question corresponding to the generated answer a'_{i*} that overlaps the most with a (i.e., that can be best answered by GPT-3). Prompts used in these experiments are detailed in Appendix A.
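A hedged sketch of this round-trip selection follows. `qa_model` stands in for the zero-shot GPT-3 QA call described in Appendix A, and the token-level F1 follows the standard SQuAD-style formulation; both are assumptions rather than the authors' exact code.

```python
from collections import Counter
from typing import Callable, List

def token_f1(pred: str, gold: str) -> float:
    # SQuAD-style token-overlap F1 between a predicted and a gold answer.
    p, g = pred.split(), gold.split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def round_trip_select(context: str, answer: str, questions: List[str],
                      qa_model: Callable[[str, str], str]) -> str:
    # Answer each candidate question with the QA model and keep the question
    # whose predicted answer overlaps most with the original answer a.
    return max(questions, key=lambda q: token_f1(qa_model(context, q), answer))
```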
Prompt-based Score We propose a two-step procedure ( Figure 1) for prompting GPT-3 to answer the same set of meta-questions (i.e., questions about the quality of a given question) used for human evaluation ( §2).
In step 1, given a context-question pair, GPT-3 is prompted to answer a meta-question as an open question (as opposed to choosing among a list of options) as well as to verbalize a reason for its answer. In step 2, GPT-3 is prompted to choose from a list of options representing the rating scale of the meta-question.
We empirically observe that without the first step, GPT-3's output tends to have a low-entropy distribution, i.e., it often chooses the same option for a given meta-question regardless of the different context-question pairs. In contrast, the model appears to be better primed wrt output diversity with the additional first step, which is in line with observations made in some existing studies (Nye et al., 2021;Wei et al., 2022).
Similar to human evaluation, we also prompt GPT-3 to generate an overall score for a question. We use overall prompt-based score (OPS) to denote this GPT-3-labeled score, and averaged prompt-based score (APS) to denote the average score over all individual meta-questions.
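The two-step procedure can be sketched as follows. The `complete` function is a placeholder for a text-completion call to the back-end LLM (e.g., GPT-3 through its API); the prompt wording paraphrases Figure 1 and is illustrative rather than the exact template from Appendix A.

```python
from typing import Callable

def prompt_based_rating(context: str, question: str, meta_question: str,
                        options: str, complete: Callable[[str], str]) -> str:
    # Step 1: ask the meta-question as an open question and elicit a reason.
    step1_prompt = (
        f"[context] {context}\n[question] {question}\n"
        f"[input1] {meta_question} Why?\n[output1]"
    )
    reasoning = complete(step1_prompt)

    # Step 2: condition on the verbalized reason and ask for a rating option.
    step2_prompt = (
        step1_prompt + " " + reasoning + "\n"
        f"[input2] Based on the above response, which one of the following "
        f"[options] best describes the question?\n[options] {options}\n[output2]"
    )
    return complete(step2_prompt)
```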
Results and Discussion
To measure the performance of a selection method ( §3), we use it to select one out of k questions stochastically sampled from GPT-3, and score the selection with the evaluation metrics outlined in §2. We set k = 5 for all our experiments. Additionally, we test the ensemble performance with multiple methods. To ensure comparability, we normalize the scores obtained from each selection method into the range between 0 and 1, and use their average score to perform question selection.
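For the ensembles, a simple sketch of the normalization and uniform averaging described here is given below; per-method min-max scaling is an assumption consistent with mapping each method's scores into [0, 1].

```python
from typing import Dict, List

def min_max_normalize(scores: List[float]) -> List[float]:
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def ensemble_select(questions: List[str],
                    method_scores: Dict[str, List[float]]) -> str:
    # Normalize each method's scores over the k candidates, average them with
    # uniform weights, and pick the candidate with the highest mean score.
    normalized = [min_max_normalize(v) for v in method_scores.values()]
    avg = [sum(col) / len(col) for col in zip(*normalized)]
    best_idx = max(range(len(questions)), key=lambda i: avg[i])
    return questions[best_idx]
```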
Reference-based evaluation
Reference-based evaluation uses automatic metrics that are applied to the entire test sets of SQuAD and Fairytale QA. We observe in Table 1 that on both datasets, all question selection methods outperform M_s, the average score over all five sampled questions, validating the effectiveness of the proposed methods. While all individual methods outperform the greedy generation baseline M_g on SQuAD, round-trip is the best performing one, outperforming M_g on both datasets. It can be further improved via ensemble with n-gram and/or prompt-based scores (using uniform weights).
Note that prior studies require a large amount of labeled data for model training/fine-tuning, while GPT-3 performs zero-shot inference. Despite this major difference in learning paradigm, most GPT-3-based models proposed here outperform previous results by significant margins on the SQuAD dataset; even the least performant samples \underline{M}_s (lower bound) achieve competitive results. For Fairytale QA, however, only the best samples \overline{M}_s (upper bound) outperform previous results (Xu et al., 2022), indicating margins for improvement on question selection strategies for future work.
Human Evaluation
Human evaluation consists of 16,800 annotations (collected from 87 annotators) evenly split across the two datasets (details in Appendix B). For question generation (among many language generation tasks), model outputs may exhibit linguistic diversity while maintaining semantic equivalence. It is thus highly problematic to evaluate such outputs against a single reference (i.e., "ground-truth" questions). Figure 2 empirically shows that the ground-truth (GT) questions provided in the datasets often fail to receive the highest human ratings, on many occasions scoring lower than stochastic samples from GPT-3 (M_s). Consequently, we strongly advocate for human evaluation, which we believe is highly effective in improving the generalizability of our results to real-world applications.
Another prominent observation is that n-gram and APS perform quite differently on the two datasets. On SQuAD, n-gram similarity outperforms the other individual methods, with further noticeable improvements via ensemble with round-trip. APS, on the other hand, does not work nearly as well, performing the worst for almost all meta-questions. In contrast, n-gram (particularly trigram) similarity shows the worst performance on Fairytale QA, while APS outperforms all other methods by a noticeable margin.
We posit that the reversed trend in comparing n-gram and APS can be explained by the distinct natures of the datasets. For SQuAD, the sentence-level contexts are relatively short and simple, with strictly extractive answers (i.e., the answers are sub-strings of the corresponding contexts). As a result, paraphrasing the context can be a rather effective question generation strategy, hence the stronger correlation between question quality and the c-q n-gram similarity. On the other hand, with multi-paragraph contexts and abstractive, open-ended answers, questions are more likely posed about abstract ideas rather than simple context paraphrasing. Consequently, n-gram similarity, which favors local context paraphrasing, can no longer serve as a good question selection strategy.
Limitations and Future Work
We acknowledge that our system has some limitations that warrant further investigation. For example, one needs to be mindful of the specific downstream applications of the proposed methods, both in terms of potentially large variance in out-of-distribution performance (e.g., divergent question generation, Abdelghani et al. 2022) and of mitigating harmful/toxic contents in educational applications (Bender et al., 2021).
We also acknowledge the prohibitively restrictive access to the GPT-3 model at the time of writing. We do believe that this constraint will relax over time; meanwhile, we hope that our proposal can shed light on research and applications with more accessible LLMs such as GPT-J (Wang and Komatsuzaki, 2021) and BLOOM (BigScience, 2022) for future work.
Conclusion
In this study, we investigate the practical problem of selecting the best output from multiple samples generated by an LLM. Using question generation as a case study, we propose two prompt-based approaches that select high-quality questions according to question quality from multiple perspectives. To alleviate real-world constraints on using large LMs such as computation resources and data availability, the proposed methods do not rely on model fine-tuning nor human annotation. Extensive experiments with both automatic and human evaluations evince the effectiveness of our approach on selecting high-quality questions from stochastic samples.
Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online.
Contents in Appendices:
• In Appendix A, we report all prompt templates we used in this work.
• In Appendix B, we provide details on the human study.
• In Appendix C, we provide the full set of our experiment results.
• In Appendix D, we report implementation details.
A Prompt Designs
We report an example of our prompt for question generation in Figure 3. We report an example of our prompt for QA (used in round-trip) in Figure 4.
We report an example of our prompt in obtaining prompt scores in Figure 1.
B Human Study
We randomly sample 50 documents from each of the two datasets, SQuAD and Fairytale QA. Each document corresponds to one ground-truth question and six questions generated by GPT-3 (five by stochastic sampling and one by greedy search). Each question is then rated by three human annotators wrt seven meta-questions and one overall rating, altogether constituting 50 × 2 × (1 + 5 + 1) × 3 × (7 + 1) = 16,800 annotations. There are in total 87 annotators involved in the annotation process; all annotators are English speakers, recruited from regions including Europe, the United States, and the United Kingdom. Each annotator on average performed 193 annotations and was paid on average $14.1 USD per hour.
We perform a basic spam filtering process on the raw annotations. We observe a 15.4% spam rate. All human scores reported in this paper are computed after spam removal.
We report the eight meta-questions we used for human annotation in Figure 5. The eight metaquestions correspond to columns in Figure 2. We collect three annotations from different annotators for every meta-question, we report the averaged human agreement rate in Table 2.
C Additional Results
In Table 3, we report the full experiment results for reference-based evaluation.
[Table 2: Averaged human agreements among three annotators. An agreement indicates that all three annotators selected the same option for a meta-question.]
[Table 3 residue (only partially recoverable): columns SQuAD (BLEU-4) and Fairytale QA (ROUGE-L); prior works (models trained/fine-tuned on these datasets): (Du and Cardie, 2018) 0.152; (Zhang and Bansal, 2019) 0.184; UniLM Large (Bao et al., 2020) 0.228; UniLM v2 Base (Bao et al., 2020) 0.244; ERNIE-GEN Large (Xiao et al., 2021) 0.254; BART (Xu et al., 2022) value truncated; remaining rows not recoverable.]
In Table 4, we report the full results for human evaluation on SQuAD.
In Table 5, we report the full results for human evaluation on Fairytale QA.
Story:
As soon as the lady had departed the fisher's son awoke, and the dark lad told him of her visit, and how he would never see her as long as he lived. At this the fisher's son felt the cold creeping up to his heart, yet he knew the fault had not been his that sleep had overtaken him. 'I will search the whole world through till I find her,' cried he, and the dark lad laughed as he heard him. But the fisher's son took no heed, and off he went, following the sun day after day, till his shoes were in holes and his feet were sore from the journey. Nought did he see but the birds that made their nests in the trees, not so much as a goat or a rabbit. On and on and on he went, till suddenly he came upon a little house, with a woman standing outside it.
Instruction: Read the above story, ask a question and answer it.
Question: GPT-3 FILLS IN THIS BLANK
Answer: search the whole world through till he found her
Figure 3: An example of prompting GPT-3 for question generation. We use the text before green as the prompt, and the text after green as the suffix. We refer readers to the GPT-3 documentation for more details about GPT-3's inserting completion mode.
[Document]: His cheeks were red with passion, and his eyes were bright, for he could not but notice that, now that she was safe at Orphir under her true love's protection, the Lady Morna's manner had grown cold and distant again, and he was beginning to lose faith in Snorro's charm.
Angry and disappointed, he had sought his mother's room to pour out his story of vexation to her.
He stopped short, however, when he saw the wonderful waistcoat lying on the table, all gold and silver and shining colours. It was like a fairy garment, and its beauty took his breath away.
[Question]: Why did Harold lose faith in Snorro's charm?
[Answer]: Harold lost faith in Snorro's charm because the Lady Morna's manner had grown cold and distant again.
D Implementation Details
In all experiments, we use the text-davinci-002 (175B parameters) variant of GPT-3. It is currently the most capable GPT-3 model variant. Compared to other variants, text-davinci-002's support for inserting completions can better facilitate our question generation tasks (as shown in Figure 3).
We use a temperature of 0.7 during the sampling process of question generation. In all other use cases (e.g., QA round-trip, prompt score), we use greedy generation (temperature is set to 0).
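As a small illustration of these decoding settings, the sketch below samples the candidate pool; `generate` is a stand-in for the actual GPT-3 API invocation with the insertion-mode prompt of Figure 3, and the parameter names mirror common completion APIs but are assumptions.

```python
from typing import Callable, List

def sample_candidates(prompt: str, suffix: str,
                      generate: Callable[..., str], k: int = 5) -> List[str]:
    # k stochastic samples at temperature 0.7 for diversity, plus one
    # greedy (temperature 0) baseline question q_g.
    sampled = [generate(prompt=prompt, suffix=suffix, temperature=0.7)
               for _ in range(k)]
    greedy = generate(prompt=prompt, suffix=suffix, temperature=0.0)
    return sampled + [greedy]
```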
1. Is the question grammatically correct? 1) It is grammatically incorrect 2) It has some grammatical issues 3) It is grammatically correct
2. Is the question offensive to people? 1) It is very offensive 2) It may be offensive 3) It is not at all offensive
3. Is the question clear? 1) It is not at all clear 2) It is mostly clear 3) It is very clear
4. Is the question related to the context of the attached document? 1) It is not at all related 2) It is somewhat related 3) It is closely related
5. Is the question asking about an important aspect of the context of the attached document? 1) Not at all important 2) It may be important 3) It is very important
6. Is the question asking about a specific piece of information in the attached document? 1) The question is very generic 2) The question is somewhat generic 3) The question is very specific
7. Can the question be answered using information in the attached document? 1) No, answering the question requires completely different information 2) The question can be partially answered using information from the document 3) The question can be perfectly answered using information from the document
8. What is your overall rating of the question generated based on the attached document? 1) The question is very bad 2) The question is okay 3) The question is very good
[Table 4: Human eval results (SQuAD). Abbreviations in the first row denote Grammatical correctness, Offensiveness, Clarity, Relevance, Importance, Specificity, Answerability, Averaged Human Rating (over all dimensions to the left), Overall Human Rating (an overall score given by annotators). Best and second best numbers (excluding baselines) are highlighted with boldface and underline, respectively.]
[Table 5: Human eval results (Fairytale QA). Abbreviations in the first row denote Grammatical correctness, Offensiveness, Clarity, Relevance, Importance, Specificity, Answerability, Averaged Human Rating (over all dimensions to the left), Overall Human Rating (an overall score given by annotators). Best and second best numbers (excluding baselines) are highlighted with boldface and underline, respectively.]
Figure 1: Prompting GPT-3 to rate a question's relevance. GPT-3 output is highlighted in green.
Figure 2: Human evaluation results, averaged over three annotators' scores, normalized per column. Left: SQuAD; right: Fairytale QA. Abbreviations on the x-axis denote Grammatical correctness, Offensiveness, Clarity, Relevance, Importance, Specificity, Answerability, Averaged Human Rating (over all dimensions to the left), Overall Human Rating (an overall score given by annotators). Exact scores are provided in Appendix C.
Figure 4: An example of prompting GPT-3 for QA. GPT-3 output is highlighted in green.
Figure 5: Meta-questions used for human annotation.
[context] Old Dragonbeard must have been a master swordsman standing midway between those of the first and of the second order. Molo, however, of whom this story tells, was a sword hero. At that time there lived a young man named Tsui, whose father was a high official and the friend of the prince. And the father once sent his son to visit his princely friend, who was ill. The son was young, handsome and gifted. He went to carry out his father's instructions. When he entered the prince's palace, there stood three beautiful slave girls, who piled rosy peaches into a golden bowl, poured sugar over them and presented them to him. After he had eaten he took his leave, and his princely host ordered one of the slave girls, Rose-Red by name, to escort him to the gate. As they went along the young man kept looking back at her. And she smiled at him and made signs with her fingers. First she would stretch out three fingers, then she would turn her hand around three times, and finally she would point to a little mirror which she wore on her breast. When they parted she whispered to him: "Do not forget me!"
[question] Who was Rose-Red?
[input1] Is the [question] related to the [context]? Why?
[output1] Yes, the question is related to the context. Rose-Red is one of the three beautiful slave girls who served the prince. The young man, Tsui, was attracted to her and she seemed to be attracted to him as well.
[input2] Based on the above response [output1], which one of the following [options] best described [question] wrt [input1]?
[options] 1: They are not at all related; 2: They are remotely related; 3: They are somewhat related; 4: They are closely related.
[output2] 3: They are somewhat related.
Table 3: Reference-based evaluation scores on various question selection methods. Best and second best numbers (excluding baselines) are highlighted with boldface and underline, respectively.
Rania Abdelghani, Pierre-Yves Oudeyer, Edith Law, Catherine de Vulpillieres, and Hélène Sauzéon. 2022. Conversational agents for fostering curiosity-driven learning in children. International Journal of Human-Computer Studies.
Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168-6173, Florence, Italy.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In International Conference on Learning Representations.
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, and Hsiao-Wuen Hon. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 642-652. PMLR.
Meet Shah, Xinlei Chen, Marcus Rohrbach, and Devi Parikh. 2019. Cycle-consistency for robust visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6649-6658.
Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax.
Albert Webson and Ellie Pavlick. 2021. Do prompt-based models really understand the meaning of their prompts? arXiv preprint arXiv:2109.01247.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE-GEN: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20.
Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, et al. 2022. Fantastic questions and where to find them: FairytaleQA, an authentic dataset for narrative comprehension. arXiv preprint arXiv:2203.13947.
Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. BARTScore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems, volume 34, pages 27263-27277.
Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. arXiv.
Shiyue Zhang and Mohit Bansal. 2019. Addressing semantic drift in question generation for semi-supervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495-2509, Hong Kong, China.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697-12706. PMLR.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2223-2232.
| [] |
[] | [
"Ali Hürriyetoglu ali.hurriyetoglu@dh.huc.knaw.nl ",
"Osman Mutlu omutlu@ku.edu.tr ",
"Fırat Duruşan fdurusan@ku.edu.tr ",
"Onur Uca onuruca@mersin.edu.tr ",
"Alaeddin Selçuk alaeddinselcukgurel@gmail.com ",
"Gürel Huawei ",
"Benjamin Radford benjamin.radford@uncc.edu ",
"Yaoyao Dai yaoyao.dai@uncc.edu ",
"Hansi Hettiarachchi hansi.hettiarachchi@mail.bcu.ac.uk ",
"Niklas Stoehr niklas.stoehr@inf.ethz.ch ",
"Tadashi Nomoto nomoto@acm.org ",
"Milena Slavcheva milena@lml.bas.bg ",
"Francielle Vargas francielleavargas@usp.br ",
"Aaqib Javid ajavid20@ku.edu.tr ",
"Fatih Beyhan fatihbeyhan@sabanciuniv.edu ",
"Erdem Yörük ",
"\nKNAW Humanities Cluster DHLab\nKoc University\nKoc University\nMersin University\nUNC\nCharlotte\n",
"\nUNC\nCharlotte\n",
"\nBirmingham City University\nETH Zurich\nNational Institute of Japanese Literature\nBulgarian Academy of Sciences\nUniversity of São\nPaulo\n",
"\nKoc University\nSabanci University\nKoc University\n\n"
] | [
"KNAW Humanities Cluster DHLab\nKoc University\nKoc University\nMersin University\nUNC\nCharlotte",
"UNC\nCharlotte",
"Birmingham City University\nETH Zurich\nNational Institute of Japanese Literature\nBulgarian Academy of Sciences\nUniversity of São\nPaulo",
"Koc University\nSabanci University\nKoc University\n"
] | [
"Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE)"
] | We report results of the CASE 2022 Shared Task 1 on Multilingual Protest Event Detection. This task is a continuation of CASE 2021 that consists of four subtasks that are i) document classification, ii) sentence classification, iii) event sentence coreference identification, and iv) event extraction. The CASE 2022 extension consists of expanding the test data with more data in previously available languages, namely, English, Hindi, Portuguese, and Spanish, and adding new test data in Mandarin, Turkish, and Urdu for Sub-task 1, document classification. The training data from CASE 2021 in English, Portuguese and Spanish were utilized. Therefore, predicting document labels in Hindi, Mandarin, Turkish, and Urdu occurs in a zero-shot setting. The CASE 2022 workshop accepts reports on systems developed for predicting test data of CASE 2021 as well. We observe that the best systems submitted by CASE 2022 participants achieve between 79.71 and 84.06 F1-macro for new languages in a zero-shot setting. The winning approaches are mainly ensembling models and merging data in multiple languages. The best two submissions on CASE 2021 data outperform submissions from last year for Subtask 1 and Subtask 2 in all languages. Only the following scenarios were not outperformed by new submissions on CASE 2021: Subtask 3 Portuguese & Subtask 4 English. | 10.48550/arxiv.2211.11360 | [
"https://www.aclanthology.org/2022.case-1.31.pdf"
] | 253,734,830 | 2211.11360 | 51b258309f1e96028c8589ade222fbf574b9c57c |
December 7-8, 2022
Ali Hürriyetoglu ali.hurriyetoglu@dh.huc.knaw.nl
Osman Mutlu omutlu@ku.edu.tr
Fırat Duruşan fdurusan@ku.edu.tr
Onur Uca onuruca@mersin.edu.tr
Alaeddin Selçuk alaeddinselcukgurel@gmail.com
Gürel Huawei
Benjamin Radford benjamin.radford@uncc.edu
Yaoyao Dai yaoyao.dai@uncc.edu
Hansi Hettiarachchi hansi.hettiarachchi@mail.bcu.ac.uk
Niklas Stoehr niklas.stoehr@inf.ethz.ch
Tadashi Nomoto nomoto@acm.org
Milena Slavcheva milena@lml.bas.bg
Francielle Vargas francielleavargas@usp.br
Aaqib Javid ajavid20@ku.edu.tr
Fatih Beyhan fatihbeyhan@sabanciuniv.edu
Erdem Yörük
KNAW Humanities Cluster DHLab
Koc University
Koc University
Mersin University
UNC
Charlotte
UNC
Charlotte
Birmingham City University
ETH Zurich
National Institute of Japanese Literature
Bulgarian Academy of Sciences
University of São
Paulo
Koc University
Sabanci University
Koc University
Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE)
the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE)December 7-8, 2022Extended Multilingual Protest News Detection -Shared Task 1, CASE 2021 and 2022
We report results of the CASE 2022 Shared Task 1 on Multilingual Protest Event Detection. This task is a continuation of CASE 2021 that consists of four subtasks that are i) document classification, ii) sentence classification, iii) event sentence coreference identification, and iv) event extraction. The CASE 2022 extension consists of expanding the test data with more data in previously available languages, namely, English, Hindi, Portuguese, and Spanish, and adding new test data in Mandarin, Turkish, and Urdu for Sub-task 1, document classification. The training data from CASE 2021 in English, Portuguese and Spanish were utilized. Therefore, predicting document labels in Hindi, Mandarin, Turkish, and Urdu occurs in a zero-shot setting. The CASE 2022 workshop accepts reports on systems developed for predicting test data of CASE 2021 as well. We observe that the best systems submitted by CASE 2022 participants achieve between 79.71 and 84.06 F1-macro for new languages in a zero-shot setting. The winning approaches are mainly ensembling models and merging data in multiple languages. The best two submissions on CASE 2021 data outperform submissions from last year for Subtask 1 and Subtask 2 in all languages. Only the following scenarios were not outperformed by new submissions on CASE 2021: Subtask 3 Portuguese & Subtask 4 English.
Introduction
We aim at determining an event trigger and its arguments in a text snippet in the scope of an event extraction task. The performance of an automated system depends on the target event type, as it may be broad or the event trigger(s) may be ambiguous. The context in which the trigger occurs may need to be handled as well. For instance, the 'protest' event type may or may not be synonymous with 'demonstration' in a specific context. Moreover, hypothetical cases such as future protest plans may need to be excluded from the results. Finally, the relevance of a protest depends on the actors, as in a contentious political event only citizen-led events are in scope. This challenge is even harder in a cross-lingual and zero-shot setting, when training data are not available in new languages.
We provide a benchmark that consists of four subtasks and multiple languages in the scope of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text at The 2022 Conference on Empirical Methods in Natural Language Processing (CASE @ EMNLP 2022). 1 The work presented in this paper is a continuation of the work initiated in CASE 2021 Task 1 and consists in adding new documents in already available languages, as well as adding new languages to the evaluation data.
Task 1 consists of the following subtasks, which ensure the task is tackled incrementally: i) document classification, ii) sentence classification, iii) event sentence coreference identification, and iv) event extraction. The training data consist of documents in English, Portuguese, and Spanish, while the evaluation texts are in English, Hindi, Mandarin, Portuguese, Spanish, Turkish, and Urdu. Subtask 1 ensures documents with relevant senses of event triggers are selected. Next, Subtask 2 focuses on identifying event sentences in a document. Discriminating sentences that are about separate events and grouping them is done in Subtask 3 (Hürriyetoglu et al., 2020). Finally, the sentences that are about the same events are processed to identify the event trigger and its arguments in Subtask 4. In addition to accomplishing the event extraction task, the subtask division significantly improves the annotation quality, as the annotation team can focus on a specific part of the task and errors in previous levels are corrected during the preparation of the following subtask. The significance of this specific task division is twofold: i) facilitating the work with a random sample of documents by first identifying relevant documents and sentences before annotating or processing a sample or a complete archive of documents respectively; ii) increasing the generalizability of the automated systems that may be developed using this data (Mutlu, 2022).
The current report is about Task 1 in the scope of CASE 2022. Task 2 and Task 3 (Tan et al., 2022b,a) complement Task 1 by evaluating Task 1 systems on events related to COVID-19 and detecting causality respectively.
The following section, Section 2, describes the data we use for the shared task. Next, we describe the evaluation setting in Section 3. The results are provided in Section 4. Finally, Section 5 concludes this report.
Data
We used the CASE 2021 training data as the training data for CASE 2022. 2 The new document-level data, which are used to extend the CASE 2021 data, were randomly sampled from MOT v1.2 (Palen-Michel et al., 2022) 3 and were annotated by co-authors of this report. Documents were annotated by native speakers of the respective language. A single label was attached to each document. The annotation manual followed in the annotation process (Duruşan et al., 2022) was the same as that used in CASE 2021.
The total number of CASE 2022 documents with labels is 3,870 for English, 267 for Hindi, 300 for Mandarin, 670 for Portuguese, 399 for Spanish, 300 for Turkish, and 299 for Urdu.
Teams that developed systems for Subtasks 2, 3, and 4 evaluated their systems on CASE 2021 test data.
Evaluation setting
We utilized Codalab for the evaluation of Task 1 for CASE 2022. 4 The evaluation for CASE 2021 was performed on an additional scoring page 5 of the original 6 CASE 2021 Codalab page. Moreover, we launched an additional scoring page for CASE 2022 after completion of the official evaluation period. 7 Five submissions per subtask and language pair could be submitted in total for CASE 2022. The additional scoring phases of both CASE 2021 and CASE 2022 allow only one submission per subtask and language combination per day. The test data of CASE 2021 were shared with participants at the same time as the training data, whereas the CASE 2022 evaluation data were shared around two weeks before the submission deadline.
The same evaluation scores as in CASE 2021 were utilized: F1-macro for Subtasks 1 and 2, the CoNLL-2012 average score 8 for Subtask 3, and the CoNLL-2000 evaluation script 9 for Subtask 4.
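The F1-macro score used for Subtasks 1 and 2 averages the per-class F1 scores with equal weight per class. The following is a minimal sketch of that computation using scikit-learn; it is an illustration rather than the official scoring code, and the example gold and predicted labels are invented.

# Minimal sketch of the F1-macro computation for the document/sentence
# classification subtasks (illustrative labels, not real submissions).
from sklearn.metrics import f1_score

gold = ["HOF", "NOT", "HOF", "NOT", "HOF"]        # gold labels
predicted = ["HOF", "NOT", "NOT", "NOT", "HOF"]   # system predictions

# F1 is computed per class and then averaged with equal class weights.
macro_f1 = f1_score(gold, predicted, average="macro")
print(f"F1-macro: {macro_f1:.4f}")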
Results
Eighteen teams were registered for the task and obtained the training and test data for both CASE 2022 and CASE 2021. Ten and seven teams submitted their results for CASE 2021 and CASE 2022 respectively. Seven system description papers were submitted to the CASE 2022 workshop in total. The scores of the submissions are calculated on two different Codalab pages for CASE 2021 10 and CASE 2022 11. The teams that have participated are ARC-NLP (Sahin et al., 2022), CamPros (Kumari et al., 2022), CEIA-NLP (Fernandes et al., 2022), ClassBases (Wiriyathammabhum, 2022), EventGraph (You et al., 2022), NSUT-NLP (Suri et al., 2022), and SPARTA (Müller and Dafnos, 2022). We provide details of the results and submissions of the participating teams for each subtask in the following subsections. 12
CASE 2022 Subtask 1
The results for CASE 2022 subtask 1 are provided in Table 1. ARC-NLP finetune an ensemble of transformer-based language models and use ensemble learning, varying the training data for each target language. They also perform tests with automatic translation of both the training and test sets. They achieve 1st place in both Turkish and Mandarin, 2nd place in Portuguese, and 3rd to 5th place in the other languages. CEIA-NLP finetune the XLM-Roberta-base transformers model with all the training data to achieve 1st place in Portuguese and 3rd or 4th places in the other languages. ClassBases achieve 1st place on the Hindi test data by finetuning the XLM-Roberta-large model, and 5th or 6th places in the other languages.
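The ensembling strategy used by the strongest systems can be sketched as averaging the class probabilities of several fine-tuned classifiers and taking the argmax. The checkpoint names below are generic placeholders (in practice they would be the participants' fine-tuned models), so this is only an illustration of the idea, not a reproduction of any submitted system.

# Sketch of probability-averaging ensembling over several classifiers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoints = ["xlm-roberta-base", "bert-base-multilingual-cased"]  # placeholders

def ensemble_predict(text):
    probs = []
    for name in checkpoints:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        model.eval()
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    # Average the class probabilities across models, then take the argmax.
    return torch.stack(probs).mean(dim=0).argmax(dim=-1).item()

print(ensemble_predict("Thousands marched in the city centre to protest the new law."))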
CamPros finetune the XLM-Roberta-base model with all training data, and NSUT-NLP finetune mBERT while augmenting the data by translating different languages into each other.
9 https://github.com/sighsmile/conlleval, accessed on November 13, 2022.
10 https://codalab.lisn.upsaclay.fr/competitions/7126#results, accessed on Nov 14, 2022.
11 https://codalab.lisn.upsaclay.fr/competitions/7438#results, accessed on Nov 14, 2022.
12 The results and system descriptions from participants that did not submit a system description paper are provided as well. This shows the capacity of the state-of-the-art systems on our benchmark. These systems are provided with their Codalab names, which are colabhero, fine_sunny_day, gauravsingh, lapardnemihk9989, and lizhuoqun2021_iscas.
CASE 2021 Subtask 1
The extended results for CASE 2021 subtask 1 are provided in Table 2. Bold team names indicate CASE 2022 entries. ClassBases finetune the XLM-Roberta-large transformers model to place 1st on the Hindi and 2nd on the Portuguese test data. They also achieve 5th and 6th places in Spanish and English respectively. Another team that submitted their model on the CASE 2021 test data is ARC-NLP, taking 5th, 8th and 9th places in Portuguese, Spanish and English.
Subtask 2
The extended results for CASE 2021 subtask 2 are provided in Table 3. Bold team names indicate CASE 2022 entries. ARC-NLP train an ensemble of transformers models using all training data to achieve 4th, 5th and 7th places in Spanish, English and Portuguese respectively. ClassBases finetune mLUKE-base for Portuguese and Spanish, placing 5th in both, and XLM-Roberta-large for English, taking 8th place. 13
Subtask 3
The extended results for CASE 2021 subtask 3 are provided in Table 4. Bold team names indicate CASE 2022 entries. ARC-NLP achieve 1st place in both English and Spanish and 2nd place in Portuguese. They use an ensemble of English transformers models for the English, Portuguese and Spanish test data. They train with only English data, translating the Portuguese test data into English during model prediction. For the Spanish test data, they train with English, translated Portuguese and translated Spanish, and test on translated Spanish data.
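The translate-then-predict strategy described above can be sketched as machine-translating non-English test sentences into English and then applying an English model to the translated text. The translation and classifier checkpoints below are illustrative choices, not the participants' actual models, and the classifier here is only a stand-in for whatever English model is applied downstream.

# Sketch of "translate the test data into English, then run the English model".
from transformers import pipeline

# Spanish -> English machine translation (illustrative model choice).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")
# Placeholder English model; in practice this would be a fine-tuned task model.
classifier = pipeline("text-classification", model="bert-base-cased")

spanish_sentence = "Miles de personas protestaron frente al parlamento."
english_sentence = translator(spanish_sentence, max_length=128)[0]["translation_text"]
prediction = classifier(english_sentence)[0]
print(english_sentence, prediction)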
Subtask 4
The extended results for CASE 2021 subtask 4 are provided in Table 5. Bold team names indicate CASE 2022 entries. SPARTA employ two methods, both of which build on pretrained XLM-Roberta-large and use a data augmentation technique (sentence reordering). For English and Portuguese, they gather articles that contain protest events from outside sources and use them for further pretraining. For Spanish, they use an XLM-Roberta-large model that was further pretrained on CoNLL 2002 Spanish data. They take 1st place in both Portuguese and Spanish and 3rd place in English. ARC-NLP finetune an ensemble of transformers models for each language. They use all training data for Portuguese and Spanish, and only English data for the English test set; they achieve 2nd place in all languages. EventGraph aim to solve event extraction as semantic graph parsing. They use a graph encoding method in which the labels for triggers and arguments are represented as node labels, also splitting multiple triggers. They use the pretrained XLM-Roberta-large as their encoder and achieve 4th place in both English and Portuguese and 5th place in Spanish. ClassBases take 9th place in all languages, finetuning the XLM-Roberta-base transformers model.
13 CamPros do not describe their model for subtask 2.
Conclusion
The CASE 2022 extension consists of expanding the test data with more data in previously available languages, namely, English, Hindi, Portuguese, and Spanish, and adding new test data in Mandarin, Turkish, and Urdu for Sub-task 1, document classification. The training data from CASE 2021 in English, Portuguese and Spanish were utilized. Therefore, predicting document labels in Hindi, Mandarin, Turkish, and Urdu occurs in a zero-shot setting.
The CASE 2022 workshop accepts reports on systems developed for predicting test data of CASE 2021 as well. We observe that the best systems submitted by CASE 2022 participants achieve between 79.71 and 84.06 F1-macro for new languages in a zero-shot setting. The winning approaches are mainly ensembling models and merging data in multiple languages. The best two submissions on CASE 2021 data outperform submissions from last year for Subtask 1 and Subtask 2 in all languages. Only the following scenarios were not outperformed by new submissions on CASE 2021: Subtask 3 Portuguese & Subtask 4 English.
We aim to increase the number of languages and subtasks, such as event coreference resolution and event type classification, in the following edition of this shared task.
The CASE 2022 test data are the union of the CASE 2021 test data and additional new documents in both available and new languages. The new languages are Mandarin, Turkish, and Urdu.
2 https://github.com/emerging-welfare/case-2021-shared-task for CASE 2021 and https://github.com/emerging-welfare/case-2022-multilingual-event for CASE 2022.
Table 2: The performance of the submissions in terms of F1-macro and their ranks (in parentheses) for each language and each team participating in CASE 2021 subtask 1. Bold teams indicate CASE 2022 entries.

Table 3: The performance of the submissions in terms of F1-macro and their ranks (in parentheses) for each language and each team participating in subtask 2. Bold teams indicate CASE 2022 entries.

Team                 | English    | Portuguese | Spanish
ALEM                 | 79.67 (9)  | 42.79 (15) | 45.30 (15)
AMU-EuraNova         | 75.64 (14) | 81.61 (11) | 76.39 (11)
DaDeFrTi             | 79.28 (10) | 86.62 (6)  | 85.17 (6)
FKIE_itf_2021        | 64.96 (16) | 75.81 (13) | 70.49 (14)
HSAIR                | 78.50 (11) | 85.06 (8)  | 83.25 (8)
IBM MNLP IE          | 84.56 (4)  | 88.47 (3)  | 88.61 (2)
IIITT                | 82.91 (7)  | 79.51 (12) | 75.78 (12)
SU-NLP               | 83.05 (6)  | N/A        | N/A
NoConflict           | 85.32 (3)  | 87.00 (4)  | 79.97 (10)
jiawei1998           | 76.14 (13) | 84.67 (9)  | 83.05 (9)
jitin                | 66.96 (15) | 69.02 (14) | 72.94 (13)
ARC-NLP              | 83.77 (5)  | 86.53 (7)  | 87.20 (4)
CamPros              | 77.94 (12) | 81.63 (10) | 83.69 (7)
ClassBases           | 81.12 (8)  | 86.83 (5)  | 87.10 (5)
fine_sunny_day       | 85.75 (2)  | 89.67 (1)  | 88.78 (1)
lizhuoqun2021_iscas  | 85.93 (1)  | 88.86 (2)  | 88.61 (2)
Table 4: The performance of the submissions in terms of the CoNLL-2012 average score (Pradhan et al., 2014) and their ranks for each language and each team participating in subtask 3. Bold teams indicate CASE 2022 entries.
Table 5: The performance of the submissions in terms of F1 score based on CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) and their ranks (in parentheses) for each language and each team participating in subtask 4. Bold teams indicate CASE 2022 entries.

Team                   | English   | Portuguese | Spanish
AMU-EuraNova           | 69.96 (7) | 61.87 (8)  | 56.64 (8)
Handshakes AI Research | 73.53 (5) | 68.15 (6)  | 62.21 (6)
IBM MNLP IE            | 78.11 (1) | 73.24 (3)  | 66.20 (3)
SU-NLP                 | 2.58 (10) | N/A        | N/A
jitin                  | 66.43 (8) | 64.19 (7)  | 58.35 (7)
ARC-NLP                | 77.83 (2) | 73.84 (2)  | 67.99 (2)
ClassBases             | 46.88 (9) | 12.52 (9)  | 37.09 (9)
EventGraph             | 74.76 (4) | 71.72 (4)  | 64.48 (5)
SPARTA                 | 76.60 (3) | 74.56 (1)  | 69.86 (1)
lapardnemihk9989       | 72.18 (6) | 70.98 (5)  | 64.83 (4)
https://emw.ku.edu.tr/case-2022/, accessed on November 13, 2022.
https://github.com/LoicGrobol/scorch, accessed on November 13, 2022.
3 https://github.com/bltlab/mot
4 https://codalab.lisn.upsaclay.fr/competitions/7438, accessed on November 13, 2022.
5 https://codalab.lisn.upsaclay.fr/competitions/7126, accessed on November 13, 2022.
6 https://competitions.codalab.org/competitions/31247, which is not accessible due to the change of the servers of Codalab.
7 https://codalab.lisn.upsaclay.fr/competitions/7768, accessed on November 13, 2022.

References

Fırat Duruşan, Ali Hürriyetoglu, Erdem Yörük, Osman Mutlu, Çagrı Yoltar, Burak Gürel, and Alvaro Comin. 2022. Global contentious politics database (GLOCON) annotation manuals. https://doi.org/10.48550/ARXIV.2206.10299.
Diogo Fernandes, Adalberto Junior, Gabriel da Mata Marques, Anderson da Silva Soares, and Arlindo Rodrigues Galvao Filho. 2022. CEIA-NLP at CASE 2022 Task 1: Protest News Detection for Portuguese. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Ali Hürriyetoglu, Osman Mutlu, Erdem Yörük, Farhana Ferdousi Liza, Ritesh Kumar, and Shyam Ratan. 2021. Multilingual protest news detection - shared task 1, CASE 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), pages 79-91, Online. Association for Computational Linguistics.
Ali Hürriyetoglu, Hristo Tanev, Vanni Zavarella, Reyyan Yeniterzi, Osman Mutlu, and Erdem Yörük. 2022. Challenges and applications of automated extraction of socio-political events from text (CASE 2022): Workshop and shared task report. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Ali Hürriyetoglu, Osman Mutlu, Fatih Beyhan, Fırat Duruşan, Ali Safaya, Reyyan Yeniterzi, and Erdem Yörük. 2022. Event coreference resolution for contentious politics events. https://doi.org/10.48550/ARXIV.2203.10123.
Ali Hürriyetoglu, Erdem Yörük, Osman Mutlu, Fırat Duruşan, Çagrı Yoltar, Deniz Yüret, and Burak Gürel. 2021. Cross-Context News Corpus for Protest Event-Related Knowledge Base Construction. Data Intelligence, 3(2):308-335.
Ali Hürriyetoglu, Vanni Zavarella, Hristo Tanev, Erdem Yörük, Ali Safaya, and Osman Mutlu. 2020. Automated extraction of socio-political events from news (AESPEN): Workshop and shared task report. In Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020, pages 1-6, Marseille, France. European Language Resources Association (ELRA).
Neha Kumari, Mrinal Anand, Tushar Mohan, and Ponnurangam Kumaraguru. 2022. CamPros at CASE 2022 Task 1: Transformer-based Multilingual Protest News Detection. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Osman Mutlu. 2022. Utilizing coarse-grained data in low-data settings for event extraction. https://doi.org/10.48550/ARXIV.2205.05468.
Arthur Müller and Andreas Dafnos. 2022. SPARTA at CASE 2021 Task 1: Evaluating Different Techniques to Improve Event Extraction. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Chester Palen-Michel, June Kim, and Constantine Lignos. 2022. Multilingual open text release 1: Public domain news in 44 languages. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2080-2089, Marseille, France. European Language Resources Association.
Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30-35, Baltimore, Maryland. Association for Computational Linguistics.
Umitcan Sahin, Oguzhan Ozcelik, Izzet Emre Kucukkaya, and Cagri Toraman. 2022. ARC-NLP at CASE 2022 Task 1: Ensemble Learning for Multilingual Protest Event Detection. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Manan Suri, Krish Chopra, and Adwita Arora. 2022. NSUT-NLP at CASE 2022 Task 1: Multilingual Protest Event Detection using Transformer-based Models. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Fiona Anting Tan, Ali Hürriyetoglu, Tommaso Caselli, Nelleke Oostdijk, Hansi Hettiarachchi, Tadashi Nomoto, Onur Uca, and Farhana Ferdousi Liza. 2022a. Event causality identification with causal news corpus - shared task 3, CASE 2022. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), Online. Association for Computational Linguistics.
Fiona Anting Tan, Ali Hürriyetoglu, Tommaso Caselli, Nelleke Oostdijk, Tadashi Nomoto, Hansi Hettiarachchi, Iqra Ameer, Onur Uca, Farhana Ferdousi Liza, and Tiancheng Hu. 2022b. The causal news corpus: Annotating causal relations in event sentences from news. In Proceedings of the Language Resources and Evaluation Conference, pages 2298-2310, Marseille, France. European Language Resources Association.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147, USA. Association for Computational Linguistics.
Peratham Wiriyathammabhum. 2022. ClassBases at the CASE-2022 Multilingual Protest Event Detection Task: Multilingual Protest News Detection and Automatically Replicating Manually Created Event Datasets. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Huiling You, David Samuel, Samia Touileb, and Lilja Øvrelid. 2022. EventGraph at CASE 2021 Task 1: A General Graph-based Approach to Protest Event Extraction. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
Erdem Yörük, Ali Hürriyetoglu, Fırat Duruşan, and Çagrı Yoltar. 2021. Random sampling in corpus design: Cross-context generalizability in automated multicountry protest event collection. American Behavioral Scientist, 0(0):00027642211021630.
Vanni Zavarella, Hristo Tanev, Ali Hürriyetoglu, Peratham Wiriyathammabhum, and Bertrand De Longueville. 2022. Tracking COVID-19 protest events in the United States. Shared Task 2: Event Database Replication, CASE 2022. In Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2022), online. Association for Computational Linguistics (ACL).
| [
"https://github.com/sighsmile/conlleval,",
"https://github.com/emerging-welfare/",
"https://github.com/LoicGrobol/scorch,",
"https://github.com/emerging-welfare/",
"https://github.com/bltlab/mot"
] |
[
"Multilingual Hate Speech and Offensive Content Detection using Modified Cross-entropy Loss",
"Multilingual Hate Speech and Offensive Content Detection using Modified Cross-entropy Loss"
] | [
"Arka Mitra \nIndian Institute of Technology\nKharagpurIndia\n",
"Priyanshu Sankhala \nNational Institute of Technology Raipur\nIndia\n"
] | [
"Indian Institute of Technology\nKharagpurIndia",
"National Institute of Technology Raipur\nIndia"
] | [] | The number of increased social media users has led to a lot of people misusing these platforms to spread offensive content and use hate speech. Manual tracking the vast amount of posts is impractical so it is necessary to devise automated methods to identify them quickly. Large language models are trained on a lot of data and they also make use of contextual embeddings. We fine-tune the large language models to help in our task. The data is also quite unbalanced; so we used a modified cross-entropy loss to tackle the issue. We observed that using a model which is fine-tuned in hindi corpora performs better. Our team (HNLP) achieved the macro F1-scores of 0.808, 0.639 in English Subtask A and English Subtask B respectively. For Hindi Subtask A, Hindi Subtask B our team achieved macro F1-scores of 0.737, 0.443 respectively in HASOC 2021. | null | [
"https://arxiv.org/pdf/2202.02635v1.pdf"
] | 246,634,213 | 2202.02635 | cf8d623eb1dd7262943c9a4c5e3b52fda9a49898 |
Multilingual Hate Speech and Offensive Content Detection using Modified Cross-entropy Loss
Arka Mitra
Indian Institute of Technology
KharagpurIndia
Priyanshu Sankhala
National Institute of Technology Raipur
India
Multilingual Hate Speech and Offensive Content Detection using Modified Cross-entropy Loss
Hate speech detection, Text classification, Deep-learning, Transfer learning
The increased number of social media users has led to many people misusing these platforms to spread offensive content and hate speech. Manually tracking the vast number of posts is impractical, so it is necessary to devise automated methods to identify them quickly. Large language models are trained on a lot of data and also make use of contextual embeddings. We fine-tune the large language models to help in our task. The data is also quite unbalanced, so we used a modified cross-entropy loss to tackle the issue. We observed that using a model which is fine-tuned on Hindi corpora performs better. Our team (HNLP) achieved macro F1-scores of 0.808 and 0.639 in English Subtask A and English Subtask B respectively. For Hindi Subtask A and Hindi Subtask B our team achieved macro F1-scores of 0.737 and 0.443 respectively in HASOC 2021.
Introduction
With the increased use of social media platforms like Twitter, Facebook, Instagram, and YouTube by users around the world, the platforms have had positive aspects including, but not limited to, social interaction, meeting like-minded people, and giving each individual a voice to share their opinions [1]. However, social media platforms can also be used by certain individuals or groups to spread hate comments and hate posts, which can lead to anxiety, mental illness, and severe stress for people who consume that hate content [2]. It becomes necessary to detect such activities at the earliest to stop them from spreading, thereby making social media a healthy place to interact and share views without fear of receiving hate comments [3]. Hate posts can be insults, racist remarks, or discrimination on the basis of a particular gender, religion, nationality, age bracket, or ethnicity. Such comments can also incite violence amongst people. With the large number of posts being shared each minute, it is not possible to manually classify each of the posts. Thus, an automated system is required to detect hate speech activities quickly, as hate content gains a lot of attention and tends to be shared fast as well [4]. Direct targeted abuse and profane content are not that difficult to classify. However, it is extremely hard to recognize indirect hate content, which often involves humour, irony, or sarcasm, even for a human annotator when the context of the posts is not provided. This makes the classification task additionally difficult even for the most advanced frameworks. HASOC 2021 [5] is a shared task for the identification of hateful and offensive content in English and Indo-Aryan languages. We participated in two sub-tasks for English and Hindi [6]. The sub task A refers to classifying twitter samples into:
• HOF Hate and offensive :-contains hate speech/profane/offensive content. • NOT Non Hate-offensive :-which does not contain any hate speech, profane, offensive content.
The sub task B refers to classifying twitter samples into: For tasks pertaining to English language, we experimented with large language models like fine-tuning BERT (Bidirectional Encoder Representation from Transformer) [7], RoBERTa (A Robustly Optimized BERT Pretraining Approach) [8] and XLNet (Generalized Autoregressive Pretraining for Language Understanding) [9] out of which RoBERTa outperformed others with the macro F1-score of 0.8089 while BERT and XLNet had the macro F1-score of 0.8050 and 0.7757 respectively in Subtask A and for Subtask B the macro F1-score was 0.6396 with RoBERTa model respectively. For the tasks referring to Hindi language, the authors used a model which is fine-tuned on detecting Hinglish sentiments [10] and had the macro F1-score of 0.7379 for Subtask A and macro F1-score of 0.4431 for Subtask B.
Related Work
In this section, we will discuss the previous state of the art methods proposed for detection of hate speech. The use of BERT and other transfer learning algorithms, and deep neural models based on LSTMs and CNNs tend to perform similar but better than traditional classifiers such as SVM [11]. The number of papers, trying to automate Hate speech detection, that have been published in Web of Science has been increasing exponentially [12]. Waseem et al. [13] have classified hate speech into different categories and led to the Offensive Language Identification Dataset (OLID) [14]. There has been work in different sub fields of abuse like in sexism [15,16], cyberbullying [17], trolling [18] and so on. There are hate comments in most of the social media sites like Youtube [19], Instagram [20] which shows the importance of having a generalized Hate detection model [13]. Work done by Yin et al. [21] gives an overall idea of the generalizability of the different models that are present for hate speech detection. For the different models, the features from the input that are used have a great impact on the performance. Xu et al. [22] showed that part-of-speech tags are quite successful for improving the model; it is further improved by Table 1 The detailed data description is given in considering the sentiment values [23]. The sentences in the online platforms do not always follow the normal textual formats or correct spellings. Thus, Mehdad et al. [24] used a character level encoding rather than using the word level encoding proposed by Meyer et al. [25]. The type of architecture used also impacts on the performance on the model. Swamy et al. [26] performed a comprehensive study that shows how different models perform and generalize.
Methodology
HASOC 2021 [6] has been going on for two years now and a lot of different ways are uncovered to detect hate content [27,28]. This paper covers the use of large language models for classification of hate speech content.
Languages
The Hate Speech and Offensive Content Identification in English and Indo-Aryan Languages shared task, HASOC 2021 [5,6], proposes two different tasks in 3 different languages: English, Hindi, and Marathi. The authors participated in both tasks for the English and Hindi languages.
Task description
The first task in all languages, known as "Subtask A", refers to a classification problem of twitter samples which were labelled as HOF (Hate and offensive content) and NOT (Not hate and offensive content). The second task, known as "Subtask B", refers to a classification of twitter samples which were labelled as PRFN (Profane Words), HATE (Hate speech), OFFN (Offensive Content), and NONE (Non-hate content). The detailed description of all columns present in the dataset is given in Table 1 and the number of twitter samples corresponding to each label is given in Table 2.
Approach
The dataset that is provided in all the subtasks has an unequal number of samples per class. Hindi. From the ratio, one can understand that it would be unjust for the loss for each class to be the same. The cross entropy loss assigns same value to a probability score irrespective of To mitigate this, the authors have used modified cross-entropy loss as shown in Eqn. 1; it assigns a greater loss whenever a class with smaller frequency is misclassified. The weights factor in Eqn. 1 has a higher value for a class if the class has a lower frequency. This penalizes the model whenever that class is wrongly predicted and helps to improve the performance of the model.
( , ) = ℎ [ ] * (− [ ] + ( ∑︁ ( [ ]))) (1)
The authors used large-language models since the models are trained on a large amount of data and thus can understand the semantic structure of sentences and the tokens that are sent as inputs to these models have a contextual embedding associated with them. The output of the model is taken and then pooled. The resulting output is then passed through a linear layer and a argmax is used to find the expected class of the sentence as shown in Figure. 1.
Results
The authors submitted four groups of results Table 3 gives the final results for our submission. The results has been evaluated on a test dataset, which is about one-third of the training data size, using the Macro F1 scores. The experiments showed that large cased BERT performed the best followed by RoBERTa and the lowest scores were obtained from the BERT base model. The maximum sequence length that is used has a direct impact on the performance; with a larger length having a better performance, with the training time increases at the same time. The methodology followed for both English and Hindi are the same, but the performance obtained for the English subtask is quite better than that for the Hindi subtask. This shows that the language models are pretty good in understanding the semantics for English but fail to do so for a low resource language like Hindi. The modified cross-entropy loss provided a better F1 score as compared to training with equal importance given to all of the separate classes.
Experimental Details
For the English language tasks we experimented with the RoBERTa base pre-trained model [8], a fine-tuned BERT large cased architecture [7], and XLNet [9], all with the same configuration, i.e., the maximum length is set to 120, the batch size to 8, and the models are trained for 4 epochs. The AdamW optimizer [29] with an initial learning rate of 2e-5 is used for training. Similarly, for the Hindi language tasks we used a pre-trained model [10] from the Hugging Face [30] library. The maximum length has been set to 200, the batch size was 8, and the number of epochs was set to 4. There is a trade-off between the accuracy and the total number of tokens. The amount of time the model takes for training is proportional to the square of the number of tokens; as the number of tokens increases, the training time increases. However, when we truncate the maximum length, some of the information present in the sentence is lost and the prediction for the sentence might be wrong. We therefore had to consider a trade-off between the accuracy and the time it takes for the model to train. For deciding the maximum sentence length, about the 99th percentile of the number of tokens in sentences is considered. For generating predictions we made a split of 90% for training and 10% for validation to compare the performance of different models for each specific task, and based on the F1 scores of a particular epoch we updated the model weights.
The weights corresponding to the best validation scores have been selected for inferring the test values. We observed that usually the 3rd and 4th training epochs had higher F1 scores. For reproducibility, the code has been uploaded to GitHub 1 . The random seed has been set to 42.
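A condensed sketch of the fine-tuning setup described in this section (RoBERTa-base, maximum length 120, batch size 8, 4 epochs, AdamW with learning rate 2e-5, seed 42) is given below. The tiny in-line dataset is a placeholder for the HASOC data, and the per-epoch validation/checkpoint selection is only indicated in a comment, so this is an illustration of the configuration rather than the authors' training script.

# Condensed sketch of the fine-tuning configuration described above.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

torch.manual_seed(42)                                   # seed used for reproducibility
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
optimizer = AdamW(model.parameters(), lr=2e-5)          # AdamW, initial lr 2e-5

texts = ["sample tweet one", "sample tweet two"]        # placeholder for the HASOC tweets
labels = torch.tensor([0, 1])
batch = tokenizer(texts, truncation=True, max_length=120, padding=True, return_tensors="pt")

model.train()
for epoch in range(4):                                  # 4 epochs (batch size 8 in practice)
    optimizer.zero_grad()
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    # After each epoch: evaluate macro F1 on the 10% validation split and
    # keep the checkpoint with the best score for test-set inference.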
Conclusion
In this paper, we explain the shared tasks presented by HASOC in English and Indo-Aryan languages. We used large language models which are pre-trained on large corpora for hate speech detection tasks and to evaluate predictions by different models a validation dataset was created. In future work, we hope to try out more different fine tuned models.
• HATE Hate speech :- Posts under this class contain Hate speech content. • OFFN Offensive :- Posts under this class contain offensive content. • PRFN Profane :- These posts contain profane words. • NONE Non-Hate :- These posts do not contain any hate speech content.
Figure 1: Pipeline
Figure 2: Overall Language Model
Table 1: The detailed data description is given in the table below.

Column   | Description
tweet_id | unique value for the tweets
text     | full text of the tweets
task1    | label, either the tweet is HOF or NOT, for Subtask A
task2    | label, either the tweet is HATE, OFFN or PRFN, for Subtask B
ID       | unique HASOC ID for each tweet, for the Hindi data set

Table 2 shows the overall distribution. For subtask A for English, the ratio of the classes (HOF and NOT) is around 2:1, while for Hindi it is around 1:2. Again, for subtask B, the ratio of the classes (PRFN, HATE, OFFN, NONE) is about 2:1:1:2 for English and approximately 2:5:6:30 for Hindi.
Table 2: Class division of both subtasks for the Train and Test datasets.

                      Train set           Test set
Subtask    Label      English   Hindi     English   Hindi
Subtask A  HOF        2501      1433      798       1027
           NOT        1342      3161      483       505
Subtask B  PRFN       1196      213       224       74
           HATE       683       566       379       215
           OFFN       622       654       95        216
           NONE       1342      3161      483       1027
Total                 3843      4594      1281      1532
Table 3: Results from the official test set from the leaderboard published from 15% of the data set.

Task              | Our Score (Macro average F1) | Best Score | Rank
English Subtask A | 0.8089                       | 0.8177     | 4
English Subtask B | 0.6396                       | 0.6657     | 6
Hindi Subtask A   | 0.7379                       | 0.7825     | 22
Hindi Subtask B   | 0.4431                       | 0.5603     | 16
Acknowledgments
The authors would like to thank the organizers of Hate Speech and Offensive Content Identification in Indo-Aryan Languages 2021 [5] for conducting this data challenge. The authors gratefully acknowledge Google Colab for providing GPUs to do the computation. All pre-trained models are based upon work supported by Hugging Face [30].
[1] O. Istaiteh, R. Al-Omoush, and S. Tedmori. 2020. Racist and Sexist Hate Speech Detection: Literature Review. In 2020 International Conference on Intelligent Data Science Technologies and Applications (IDSTA), pages 95-99.
[2] S. Kawate and K. Patil. 2017. Analysis of foul language usage in social media text conversation. Int. J. Soc. Media Interact. Learn. Environ., 5:227-251.
[3] S. Jaki, T. De Smedt, M. Gwóźdź, R. Panchal, A. Rossa, and G. De Pauw. 2019. Online hatred of women in the Incels.me forum. Journal of Language Aggression and Conflict, 7:240-268. doi:10.1075/jlac.00026.jak.
[4] B. Mathew, R. Dutt, P. Goyal, and A. Mukherjee. 2019. Spread of Hate Speech in Online Social Media. In Proceedings of the 10th ACM Conference on Web Science.
[5] S. Modha, T. Mandl, G. K. Shahi, H. Madhu, S. Satapara, T. Ranasinghe, and M. Zampieri. 2021. Overview of the HASOC Subtrack at FIRE 2021: Hate Speech and Offensive Content Identification in English and Indo-Aryan Languages and Conversational Hate Speech. In FIRE 2021: Forum for Information Retrieval Evaluation, Virtual Event, 13th-17th December 2021. ACM.
[6] T. Mandl, S. Modha, G. K. Shahi, H. Madhu, S. Satapara, P. Majumder, J. Schäfer, T. Ranasinghe, M. Zampieri, D. Nandini, and A. K. Jaiswal. 2021. Overview of the HASOC subtrack at FIRE 2021: Hate Speech and Offensive Content Identification in English and Indo-Aryan Languages. In Working Notes of FIRE 2021 - Forum for Information Retrieval Evaluation. CEUR. URL: http://ceur-ws.org/.
1 https://github.com/priyanshusankhala/hasoc-hnlp
[7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL.
[8] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv abs/1907.11692.
[9] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In NeurIPS.
[10] M. Bhange and N. Kasliwal. 2020. HinglishNLP: Fine-tuned language models for Hinglish sentiment detection.
[11] S. Modha, T. Mandl, P. Majumder, and D. Patel. 2020. Tracking Hate in Social Media: Evaluation, Challenges and Approaches. SN Comput. Sci., 1:105.
[12] M. A. Paz, J. Montero-Díaz, and A. Moreno-Delgado. 2020. Hate speech: A systematized review. SAGE Open, 10.
[13] Z. Waseem, T. Davidson, D. Warmsley, and I. Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics. doi:10.18653/v1/W17-3012.
[14] M. Zampieri, S. Malmasi, P. Nakov, S. Rosenthal, N. Farra, and R. Kumar. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics. doi:10.18653/v1/N19-1144.
[15] Z. Waseem and D. Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computational Linguistics. doi:10.18653/v1/N16-2013.
[16] S. Jaki, T. de Smedt, M. Gwóźdź, R. Panchal, A. Rossa, and G. D. Pauw. 2019. Online hatred of women in the Incels.me forum. Journal of Language Aggression and Conflict.
[17] M. Dadvar, D. Trieschnigg, R. Ordelman, and F. de Jong. 2013. Improving cyberbullying detection with user context. In Advances in Information Retrieval, pages 693-696, Berlin, Heidelberg. Springer Berlin Heidelberg.
[18] R. Kumar, A. K. Ojha, M. Zampieri, and S. Malmasi (Eds.). 2018. Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018). Association for Computational Linguistics, Santa Fe, New Mexico, USA. URL: https://aclanthology.org/W18-4400.
[19] K. Dinakar, R. Reichart, and H. Lieberman. 2011. Modeling the Detection of Textual Cyberbullying. In The Social Mobile Web.
[20] H. Zhong, H. Li, A. C. Squicciarini, S. M. Rajtmajer, C. Griffin, D. J. Miller, and C. Caragea. 2016. Content-Driven Detection of Cyberbullying on the Instagram Social Network. In IJCAI.
[21] W. Yin and A. Zubiaga. 2021. Towards generalisable hate speech detection: a review on obstacles and solutions. PeerJ Computer Science, 7:e598.
[22] J.-M. Xu, K.-S. Jun, X. Zhu, and A. Bellmore. 2012. Learning from bullying traces in social media. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 656-666, Montréal, Canada. Association for Computational Linguistics. URL: https://aclanthology.org/N12-1084.
[23] T. Davidson, D. Warmsley, M. W. Macy, and I. Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In ICWSM.
Do characters abuse more than words?. Y Mehdad, J Tetreault, 10.18653/v1/W16-3638Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. the 17th Annual Meeting of the Special Interest Group on Discourse and DialogueLos AngelesAssociation for Computational LinguisticsY. Mehdad, J. Tetreault, Do characters abuse more than words?, in: Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Association for Computational Linguistics, Los Angeles, 2016, pp. 299-303. URL: https://aclanthology.org/ W16-3638. doi:10.18653/v1/W16-3638.
A platform agnostic dual-strand hate speech detector. J S Meyer, B Gambäck, 10.18653/v1/W19-3516Proceedings of the Third Workshop on Abusive Language Online. the Third Workshop on Abusive Language OnlineFlorence, ItalyAssociation for Computational LinguisticsJ. S. Meyer, B. Gambäck, A platform agnostic dual-strand hate speech detector, in: Proceed- ings of the Third Workshop on Abusive Language Online, Association for Computational Linguistics, Florence, Italy, 2019, pp. 146-156. URL: https://aclanthology.org/W19-3516. doi:10.18653/v1/W19-3516.
Studying generalisability across abusive language detection datasets. S D Swamy, A Jamatia, B Gambäck, 10.18653/v1/K19-1088Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). the 23rd Conference on Computational Natural Language Learning (CoNLL)Hong Kong, ChinaAssociation for Computational LinguisticsS. D. Swamy, A. Jamatia, B. Gambäck, Studying generalisability across abusive language detection datasets, in: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 940-950. URL: https://aclanthology.org/K19-1088. doi:10.18653/v1/ K19-1088.
Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages. T Mandl, S Modha, P Majumder, D Patel, M Dave, C Mandalia, A Patel, Proceedings of the 11th Forum for Information Retrieval Evaluation. the 11th Forum for Information Retrieval EvaluationT. Mandl, S. Modha, P. Majumder, D. Patel, M. Dave, C. Mandalia, A. Patel, Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages, Proceedings of the 11th Forum for Information Retrieval Evaluation (2019).
Overview of the HASOC track at FIRE 2020: Hate Speech and Offensive Content Identification in Indo-European Languages. T Mandl, S Modha, G K Shahi, A K Jaiswal, D Nandini, D Patel, P Majumder, J Schäfer, Proceedings of the 12th Forum for Information Retrieval Evaluation. the 12th Forum for Information Retrieval EvaluationT. Mandl, S. Modha, G. K. Shahi, A. K. Jaiswal, D. Nandini, D. Patel, P. Majumder, J. Schäfer, Overview of the HASOC track at FIRE 2020: Hate Speech and Offensive Content Iden- tification in Indo-European Languages, Proceedings of the 12th Forum for Information Retrieval Evaluation (2020).
Decoupled Weight Decay Regularization. I Loshchilov, F Hutter, ICLRI. Loshchilov, F. Hutter, Decoupled Weight Decay Regularization, in: ICLR, 2019.
T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, J Davison, S Shleifer, P Von Platen, C Ma, Y Jernite, J Plu, C Xu, T L Scao, S Gugger, M Drame, Q Lhoest, A M Rush, arXiv:1910.03771HuggingFace's Transformers: State-of-the-art Natural Language Processing. 2020T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, A. M. Rush, HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2020. arXiv:1910.03771.
| [
"https://github.com/priyanshusankhala/hasoc-hnlp"
] |
[
"SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing",
"SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing"
] | [
"Junyi Ao \nDepartment of Computer Science and Engineering\nSouthern University of Science and Technology\n\n\nDepartment of Computing\nThe Hong Kong Polytechnic University\n\n",
"Rui Wang \nDepartment of Computer Science and Technology\nTongji University\n\n",
"Long Zhou ",
"Chengyi Wang ",
"Shuo Ren ",
"Yu Wu ",
"Shujie Liu ",
"Tom Ko \nDepartment of Computer Science and Engineering\nSouthern University of Science and Technology\n\n",
"Qing Li \nDepartment of Computing\nThe Hong Kong Polytechnic University\n\n",
"Yu Zhang \nDepartment of Computer Science and Engineering\nSouthern University of Science and Technology\n\n",
"Zhihua Wei \nDepartment of Computer Science and Technology\nTongji University\n\n",
"Yao Qian ",
"Jinyu Li ",
"Furu Wei ",
"Microsoft ",
"Peng Cheng Laboratory "
] | [
"Department of Computer Science and Engineering\nSouthern University of Science and Technology\n",
"Department of Computing\nThe Hong Kong Polytechnic University\n",
"Department of Computer Science and Technology\nTongji University\n",
"Department of Computer Science and Engineering\nSouthern University of Science and Technology\n",
"Department of Computing\nThe Hong Kong Polytechnic University\n",
"Department of Computer Science and Engineering\nSouthern University of Science and Technology\n",
"Department of Computer Science and Technology\nTongji University\n"
] | [
"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics"
] | Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. We release our code and model at https://github.com/microsoft/ SpeechT5. | 10.18653/v1/2022.acl-long.393 | [
"https://www.aclanthology.org/2022.acl-long.393.pdf"
] | 238,856,828 | 2110.07205 | 2f734203e5e4648b246adbcee0904592fad1d422 |
SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing
Junyi Ao
Department of Computer Science and Engineering
Southern University of Science and Technology
Department of Computing
The Hong Kong Polytechnic University
Rui Wang
Department of Computer Science and Technology
Tongji University
Long Zhou
Chengyi Wang
Shuo Ren
Yu Wu
Shujie Liu
Tom Ko
Department of Computer Science and Engineering
Southern University of Science and Technology
Qing Li
Department of Computing
The Hong Kong Polytechnic University
Yu Zhang
Department of Computer Science and Engineering
Southern University of Science and Technology
Zhihua Wei
Department of Computer Science and Technology
Tongji University
Yao Qian
Jinyu Li
Furu Wei
Microsoft
Peng Cheng Laboratory
SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 22-27, 2022. ©2022 Association for Computational Linguistics
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. We release our code and model at https://github.com/microsoft/SpeechT5.
Introduction
Starting with ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), substantial work has shown that pre-trained models can significantly improve performance in various natural language processing (NLP) tasks (Radford et al., 2019; Conneau and Lample, 2019; Lewis et al., 2020). Following the pre-training techniques in NLP, self-supervised speech representation learning has also been investigated and has shown promising results, benefiting from richly learned representations (Chung and Glass, 2018; Chuang et al., 2020; Song et al., 2019; Baevski et al., 2020; Hsu et al., 2021; Chung et al., 2021a), such as wav2vec 2.0 (Baevski et al., 2020) and HuBERT (Hsu et al., 2021).

Figure 1: An illustration of the SpeechT5 framework, which treats spoken language processing tasks as a speech/text to speech/text format, including automatic speech recognition (ASR), speech translation (ST), speaker identification (SID), text to speech (TTS), voice conversion (VC), and speech enhancement (SE).
However, previous speech pre-training work suffers from two problems: (1) most of them learn the speech representation with only unlabeled speech data but ignore the importance of textual data to spoken language tasks (e.g., automatic speech recognition) which require the modality transformation; (2) most of these models solely rely on a pre-trained speech encoder for various downstream tasks, leaving the decoder not pre-trained for the sequence-to-sequence generation tasks. How to design a unified encoder-decoder model that can take advantage of both unlabeled speech and text data to improve various spoken language processing tasks is not well explored.
Inspired by the T5 method (Raffel et al., 2020), we attempt to formulate each spoken language processing task as a speech/text to speech/text problem via an encoder-decoder framework, which enables us to use the same pre-trained model with bimodal data across diverse tasks, as shown in Figure 1. To achieve this, we propose a unified-modal pre-training framework, SpeechT5, containing an encoder-decoder backbone network and modal-specific pre/post-nets. With the pre-nets, the input speech/text is embedded in a shared space, and the encoder-decoder backbone network models the sequence-to-sequence conversion, from which the model-specific post-nets generate the speech/text output. Particularly, SpeechT5 is mainly pre-trained with a denoising sequence-to-sequence method by leveraging large-scale unlabeled text and speech corpus. To align the textual and acoustic information into a unified semantic space, the proposed SpeechT5 model (1) maps text and speech representations into a shared vector quantization space, and (2) randomly mixes up the quantized latent representations and the contextual states, which can better guide the quantizer to learn the cross-modal features.
We fine-tune SpeechT5 on a wide variety of downstream spoken language processing tasks, including automatic speech recognition (ASR), text-to-speech (TTS), speech translation (ST), voice conversion (VC), speech enhancement (SE), and speaker identification (SID). Massive experiments show that the proposed SpeechT5 model achieves a significant improvement on these spoken language processing tasks compared with the state-of-the-art baselines. Specifically, the proposed SpeechT5 outperforms wav2vec 2.0 (Baevski et al., 2020) and HuBERT (Hsu et al., 2021) with the BASE model on the ASR task and also performs better than the state-of-the-art voice Transformer network on the VC task. Besides, SpeechT5 is significantly superior to SpeechNet (Chen et al., 2021b) and pre-trained models from SUPERB (Yang et al., 2021) and achieves the state-of-the-art performance (i.e., 96.49%) on the SID task. We further provide an empirical comparison of the pre-training tasks and modules, and the ablation study demonstrates the effectiveness of the proposed joint speech-text pre-training method.
The contributions of this paper are summarized as follows.
• To the best of our knowledge, this is the first work to investigate a unified encoder-decoder framework for various spoken language processing tasks.
• We propose a cross-modal vector quantization approach, which learns the implicit alignment between acoustic and textual representation with large-scale unlabeled speech and text data.
• Extensive experiments on spoken language processing tasks demonstrate the effectiveness and superiority of the proposed SpeechT5 model.
SpeechT5
In this section, we propose SpeechT5, a unified-modal framework for learning joint contextual representations for speech and text data via a shared encoder-decoder structure.
Model Architecture
Figure 2(a) shows the model architecture of the proposed SpeechT5 model. It consists of an encoder-decoder module and six modal-specific pre/post-nets. The pre-nets convert the input speech $X^s \in D^s$ or text $X^t \in D^t$ to a unified space of hidden representations and then feed them into the shared encoder-decoder to perform the sequence-to-sequence conversion. Finally, the post-nets generate the output in the speech or text modality, based on the decoder output.
Input/Output Representations To train a single model for a diverse set of spoken language processing tasks, we formulate them as "speech/text to speech/text" tasks, where the model is fed with speech/text as the input and generates the corresponding output in the speech/text format. Specifically, a text is split into a sequence of characters $X^t = (x^t_1, \ldots, x^t_{N^t})$ as the input and output. For the speech modality, the raw waveform $X^s = (x^s_1, \ldots, x^s_{N^s})$ is used as the input, and a sequence of log Mel-filterbank features $X^f = (x^f_1, \ldots, x^f_{N^f})$ extracted from the raw audio using the librosa tool is adopted as the target output. A vocoder (Kong et al., 2020) is leveraged to generate the final waveform from the generated features.
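As a rough illustration of this input/output setup, the sketch below loads a raw waveform and computes an 80-dimensional log Mel-filterbank target with librosa; the FFT size and hop length are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import librosa

def load_speech_io(path, sr=16000, n_mels=80, n_fft=1024, hop_length=256):
    """Return the raw waveform (encoder input) and its log Mel-filterbank (decoder target)."""
    wav, _ = librosa.load(path, sr=sr)                      # raw waveform X^s
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    log_mel = np.log(np.maximum(mel, 1e-10))                # avoid log(0)
    return wav, log_mel.T                                   # (num_frames, 80)
```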
Figure 2: (a) The model architecture of SpeechT5, which contains an encoder-decoder module and six modal-specific pre/post-nets. Most spoken language processing tasks can be learned by concatenating the encoder-decoder module and the corresponding pre-net and post-net. (b) By sharing discrete tokens across modalities, the joint pre-training approach builds bridges between speech and text. Hidden states and latent units are mixed up and used as the inputs of the cross-attention module in the decoder.

Encoder-Decoder Backbone The Transformer encoder-decoder model (Vaswani et al., 2017) is used as the backbone network of SpeechT5. Please refer to Vaswani et al. (2017) for more details. We employ the relative position embedding (Shaw et al., 2018) to help capture the relative position differences between elements in the input. Specifically, we only add the relative position embedding to the dot-product weights of the self-attention.
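A minimal sketch of adding a relative position term to the self-attention logits in the spirit of Shaw et al. (2018); the maximum relative distance and the single shared embedding table are assumptions made here for illustration, not details given by the paper.

```python
import torch

def attention_logits_with_relative_bias(q, k, rel_table, max_dist=16):
    """q, k: (L, d) queries/keys; rel_table: (2*max_dist+1, d) relative position embeddings."""
    L, d = q.shape
    logits = q @ k.t() / d ** 0.5                                # standard dot-product term
    pos = torch.arange(L)
    rel = (pos[None, :] - pos[:, None]).clamp(-max_dist, max_dist) + max_dist
    r = rel_table[rel]                                           # (L, L, d) embeddings for i-j offsets
    logits = logits + torch.einsum('id,ijd->ij', q, r) / d ** 0.5  # add q_i . r_{i-j}
    return logits
```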
Speech Pre/Post-Net The convolutional feature extractor of wav2vec 2.0 (Baevski et al., 2020) serves as the speech-encoder pre-net to downsample the raw waveform $X^s$ and produce a sequence of a speech utterance $H = (h_1, \ldots, h_{N^h})$. The speech-decoder pre-net is a neural network composed of three fully connected layers with the ReLU activation, fed with the log Mel-filterbank $X^f$. To support multi-speaker TTS and VC, the speaker embedding extracted with the x-vector (Snyder et al., 2018) is concatenated with the output of the speech-decoder pre-net followed by a linear layer. The speech-decoder post-net consists of two modules. The first module uses a linear layer fed with the decoder output to predict the log Mel-filterbank $Y^f = (y^f_1, \ldots, y^f_{N^f})$, followed by five 1-dimensional convolutional layers to produce a residual to refine the predicted $Y^f$. Another linear module is added to project the decoder output to a scalar for predicting the stop token.
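The following PyTorch sketch mirrors the speech-decoder pre-net and post-net described above; it is not the released implementation, and the hidden size, kernel width, and Tanh activations in the convolutional stack are assumptions.

```python
import torch
import torch.nn as nn

class SpeechDecoderPrenet(nn.Module):
    """Three ReLU fully connected layers; the x-vector is concatenated and projected back."""
    def __init__(self, n_mels=80, hidden=256, d_model=768, spk_dim=512):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_mels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d_model), nn.ReLU())
        self.spk_proj = nn.Linear(d_model + spk_dim, d_model)

    def forward(self, mel, spk_emb):                       # mel: (B, T, 80), spk_emb: (B, spk_dim)
        h = self.fc(mel)
        spk = spk_emb.unsqueeze(1).expand(-1, h.size(1), -1)
        return self.spk_proj(torch.cat([h, spk], dim=-1))

class SpeechDecoderPostnet(nn.Module):
    """Linear Mel projection, a 5-layer 1-D conv residual refiner, and a stop-token head."""
    def __init__(self, d_model=768, n_mels=80, conv_ch=256, kernel=5):
        super().__init__()
        self.mel_out = nn.Linear(d_model, n_mels)
        self.stop_out = nn.Linear(d_model, 1)
        layers, ch = [], n_mels
        for i in range(5):
            out_ch = n_mels if i == 4 else conv_ch
            layers.append(nn.Conv1d(ch, out_ch, kernel, padding=kernel // 2))
            if i < 4:
                layers.append(nn.Tanh())
            ch = out_ch
        self.convs = nn.Sequential(*layers)

    def forward(self, dec_out):                            # dec_out: (B, T, d_model)
        mel = self.mel_out(dec_out)
        residual = self.convs(mel.transpose(1, 2)).transpose(1, 2)
        stop = self.stop_out(dec_out).squeeze(-1)
        return mel + residual, stop
```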
Text Pre/Post-Net We use shared embeddings as the text-encoder pre-net and text-decoder pre/post-nets. The pre-net transforms a token index into an embedding vector. The post-net transforms the hidden state into the probability distribution of tokens, normalized by the softmax function.
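One way to read the shared-embedding description above is as a weight-tied embedding/softmax pair; the weight tying in this sketch is an interpretation, not something the paper states explicitly.

```python
import torch.nn as nn

class TextPrePostNet(nn.Module):
    """Shared token embedding used as the text pre-net; the post-net reuses the same weights."""
    def __init__(self, vocab_size, d_model=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)

    def prenet(self, token_ids):              # token indices -> embedding vectors
        return self.embed(token_ids)

    def postnet(self, hidden):                # hidden states -> probability distribution over tokens
        logits = hidden @ self.embed.weight.t()
        return logits.softmax(dim=-1)
```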
Pre-Training
The proposed SpeechT5 model can be pre-trained with large-scale collections of unlabeled speech and text corpus. The proposed joint pre-training method can align the textual and acoustic information into a unified semantic space.
Speech Pre-Training Leveraging unlabeled speech data D s to learn general speech representations for both classification and generation tasks, SpeechT5 is trained with two types of tasks: bidirectional masked prediction and sequence-to-sequence generation.
Following HuBERT (Hsu et al., 2021), the bidirectional masked prediction leverages a masked language model similar to BERT (Devlin et al., 2019) for the encoder, in which an acoustic unit discovery model provides the frame-level targets $Z = (z_1, \ldots, z_{N^h})$. Specifically, we apply span mask strategies to the output $H$ from the speech-encoder pre-net, where 8% of timesteps are randomly selected as start indices, and spans of 10 steps are masked. The Transformer encoder takes the masked $H$ as the input and produces hidden representations $U = (u_1, \ldots, u_{N^h})$. Based on these hidden representations, the cross-entropy loss is computed over masked timesteps as
$$\mathcal{L}^s_{mlm} = \sum_{n \in \mathcal{M}} \log p(z_n \mid \hat{H}, n) \qquad (1)$$
where $\hat{H}$ denotes the masked version of $H$, $\mathcal{M}$ denotes the set of masked timesteps, and $z_n$ denotes the frame-level target at timestep $n$ from $Z$. Furthermore, we propose to reconstruct the original speech via a sequence-to-sequence generation task, given the randomly masked input as introduced in bidirectional masked prediction. Following seq2seq TTS models (Li et al., 2019), we enforce the corresponding predicted output $Y^f$, which is generated through the speech-decoder pre-net, Transformer decoder, and speech-decoder post-net, to be close to the original $X^f$ by minimizing their $L_1$-distance as
$$\mathcal{L}^s_1 = \sum_{n=1}^{N^f} \left\lVert y^f_n - x^f_n \right\rVert_1 \qquad (2)$$
where $x^f_n$ denotes the $n$-th 80-dimensional log Mel-filterbank vector from $X^f$. Besides, we use the binary cross-entropy (BCE) loss $\mathcal{L}^s_{bce}$ for the stop token.

Text Pre-Training With unlabeled text data $D^t$, SpeechT5 is trained to reconstruct the model output $Y^t = (y^t_1, \ldots, y^t_{N^t})$ to the original text $X^t$, using the corrupted text $\hat{X}^t = (\hat{x}^t_1, \ldots, \hat{x}^t_M)$ as the input generated with a mask-based noising function. Following the text infilling approach in BART (Lewis et al., 2020), we randomly sample 30% of text spans to mask, where the span length is drawn from a Poisson distribution ($\lambda = 3.5$), and each span is replaced with a single mask token. Formally, SpeechT5, including the text-encoder pre-net, encoder-decoder model, and text-decoder pre/post-nets, is optimized to generate the original sequence with the maximum likelihood estimation as
$$\mathcal{L}^t_{mle} = \sum_{n=1}^{N^t} \log p\left(y^t_n \mid y^t_{<n}, \hat{X}^t\right) \qquad (3)$$
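To make the two corruption schemes above concrete, here is a simplified sketch of the speech span masking (8% start probability, 10-step spans) and the BART-style text infilling (roughly 30% of tokens covered by Poisson-length spans, each replaced by a single mask token). The greedy span-sampling loop is an assumption of this sketch; the released implementation may sample spans differently.

```python
import numpy as np

def mask_speech_spans(num_frames, start_prob=0.08, span=10):
    """Pick ~8% of timesteps as span starts and mask 10 consecutive frames from each."""
    mask = np.zeros(num_frames, dtype=bool)
    for s in np.nonzero(np.random.rand(num_frames) < start_prob)[0]:
        mask[s:s + span] = True
    return mask                                # True = frame is masked for the MLM loss

def infill_text(tokens, mask_token="<mask>", ratio=0.30, poisson_lambda=3.5):
    """Cover ~30% of tokens with Poisson-length spans, emitting one mask token per span."""
    budget, out, i = int(round(len(tokens) * ratio)), [], 0
    while i < len(tokens):
        if budget > 0 and np.random.rand() < ratio:
            span = min(max(1, np.random.poisson(poisson_lambda)), budget, len(tokens) - i)
            out.append(mask_token)
            i, budget = i + span, budget - span
        else:
            out.append(tokens[i])
            i += 1
    return out
```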
Joint Pre-Training The above pre-training methods can only leverage speech or text data to model acoustic or language information individually. To build a cross-modality mapping between speech and text, which is essential for tasks such as ASR and TTS, we propose a cross-modal vector quantization method to learn representations capturing the modality-invariant information. Specifically, we utilize vector quantized embeddings as a bridge to align the speech representation and text representation through a shared codebook, as shown in Figure 2(b). Inspired by VQ-VAE (Oord et al., 2017) and SemFace (Ren et al., 2021), we first use the quantizer to convert these continuous speech/text representations $u_i$ from the output of the encoder into discrete representations $c_i$ from a fixed-size codebook $C^K$, which contains $K$ learnable embeddings. Then, the nearest neighbor search is performed between the encoder output and the embedding of each latent code via the $L_2$ distance as
$$c_i = \arg\min_{j \in [K]} \left\lVert u_i - c_j \right\rVert_2 \qquad (4)$$
where $c_j$ is the $j$-th quantized vector in the codebook. Note that we do the same operation for the output of the speech and text encoders with a shared codebook. Then, we randomly replace a proportion (10%) of the contextual representations with quantized latent representations in the corresponding time steps and calculate the cross-attention upon the mixed representations, which can explicitly guide the quantizer to utilize the cross-modal information. The diversity loss is used to encourage sharing more codes by maximizing the entropy of the averaged Softmax distribution as
$$\mathcal{L}_d = \frac{1}{K} \sum_{k=1}^{K} p_k \log p_k \qquad (5)$$
where $p_k$ is the averaged probability of choosing the $k$-th code in the codebook. The final pre-training loss with unlabeled speech and text data can be formulated as
$$\mathcal{L} = \mathcal{L}^s_{mlm} + \mathcal{L}^s_1 + \mathcal{L}^s_{bce} + \mathcal{L}^t_{mle} + \gamma \mathcal{L}_d \qquad (6)$$
where γ is set to 0.1 during pre-training.
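A hedged sketch of the cross-modal quantization step (Eqs. 4-5): nearest-neighbour lookup in a shared codebook, random 10% replacement of contextual states with their quantized vectors, and the diversity term computed from a batch-averaged soft assignment. The soft assignment via a plain softmax over negative distances is an assumption of this sketch, and a single flat codebook is used instead of the two-group codebook described in the pre-training setup.

```python
import torch
import torch.nn.functional as F

def quantize_and_mix(u, codebook, mix_prob=0.10):
    """u: (B, T, d) encoder states (speech or text); codebook: (K, d) shared code embeddings."""
    B = u.size(0)
    dist = torch.cdist(u, codebook.unsqueeze(0).expand(B, -1, -1))   # (B, T, K) L2 distances
    idx = dist.argmin(dim=-1)                                        # nearest code per state (Eq. 4)
    quantized = codebook[idx]                                        # (B, T, d)

    # Randomly replace 10% of contextual states with their quantized vectors.
    use_quant = (torch.rand(u.shape[:2], device=u.device) < mix_prob).unsqueeze(-1)
    mixed = torch.where(use_quant, quantized, u)

    # Diversity term (Eq. 5) from the batch-averaged soft code assignment.
    p = F.softmax(-dist, dim=-1).reshape(-1, codebook.size(0)).mean(dim=0)   # (K,)
    l_d = (p * torch.log(p + 1e-9)).sum() / codebook.size(0)
    return mixed, idx, l_d
```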
Fine-Tuning
After pre-training, we fine-tune the encoder-decoder backbone via the loss of the downstream task. The goal is to measure the learning abilities of SpeechT5, and we study the performance on a diverse set of downstream tasks such as ASR, TTS, ST, VC, SE, and SID. All of the spoken language processing tasks that we consider can be learned by concatenating the outputs of the encoder-decoder backbone and the corresponding pre-net and post-net. Taking ASR as an example, the final model consists of the speech-encoder pre-net, encoder-decoder, text-decoder pre-net, and text-decoder post-net.
Experiments
Pre-Training Setup
All models are implemented in Fairseq (Ott et al., 2019; https://github.com/pytorch/fairseq). The encoder-decoder backbone contains 12 Transformer encoder blocks and 6 Transformer decoder blocks, where the model dimension is 768, the inner dimension (FFN) is 3,072, and the number of attention heads is 12. The above encoder setting is the same as that in wav2vec 2.0 BASE and HuBERT BASE. The speech-encoder pre-net contains 7 blocks of temporal convolutions, each of which is composed of 512 channels with strides (5, 2, 2, 2, 2, 2, 2) and kernel sizes (10, 3, 3, 3, 3, 2, 2). For the speech-decoder pre-net and post-net, we use the same setting as the pre-net and post-net in Shen et al. (2018) except that the number of channels of the post-net is 256. For the text-encoder/decoder pre/post-net, a shared embedding layer with dimension 768 is used. For the vector quantization, we use two codebooks with 100 entries for the shared codebook module, resulting in a theoretical maximum of $K = 10^4$ code entries.
For speech pre-training, we use the full 960 hours of LibriSpeech audio (Panayotov et al., 2015). For text pre-training, we use the normalized language model training text of LibriSpeech as unlabeled data, which contains 400M sentences. We optimize the model with Adam (Kingma and Ba, 2014) by warming up the learning rate for the first 8% of updates to a peak of $2 \times 10^{-4}$, which is linearly decayed for the following updates. We pre-train the proposed SpeechT5 model on 32 V100 GPUs with a batch size of around 90s samples per GPU for speech and 12k tokens per GPU for text, and set the update frequency to 2 for 500k steps.
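A small helper reproducing the schedule just described (linear warm-up over the first 8% of the 500k updates to the 2e-4 peak, then linear decay); the decay-to-zero endpoint is an assumption of this sketch.

```python
def pretrain_lr(step, total_steps=500_000, peak_lr=2e-4, warmup_frac=0.08):
    """Linear warm-up to the peak learning rate, then linear decay for the remaining updates."""
    warmup = int(total_steps * warmup_frac)
    if step < warmup:
        return peak_lr * step / max(1, warmup)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup))
```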
Evaluation on ASR
We fine-tune the ASR model with the LibriSpeech 100/960 hours data and train the language model (LM) with the LibriSpeech LM text data, which is used for shallow fusion (Gulcehre et al., 2015) during the ASR inference. Besides the cross-entropy loss for the decoder, we add an extra linear layer to calculate the connectionist temporal classification (CTC) loss on the top of the encoder (Watanabe et al., 2017), so that we can apply the joint CTC/attention decoding (Hori et al., 2017) to boost the performance. We measure the performance of ASR by the word error rate (WER). The implementation details can be found in Appendix B.1.
The results of ASR on the 100 hours set of LibriSpeech are reported in Table 1. We compare with several state-of-the-art self-supervised approaches, including DiscreteBERT (Baevski et al., 2019), wav2vec 2.0 (Baevski et al., 2020), and HuBERT (Hsu et al., 2021). Without LM fusion, the baseline outperforms wav2vec 2.0 BASE and HuBERT BASE with the help of the joint CTC/attention decoding, which shows the importance of the decoder.
Evaluation on TTS
We fine-tune the pre-trained model on the 460-hours LibriTTS clean sets (Zen et al., 2019) with the $L_1$ loss, $\mathcal{L}^s_{bce}$ loss, and attention loss (Tachibana et al., 2018). We utilize the HiFi-GAN (Kong et al., 2020) vocoder to convert the log Mel-filterbank to the raw waveform. We evaluate the Naturalness with the open-source NISQA-TTS (Mittag and Möller, 2020), the mean opinion score (MOS), and the comparison mean opinion score (CMOS) by native speakers on 200 randomly selected sentences with various lengths (no overlap with the training data) generated by different models, in which case we keep the text content consistent. More details can be found in Appendix B.2. Table 3 shows the experimental results of TTS. The proposed SpeechT5 trained without $\mathcal{L}^s_{mlm}$ is considered because the bidirectional masked prediction loss is proposed to help the encoder learn to encode the speech signal, and this variant achieves superior Naturalness, as shown in Table 13 (in Appendix D). The proposed SpeechT5 model behaves better than the baseline and achieves a performance of 2.91 Naturalness and 3.65 MOS. Furthermore, our proposed SpeechT5 obtains a gain of +0.29 in CMOS with respect to the baseline model, which suggests the proposed pre-training method significantly improves the speech generation quality.
Evaluation on ST
We evaluate the ST task on the MUST-C dataset (Di Gangi et al., 2019), including English-German (EN-DE) and English-French (EN-FR) translation tasks. We use the default training setting of speech translation in Fairseq ST (Wang et al., 2020), and we also average the last 10 checkpoints and use a beam size of 5 for decoding. Translation results are evaluated with case-sensitive BLEU (Papineni et al., 2002). Details about the dataset and the fine-tuning setting are introduced in Appendix B.3. We list the BLEU scores of ST in Table 4. The result of SpeechT5 without initializing the decoder is also reported, since we do not pre-train the decoder with German or French data, and it outperforms the strong baseline whose encoder is initialized by the HuBERT encoder. The proposed SpeechT5 further beats the SpeechT5 without initializing the decoder, and achieves a significant improvement of 1.75 and 1.54 BLEU scores over the baseline in the EN-DE and EN-FR tasks, respectively, which demonstrates the effectiveness and superiority of our method. Besides, our SpeechT5 model outperforms existing models such as Fairseq ST (Wang et al., 2020), ESPnet ST (Inaguma et al., 2020), and Adapter Tuning (Le et al., 2021), which employs adapter modules to be further specialized in each language pair from different pre-trained models.
Evaluation on VC
VC aims to convert a speaker-dependent source speech waveform into a different one while preserving linguistic information of the source speech waveform. We follow the many-to-many setting and utilize speech recordings of four speakers in the CMU Arctic (Kominek and Black, 2004), including clb, bdl, slt, and rms. For the waveform synthesis, we use the Parallel WaveGAN (Yamamoto et al., 2020), a non-autoregressive variant of the WaveNet vocoder. We employ the average of MCD (Mel-Cepstral Distortion) and WER as the metrics for the VC task. More details about the dataset and fine-tune setting are given in Appendix B.4.
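A hedged sketch of the MCD metric used for VC: DTW-align the converted and reference Mel-cepstra with librosa and average the frame-wise distortion along the warping path. Excluding the 0-th cepstral coefficient and the exact constant conventions are assumptions of this sketch.

```python
import numpy as np
import librosa

def mel_cepstral_distortion(mc_conv, mc_ref):
    """mc_conv, mc_ref: (T, D) Mel-cepstra of converted and target speech (0th coeff. excluded)."""
    # DTW between the two sequences (librosa expects features with shape (D, T)).
    _, wp = librosa.sequence.dtw(X=mc_conv.T, Y=mc_ref.T, metric='euclidean')
    diffs = mc_conv[wp[:, 0]] - mc_ref[wp[:, 1]]
    frame_mcd = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diffs ** 2, axis=1))
    return float(frame_mcd.mean())
```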
We show the results of VC in Table 2, where we list the conversion from speaker bdl to slt and from clb to slt as used in the voice Transformer network (VTN) (Huang et al., 2021). The experimental results demonstrate that the proposed SpeechT5 model achieves a significant gain over the strong baseline model. The proposed SpeechT5 model also outperforms the state-of-the-art VTN variants in terms of MCD, including VTN fine-tuned from ASR or TTS (Huang et al., 2021) and many-to-many VTN (Kameoka et al., 2021).
Evaluation on SE
SE is the task of removing background noise from a degraded speech signal and improving the intelligibility and the perceived quality of the signal. We use the WSJ0 Hipster Ambient Mixtures (WHAM!) dataset (Wichern et al., 2019) and conduct the 16 kHz max enhance-single task that recovers the signal from a mixture of only the first WSJ0 speaker and noise. We utilize HiFi-GAN to transform the log Mel-filterbank to the raw waveform. Since the input and output lengths are probably different in the encoder-decoder model, we can not evaluate it by PESQ (Rix et al., 2001) and ESTOI (Jensen and Taal, 2016), so we evaluate the negative impact on the ASR performance by WER. The implementation details of SE are in Appendix B.5.
As shown in Table 5, our strong baseline model recovers contents from the noisy speech, achieving 10.9% WER from 76.1% WER. Moreover, the proposed SpeechT5 model gets a relative 9% WER reduction compared to the strong baseline model. The results suggest that although the noisy speech with WHAM! is challenging, as summarized in Table 12 (in Appendix B.5), the proposed encoder-decoder framework can effectively suppress the noise and recover the content.
Evaluation on SID
We convert SID, a multi-class classification task of classifying each utterance for its speaker identity, to a speech to text task via the sequence-to-sequence model. Compared to the ASR task, the text embedding layer is re-initialized for SID fine-tuning (see Appendix B.6). As shown in Table 6, our baseline is superior to existing Transformer-based methods such as SpeechNet (Chen et al., 2021b) and pre-trained models from SUPERB (Yang et al., 2021). Moreover, it outperforms ResNet-based architectures such as Thin ResNet-34 (Chung et al., 2020), indicating the superiority of the encoder-decoder architecture for the SID task. SpeechT5 further improves the performance compared to the baseline and achieves the state-of-the-art performance (i.e., 96.49% accuracy), which demonstrates the effectiveness of the proposed pre-training technique.
Ablation Study
To better understand why the proposed SpeechT5 model is effective, we investigate the influence of the pre-training methods by removing each of them independently. As shown in Table 7, we can draw the following conclusions: (1) The pre-training methods, including speech pre-training, text pre-training, and the joint pre-training method, are important to SpeechT5, since without each of them, the performance of all tasks degrades significantly; (2) Speech pre-training is more critical than text pre-training for tasks that need to encode speech, and the ASR model fine-tuned from SpeechT5 without speech pre-training cannot even converge; (3) Without the joint pre-training method, the performance of the ASR model decreases, which demonstrates that the learned alignment from joint pre-training brings benefits for cross-modality tasks; (4) The masked language model learning $\mathcal{L}^s_{mlm}$ of speech data is mainly responsible for extracting acoustic features and learning better speech representation, which is beneficial to the ASR and SID tasks.
Related Work
Large-scale pre-training models such as BERT (Devlin et al., 2019), T5 (Raffel et al., 2020), wav2vec 2.0 (Baevski et al., 2020), and HuBERT (Hsu et al., 2021) have drawn much attention in the NLP and speech communities, due to their strong generalization capability and efficient usage of large-scale data (Devlin et al., 2019; Liu et al., 2019; Lewis et al., 2020; Chen et al., 2021c; Baevski et al., 2020; Lakhotia et al., 2021; Kharitonov et al., 2021; Chen et al., 2021a). However, the research efforts mentioned above gear towards single-modal learning, hence they can only be used in either text or speech modeling. Although some speech-language pre-training work (Chung et al., 2021b; Kim et al., 2021) attempts to improve spoken language understanding tasks, these methods focus only on an encoder with task-specific layers for different tasks and do not pre-train a decoder for generation tasks such as speech synthesis or text generation. Besides, a series of research works has begun to investigate joint text and speech training (Han et al., 2021; Ye et al., 2021; Tang et al., 2021a; Zheng et al., 2021; Tang et al., 2021b), but they are mainly designed for speech-to-text tasks.
The proposed SpeechT5 method is most related to T5 (Raffel et al., 2020). The core idea of the T5 model, a unified framework for a variety of text-based language problems, is to treat every text processing problem as a "text-to-text" problem. SpeechT5 is also related to Speech Chain (Tjandra et al., 2020), which leverages the ASR model and TTS model to build a closed-loop machine speech chain to train models on the concatenation of both labeled and unlabeled data, and SpeechNet (Chen et al., 2021b), which designs a universal modularized model to perform multiple speech processing tasks with multi-task learning. The differences from the above models are that (1) SpeechT5 is a shared cross-modal encoder-decoder framework, whose input and output are speech or text through multiple pre/post-nets; (2) SpeechT5 attempts to pre-train and improve the universal model with large-scale unlabeled text and speech data.
Another related work is SUPERB (Yang et al., 2021), a benchmark to examine the capability of pre-trained models such as HuBERT (Hsu et al., 2021). SUPERB focuses on investigating a simple framework to learn SUPERB tasks with a frozen and shared pre-trained encoder and lightweight prediction modules fine-tuned for each task. In contrast, the goal of SpeechT5 is to learn all spoken language processing tasks by fine-tuning a unified-modal encoder-decoder model, which is pre-trained on unlabeled speech and text corpus.
Conclusion
In this paper, we have proposed SpeechT5 as a pre-trained encoder-decoder model for various spoken language processing tasks. We convert all spoken language processing tasks into a speech/text to speech/text format and propose a novel joint pre-training method to utilize cross-modal information by leveraging the unlabeled speech and text data. The proposed unified encoder-decoder model can support generation tasks such as speech translation and voice conversion. Massive experiments show that SpeechT5 significantly outperforms all baselines in several spoken language processing tasks. In the future, we are going to pre-train the SpeechT5 with a larger model and more unlabeled data. We are also interested in extending the proposed SpeechT5 framework to address multilingual spoken language processing tasks for future work.
A Comparisons of Text Mask Strategies
We compare the performance when using the BART (Lewis et al., 2020) or T5 (Raffel et al., 2020) strategies for text masking on the ASR task, as reported in Table 10. The BART strategy achieves comparable or better performance than the T5 strategy under different inference settings.
B Implementation Details

B.1 ASR

Dataset We use the LibriSpeech corpus and fine-tune on two labeled data settings: 960 hours of transcribed LibriSpeech and the train-clean-100 subset comprising 100 hours (100 hours labeled). We train the language model with the LibriSpeech language model (LM) text data, which is used for shallow fusion (Gulcehre et al., 2015) during the ASR inference.
Fine-Tuning Details
We fine-tune the model with the CTC loss and the cross-entropy loss, where the loss weights are 0.5 for both of them. We train on 8 V100 GPUs with a batch size of up to 256k audio samples per GPU. The learning rate is warmed up for the first 10% of steps, held constant for the following 40% of steps, and decayed linearly for the rest of the steps.

Language Model and Decoding We train a character-level LM for the ASR inference. The model has the same architecture as the Transformer LM in Synnaeve et al. (2020), which is used for decoding of wav2vec 2.0 (Baevski et al., 2020) and HuBERT (Hsu et al., 2021).

Table 9: Word-level perplexities of language models on dev-clean/other sets of LibriSpeech.
The word-level perplexities of these LMs on the LibriSpeech dev-clean/other sets are shown in Table 9. The Transformer LM used for SpeechT5 gets 56.5 perplexity on the dev-clean set and 59.3 perplexity on the dev-other set, which are higher than the perplexities of the word Transformer LM in Synnaeve et al. (2020). It suggests that we may achieve better performance on the ASR task if the perplexities of our LM are similar to the LM in Synnaeve et al. (2020). During decoding, the beam size is set to 30 for all experiments. We select the model with the highest accuracy on the dev-other set for inference and apply the joint CTC/attention decoding (Hori et al., 2017) to further improve the performance. The model generates the output transcription by the beam search algorithm, which aims to maximize

$$\alpha \log P_{Dec} + (1 - \alpha) \log P_{CTC} + \beta \log P_{LM} \qquad (7)$$

where $\alpha$ and $\beta$ are weights for the log probabilities, and $P_{Dec}$, $P_{CTC}$, and $P_{LM}$ are the probabilities of the decoder, CTC, and LM, respectively. We set $\alpha$ to 0.5 and $\beta$ to 1.0 for fine-tuning experiments on the 100 hours set, and set $\alpha$ to 0.9 and $\beta$ to 0.7 for fine-tuning experiments on the 960 hours set.
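A one-line helper mirroring Eq. (7); integrating it into an actual beam search (prefix-level CTC scoring, LM state handling) is omitted here.

```python
def joint_decoding_score(log_p_dec, log_p_ctc, log_p_lm, alpha=0.5, beta=1.0):
    """Weighted combination of decoder, CTC, and LM log probabilities used to rank hypotheses."""
    return alpha * log_p_dec + (1.0 - alpha) * log_p_ctc + beta * log_p_lm

# 100h fine-tuning: alpha=0.5, beta=1.0 (defaults above); 960h fine-tuning: alpha=0.9, beta=0.7.
```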
B.2 TTS
Dataset and Evaluation Metrics We use the 460-hours LibriTTS clean sets (Zen et al., 2019), a multi-speaker corpus of read English speech from the audiobooks of the LibriVox project, as the TTS training dataset. We trim the waveform following the ESPnet recipe (Watanabe et al., 2018). The WER is evaluated by using the open-source ASR model wav2vec 2.0 CTC. The naturalness of synthetic speech is estimated by using the open-source TTS naturalness prediction model NISQA-TTS (Mittag and Möller, 2020).

Fine-Tuning Details Besides the $L_1$ loss and BCE loss, we add an additional attention loss (Tachibana et al., 2018) to speed up the model convergence. The model is trained on 8 V100 GPUs by the Adam optimizer with a batch size of 20000 tokens per GPU. We assign the learning rate based on the inverse square root with the maximum learning rate of $10^{-4}$ within 60k steps and apply 6k warm-up steps.
B.5 SE
Dataset and Evaluation Metrics We aim to recover the content of signals contaminated by various noises and reduce the negative impact on the performance of ASR systems. The 16 kHz enhance-single task of the WHAM! dataset (Wichern et al., 2019) is used as the SE dataset. It contains 20,000 training utterances, 5,000 validation utterances, and 3,000 test utterances, where the input waveform is a mixture of only the first WSJ0 speaker and noise. We trim the noisy segments without speech content. The WER is evaluated by using the open-source ASR model (https://doi.org/10.5281/zenodo.4243201) because the lengths of inputs and outputs are probably different in the encoder-decoder model. Since the lengths of noisy speech utterances are the same as the lengths of clean utterances, we measure the test set via speech quality (PESQ) (Rix et al., 2001), extended short-time objective intelligibility (ESTOI) (Jensen and Taal, 2016), and WER to quantify the difficulty of the noisy speech, as shown in Table 12. NSNet2 is the baseline model of the 2020 Deep Noise Suppression (DNS) challenge (Reddy et al., 2021) and obtains a WER of 45.8%, probably due to the mismatch between the noise intensity of the WHAM! and DNS corpora.
Fine-Tuning Details
We employ the loss function as used in the fine-tuning of the VC task. The model is trained on 8 V100 GPUs by the Adam optimizer with a batch size of 16000 tokens per GPU. We assign the learning rate based on the inverse square root with the maximum learning rate of $10^{-4}$ within 100k steps and apply 10k warm-up steps.
Fine-Tuning Details We use the cross-entropy loss and fine-tune all models on 8 V100 GPUs by the Adam optimizer with a batch size of 64 segments per GPU and inputs of 3 seconds. The learning rate is set based on one cycle of a triangular cyclical schedule between $10^{-8}$ and $5 \times 10^{-4}$ in 60k steps. We initialize the weights of the text embeddings layer because there are no overlapping text tokens between the vocabularies during the pre-training and the SID fine-tuning.
C Results for 960 Hours Set of LibriSpeech
We also fine-tune the model on the 960 hours set of LibriSpeech, as reported in Table 11. Experiments show that the proposed SpeechT5 model achieves significant improvement even without LM fusion, and it performs comparably to or even better than wav2vec 2.0 with LM fusion.
Table 2: Results of VC (speech to speech) on the CMU Arctic. The bdl, clb, and slt denote three speakers.

The proposed SpeechT5 model achieves significant improvements on all settings compared to wav2vec 2.0 BASE, HuBERT BASE and our strong baselines, demonstrating the superiority of the proposed pre-training method. Furthermore, when decoding with LM fusion, SpeechT5 obtains the lower WERs than wav2vec 2.0 BASE on all sets and achieves the state-of-the-art performance. Due to space constraints, the results of 960h fine-tuning experiments are reported in Appendix C.

Table 3: Results of TTS (text to speech) on the LibriTTS.

Table 4: Results of ST (speech to text) on the MUST-C EN-DE and EN-FR.

Table 5: Results of SE (speech to speech) on the WHAM!.

Table 6: Results of SID (speech to text) on the VoxCeleb1. The SUPERB fine-tuning freezes the encoder.

Table 7: Ablation study for the SpeechT5 model. Different variants of the SpeechT5 model, including the SpeechT5 model without speech pre-training (PT), text pre-training, joint pre-training method, or the bidirectional masked prediction loss, are evaluated on the ASR (test subsets with WER), VC (bdl to slt with MCD), and SID (test set with ACC) tasks.
Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535.

Chi Han, Mingxuan Wang, Heng Ji, and Lei Li. 2021. Learning shared semantic space for speech-to-text translation. In Proceedings of the 2021 Findings of the Association for Computational Linguistics, pages 2214-2225.

Takaaki Hori, Shinji Watanabe, and John Hershey. 2017. Joint CTC/attention decoding for end-to-end speech recognition. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 518-529.

Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451-3460.

Wen-Chin Huang, Tomoki Hayashi, Yi-Chiao Wu, Hirokazu Kameoka, and Tomoki Toda. 2021. Pretraining techniques for sequence-to-sequence voice conversion. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:745-755.

Hirofumi Inaguma, Shun Kiyono, Kevin Duh, Shigeki Karita, Nelson Yalta, Tomoki Hayashi, and Shinji Watanabe. 2020. Espnet-st: All-in-one speech translation toolkit. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 302-311.

Jesper Jensen and Cees H. Taal. 2016. An Algorithm for Predicting the Intelligibility of Speech Masked by Modulated Noise Maskers. IEEE/ACM Transactions on Audio Speech and Language Processing, 24(11):2009-2022.

Hirokazu Kameoka, Wen-Chin Huang, Kou Tanaka, Takuhiro Kaneko, Nobukatsu Hojo, and Tomoki Toda. 2021. Many-to-many voice transformer network. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:656-670.

Eugene Kharitonov, Ann Lee, Adam Polyak, Yossi Adi, Jade Copet, Kushal Lakhotia, Tu-Anh Nguyen, Morgane Rivière, Abdelrahman Mohamed, Emmanuel Dupoux, et al. 2021. Text-free prosody-aware generative spoken language modeling. arXiv preprint arXiv:2109.03264.

Minjeong Kim, Gyuwan Kim, Sang-Woo Lee, and Jung-Woo Ha. 2021. St-bert: Cross-modal language model pre-training for end-to-end spoken language understanding. In Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 7478-7482.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

John Kominek and Alan W Black. 2004. The cmu arctic speech databases. In Proceedings of the Fifth ISCA workshop on speech synthesis.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. In Proceedings of the 34th Conference on Neural Information Processing Systems, volume 33, pages 17022-17033.

Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Adelrahman Mohamed, and Emmanuel Dupoux. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336-1354.

Hang Le, Juan Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier. 2021. Lightweight adapter tuning for multilingual speech translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 817-824.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.

Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural speech synthesis with transformer network. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6706-6713.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.

Gabriel Mittag and Sebastian Möller. 2020. Deep learning based assessment of synthetic speech naturalness. In Proceedings of Interspeech 2020, pages 1748-1752.

Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. 2017. Voxceleb: A large-scale speaker identification dataset. In Proceedings of the Interspeech 2017, pages 2616-2620.

Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. In Proceedings of the 31st Conference on Neural Information Processing Systems, volume 30.
Table 8 summarizes the hyperparameters for ASR experiments of the 100 hours and 960 hours sets.

Hyperparameter          100 hours   960 hours
updates                 80k         320k
learning rate           6e-5        1.3e-4
time-step mask prob.    0.075       0.05
channel mask prob.      0.008       0.0016

Table 8: The setting of hyperparameters for ASR fine-tuning.
Fine-Tuning Details Besides the $L_1$ loss and BCE loss, we add an additional attention loss (Tachibana et al., 2018) to speed up model convergence. We train on 8 V100 GPUs in a speaker-independent manner by using the training data of the LibriTTS. The model is updated for 120k steps with a learning rate of 0.0004, while each GPU processes up to 45,000 tokens for a batch. The learning rate is warmed up for the first 10k steps and decayed in an inverse square root manner for the rest of the steps.

Mask Strategies               CTC   LM    dev-clean   dev-other   test-clean   test-other
BART (Lewis et al., 2020)     -     -     5.4         10.7        5.8          10.7
                              ✓     -     4.3         10.3        4.4          10.4
                              ✓     ✓     2.1         5.5         2.4          5.8
T5 (Raffel et al., 2020)      -     -     5.4         11.3        5.7          11.3
                              ✓     -     4.3         10.7        4.4          10.7
                              ✓     ✓     2.3         5.8         2.3          5.8

Table 10: Comparisons of mask strategies for the text pre-training under different inference settings. Models are pre-trained on the 960 hours speech data of LibriSpeech and 400M text sentences of the LibriSpeech-LM corpus, and fine-tuned on the 100 hours labeled data of LibriSpeech. CTC and LM mean the joint CTC/attention decoding (Hori et al., 2017) and language model fusion, respectively.

B.3 ST

Dataset and Evaluation Metrics We evaluate the ST task on the MUST-C dataset (Di Gangi et al., 2019), including English-German (EN-DE) and English-French (EN-FR) translation tasks. The EN-DE/EN-FR language pair consists of 408/492 hours of speech data aligned with 234K/280K translated sentences. We report the results on the EN-DE and EN-FR tst-COMMON sets (2641 and 2632 utterances). Translation results are computed with case-sensitive BLEU (Papineni et al., 2002).

Fine-Tuning Details ST translates speech signals in a language to text in another language. We use raw audio as speech inputs in our experiments. The training setting is the same as that of the S2T model in Fairseq. We set the training steps to 80K and the warm-up steps to 10K. Baseline and SpeechT5 models are trained with 8 GPUs via the Adam optimizer. We use an 8K unigram vocabulary for both EN-DE and EN-FR. Following Fairseq ST (Wang et al., 2020), we average the last 10 checkpoints and use a beam size of 5 for decoding.

B.4 VC

Dataset and Evaluation Metrics We consider the many-to-many setting for the CMU Arctic (Kominek and Black, 2004), which contains speech recordings of four speakers, such as clb (female), bdl (male), slt (female), and rms (male), who read the same 1,132 phonetically balanced English utterances. Thus, there are twelve different combinations of source and target speakers. For each speaker, the first 932, the last 100, and the rest 100 sentences of the 1,132 sentences are used for training, test, and validation as in (Huang et al., 2021), respectively. The average of MCD is estimated by using the DTW (dynamic time warping) path between the output and ground-truth Mel-cepstra. A smaller MCD indicates better performance. The WER is evaluated by using the public ASR model HuBERT LARGE, where the WER of the test set with this ASR model is comparable to that of VTN (Huang et al., 2021).
Model                                     LM        dev-clean   dev-other   test-clean   test-other
wav2vec 2.0 BASE (Baevski et al., 2020)   -         3.2         8.9         3.4          8.5
Baseline (w/o CTC)                        -         3.1         7.8         3.1          7.6
Baseline                                  -         2.8         7.6         2.8          7.4
SpeechT5 (w/o CTC)                        -         2.8         7.6         3.1          7.3
SpeechT5                                  -         2.5         7.4         2.7          7.1
wav2vec 2.0 BASE (Baevski et al., 2020)   4-gram    2.0         5.9         2.6          6.1
wav2vec 2.0 BASE (Baevski et al., 2020)   Transf.   1.8         4.7         2.1          4.8
Baseline                                  Transf.   2.0         4.5         1.9          4.5
SpeechT5                                  Transf.   1.8         4.3         1.9          4.4

Table 11: WER of ASR when training on the 960 hours labeled data of LibriSpeech.

Metric                                      WHAM!
PESQ                                        1.12
ESTOI                                       0.48
WER (NSNet2 (Sebastian and Ivan, 2020))     45.8%

Table 12: Results of noisy speech utterances on the test set in terms of PESQ, ESTOI, and WER.
B.6 SID

Dataset and Evaluation Metrics We use the official split of the VoxCeleb1 dataset (Nagrani et al., 2017) for the SID task, where the test set contains 8,251 utterances from 1,251 celebrities. The capability of identifying speakers is assessed by classifying an utterance into the ground-truth category. Specifically, the whole utterance is taken as an input to the model to determine the speaker identity.
D Results of the SpeechT5 without $\mathcal{L}^s_{mlm}$ on the TTS task

We use the automatic evaluation tool NISQA-TTS to verify the performance of the TTS results here, because it is convenient and cheap compared with MOS and CMOS, which need to be evaluated by humans. As shown in Table 13, the variant of SpeechT5 without $\mathcal{L}^s_{mlm}$ achieves better Naturalness than the full SpeechT5.

Model                        Naturalness
SpeechT5                     2.79
w/o $\mathcal{L}^s_{mlm}$    2.91

Table 13: Comparisons between SpeechT5 and its variant without using $\mathcal{L}^s_{mlm}$.
https://librosa.org/doc/latest/index.html.
The target labels are generated by clustering outputs of the 6-th Transformer layer in the first iteration HuBERT BASE model via the k-means clustering method with 500 clusters.
We conducted experiments to compare the BART (Lewis et al., 2020) and T5(Raffel et al., 2020) mask strategies, which can be found in Appendix A.
https://www.openslr.org/11
https://huggingface.co/facebook/wav2vec2-base-960h
https://github.com/gabrielmittag/NISQA
https://huggingface.co/facebook/hubert-xlarge-ls960-ft
https://catalog.ldc.upenn.edu/LDC93S6A
Acknowledgments

We thank Yanqing Liu and Sheng Zhao for their help in TTS human evaluation. We also want to thank the anonymous reviewers for insightful comments and suggestions.
References
Alexei Baevski, Michael Auli, and Abdelrahman Mohamed. 2019. Effectiveness of self-supervised pre-training for speech recognition. arXiv preprint arXiv:1911.03912.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In Proceedings of the 34th Conference on Neural Information Processing Systems, volume 33, pages 12449-12460.
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, and Furu Wei. 2021a. WavLM: Large-scale self-supervised pre-training for full stack speech processing. arXiv preprint arXiv:2110.13900.
Yi-Chen Chen, Po-Han Chi, Shu-wen Yang, Kai-Wei Chang, Jheng-hao Lin, Sung-Feng Huang, Da-Rong Liu, Chi-Liang Liu, Cheng-Kuang Lee, and Hung-yi Lee. 2021b. SpeechNet: A universal modularized model for speech processing tasks. arXiv preprint arXiv:2105.03070.
Zhehuai Chen, Yu Zhang, Andrew Rosenberg, Bhuvana Ramabhadran, Gary Wang, and Pedro Moreno. 2021c. Injecting text in self-supervised speech pre-training. arXiv preprint arXiv:2108.12226.
Yung-Sung Chuang, Chi-Liang Liu, Hung-yi Lee, and Lin-shan Lee. 2020. SpeechBERT: An audio-and-text jointly learned language model for end-to-end spoken question answering. In Proceedings of Interspeech 2020, pages 4168-4172.
Joon Son Chung, Jaesung Huh, and Seongkyu Mun. 2020. Delving into VoxCeleb: Environment invariant speaker recognition. In Proceedings of Odyssey 2020: The Speaker and Language Recognition Workshop, pages 349-356.
Yu-An Chung and James Glass. 2018. Speech2Vec: A sequence-to-sequence framework for learning word embeddings from speech. In Proceedings of Interspeech 2018, pages 811-815.
Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, and Yonghui Wu. 2021a. w2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. In Proceedings of the 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 244-250.
Yu-An Chung, Chenguang Zhu, and Michael Zeng. 2021b. SPLAT: Speech-language joint pre-training for spoken language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1897-1907.
Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Proceedings of the 33rd Conference on Neural Information Processing Systems, volume 32.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: A multilingual speech translation corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012-2017.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Proceedings of the 33rd Conference on Neural Information Processing Systems, volume 32, pages 13063-13075.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 5206-5210.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.
Yao Qian, Ximo Bianv, Yu Shi, Naoyuki Kanda, Leo Shen, Zhen Xiao, and Michael Zeng. 2021. Speech-language pre-training for end-to-end spoken language understanding. In Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 7458-7462.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Chandan K. A. Reddy, Harishchandra Dubey, Vishak Gopal, Ross Cutler, Sebastian Braun, Hannes Gamper, Robert Aichner, and Sriram Srinivasan. 2021. ICASSP 2021 deep noise suppression challenge. In Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6623-6627.
Shuo Ren, Long Zhou, Shujie Liu, Furu Wei, Ming Zhou, and Shuai Ma. 2021. SemFace: Pre-training encoder and decoder with a semantic interface for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4518-4527.
A. W. Rix, J. G. Beerends, M. P. Hollier, and A. P. Hekstra. 2001. Perceptual evaluation of speech quality (PESQ): A new method for speech quality assessment of telephone networks and codecs. In Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 2, pages 749-752.
Braun Sebastian and Tashev Ivan. 2020. Data augmentation and loss normalization for deep noise suppression. In Proceedings of Speech and Computer, pages 79-86.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468.
Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerrv-Ryan, Rif A. Saurous, Yannis Agiomvrgiannakis, and Yonghui Wu. 2018. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4779-4783.
David Snyder, Daniel Garcia-Romero, Gregory Sell, Daniel Povey, and Sanjeev Khudanpur. 2018. X-vectors: Robust DNN embeddings for speaker recognition. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 5329-5333.
Xingchen Song, Guangsen Wang, Zhiyong Wu, Yiheng Huang, Dan Su, Dong Yu, and Helen Meng. 2019. Speech-XLNet: Unsupervised acoustic model pretraining for self-attention networks. In Proceedings of Interspeech 2020, pages 3765-3769.
Gabriel Synnaeve, Qiantong Xu, Jacob Kahn, Tatiana Likhomanenko, Edouard Grave, Vineel Pratap, Anuroop Sriram, Vitaliy Liptchinsky, and Ronan Collobert. 2020. End-to-end ASR: From supervised to semi-supervised learning with modern architectures. arXiv preprint arXiv:1911.08460.
Hideyuki Tachibana, Katsuya Uenoyama, and Shunsuke Aihara. 2018. Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4784-4788.
Yun Tang, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. 2021a. Improving speech translation by understanding and learning from the auxiliary text translation task. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4252-4261.
Yun Tang, Juan Pino, Changhan Wang, Xutai Ma, and Dmitriy Genzel. 2021b. A general multi-task learning framework to leverage text data for speech to text tasks. In Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6209-6213.
Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2020. Machine speech chain. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:976-989.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems, volume 30, pages 6000-6010.
Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, and Juan Pino. 2020. Fairseq S2T: Fast speech-to-text modeling with fairseq. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations, pages 33-39.
Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, and Xuedong Huang. 2021. UniSpeech: Unified speech representation learning with labeled and unlabeled data. In Proceedings of the 2021 International Conference on Machine Learning, pages 10937-10947.
Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai. 2018. ESPnet: End-to-end speech processing toolkit. In Proceedings of Interspeech 2018, pages 2207-2211.
Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi. 2017. Hybrid CTC/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8):1240-1253.
Gordon Wichern, Joe Antognini, Michael Flynn, Licheng Richard Zhu, Emmett McQuinn, Dwight Crow, Ethan Manilow, and Jonathan Le Roux. 2019. WHAM!: Extending speech separation to noisy environments. In Proceedings of Interspeech 2019, pages 1368-1372.
Ryuichi Yamamoto, Eunwoo Song, and Jae-Min Kim. 2020. Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6199-6203.
Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, et al. 2021. SUPERB: Speech processing universal performance benchmark. arXiv preprint arXiv:2105.01051.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Proceedings of the 33rd Conference on Neural Information Processing Systems, volume 32.
Rong Ye, Mingxuan Wang, and Lei Li. 2021. End-to-end speech translation via cross-modal progressive training. In Proceedings of Interspeech 2021, pages 2267-2271.
Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 2019. LibriTTS: A corpus derived from LibriSpeech for text-to-speech. In Proceedings of Interspeech 2019, pages 1526-1530.
Renjie Zheng, Junkun Chen, Mingbo Ma, and Liang Huang. 2021. Fused acoustic and text encoding for multimodal bilingual pretraining and speech translation. In Proceedings of the 2021 International Conference on Machine Learning, pages 12736-12746.
| [
"https://github.com/microsoft/",
"https://github.com/pytorch/fairseq",
"https://github.com/gabrielmittag/NISQA"
] |
[
"A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors",
"A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors"
] | [
"Mikhail Khodak mkhodak@princeton.edu \nFacebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n\n",
"Nikunj Saunshi nsaunshi@princeton.edu \nFacebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n\n",
"Yingyu Liang yliang@cs.wisc.edu \nFacebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n\n",
"Tengyu Ma tengyuma@stanford.edu \nFacebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n\n",
"Brandon Stewart \nFacebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n\n",
"Sanjeev Arora arora@princeton.edu \nFacebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n\n"
] | [
"Facebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n",
"Facebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n",
"Facebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n",
"Facebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n",
"Facebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n",
"Facebook AI Research\nPrinceton University\nUniversity of Wisconsin-Madison\nPrinceton University\n"
] | [] | Motivations like domain adaptation, transfer learning, and feature learning have fueled interest in inducing embeddings for rare or unseen words, n-grams, synsets, and other textual features. This paper introducesà la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building such representations that is based upon recent theoretical results for GloVe-like embeddings. Our method relies mainly on a linear transformation that is efficiently learnable using pretrained word vectors and linear regression. This transform is applicable "on the fly" in the future when a new text feature or rare word is encountered, even if only a single usage example is available. We introduce a new dataset showing how theà la carte method requires fewer examples of words in context to learn high-quality embeddings and we obtain state-of-the-art results on a nonce task and some unsupervised document classification tasks. | 10.18653/v1/p18-1002 | [
"https://arxiv.org/pdf/1805.05388v1.pdf"
] | 21,669,304 | 1805.05388 | 101539071f684dfdef635fd42b6cdd8c478a4f01 |
A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
Mikhail Khodak mkhodak@princeton.edu
Facebook AI Research
Princeton University
University of Wisconsin-Madison
Princeton University
Nikunj Saunshi nsaunshi@princeton.edu
Facebook AI Research
Princeton University
University of Wisconsin-Madison
Princeton University
Yingyu Liang yliang@cs.wisc.edu
Facebook AI Research
Princeton University
University of Wisconsin-Madison
Princeton University
Tengyu Ma tengyuma@stanford.edu
Facebook AI Research
Princeton University
University of Wisconsin-Madison
Princeton University
Brandon Stewart
Facebook AI Research
Princeton University
University of Wisconsin-Madison
Princeton University
Sanjeev Arora arora@princeton.edu
Facebook AI Research
Princeton University
University of Wisconsin-Madison
Princeton University
A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
Motivations like domain adaptation, transfer learning, and feature learning have fueled interest in inducing embeddings for rare or unseen words, n-grams, synsets, and other textual features. This paper introduces à la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building such representations that is based upon recent theoretical results for GloVe-like embeddings. Our method relies mainly on a linear transformation that is efficiently learnable using pretrained word vectors and linear regression. This transform is applicable "on the fly" in the future when a new text feature or rare word is encountered, even if only a single usage example is available. We introduce a new dataset showing how the à la carte method requires fewer examples of words in context to learn high-quality embeddings and we obtain state-of-the-art results on a nonce task and some unsupervised document classification tasks.
Introduction
Distributional word embeddings, which represent the "meaning" of a word via a low-dimensional vector, have been widely applied by many natural language processing (NLP) pipelines and algorithms (Goldberg, 2016). Following the success of recent neural (Mikolov et al., 2013) and matrix-factorization (Pennington et al., 2014) methods, researchers have sought to extend the approach to other text features, from subword elements to n-grams to sentences (Bojanowski et al., 2016;Poliak et al., 2017;Kiros et al., 2015). However, the performance of both word embeddings and their extensions is known to degrade in small corpus settings or when embedding sparse, low-frequency features (Lazaridou et al., 2017). Attempts to address these issues often involve task-specific approaches (Rothe and Schütze, 2015;Iacobacci et al., 2015;Pagliardini et al., 2018) or extensively tuning existing architectures such as skip-gram (Poliak et al., 2017;Herbelot and Baroni, 2017).
For computational efficiency it is desirable that methods be able to induce embeddings for only those features (e.g. bigrams or synsets) needed by the downstream task, rather than having to pay a computational prix fixe to learn embeddings for all features occurring frequently-enough in a corpus. We propose an alternative, novel solution via à la carte embedding, a method which bootstraps existing high-quality word vectors to learn a feature representation in the same semantic space via a linear transformation of the average word embeddings in the feature's available contexts. This can be seen as a shallow extension of the distributional hypothesis (Harris, 1954), "a feature is characterized by the words in its context," rather than the computationally more-expensive "a feature is characterized by the features in its context" that has been used implicitly by past work (Rothe and Schütze, 2015;Logeswaran and Lee, 2018).
Despite its elementary formulation, we demonstrate that the à la carte method can learn faithful word embeddings from single examples and feature vectors improving performance on important downstream tasks. Furthermore, the approach is resource-efficient, needing only pretrained embeddings of common words and the text corpus used to train them, and easy to implement and compute via vector addition and linear regression. After motivating and specifying the method, we illustrate these benefits through several applications:
• Embeddings of rare words: we introduce a dataset 1 for few-shot learning of word vectors and achieve state-of-the-art results on the task of representing unseen words using only the definition (Herbelot and Baroni, 2017).
• Synset embeddings: we show how the method can be applied to learn more fine-grained lexico-semantic representations and give evidence of its usefulness for standard word-sense disambiguation tasks (Navigli et al., 2013;Moro and Navigli, 2015).
• n-gram embeddings: we build seven million n-gram embeddings from large text corpora and use them to construct document embeddings that are competitive with unsupervised deep learning approaches when evaluated on linear text classification.
Our experimental results 2 clearly demonstrate the advantages of à la carte embedding. For word embeddings, the approach is an easy way to get a good vector for a new word from its definition or a few examples in context. For feature embeddings, the method can embed anything that does not need labeling (such as a bigram) or occurs in an annotated corpus (such as a word-sense). Our document embeddings, constructed directly using à la carte n-gram vectors, compete well with recent deep neural representations; this provides further evidence that simple methods can outperform modern deep learning on many NLP benchmarks (Mu and Viswanath, 2018;Arora et al., 2018a,b;Pagliardini et al., 2018).
Related Work
Many methods have been proposed for extending word embeddings to semantic feature vectors, with the aim of using them as interpretable and structure-aware building blocks of NLP pipelines (Kiros et al., 2015;Yamada et al., 2016). Many exploit the structure and resources available for specific feature types, such as methods for sense, synsets, and lexemes (Rothe and Schütze, 2015;Iacobacci et al., 2015) that make heavy use of the graph structure of the Princeton WordNet (PWN) and similar resources (Fellbaum, 1998). By contrast, our work is more general, with incorporation of structure left as an open problem. Embeddings of n-grams are of special interest because they do not need annotation or expert knowledge and can often be effective on downstream tasks. Their computation has been studied both explicitly (Yin and Schutze, 2014;Poliak et al., 2017) and as an implicit part of models for document embeddings (Hill et al., 2016;Pagliardini et al., 2018), which we use for comparison. Supervised and multitask learning of text embeddings has also been attempted (Wang et al., 2017;Wu et al., 2017). A main motivation of our work is to learn good embeddings, of both words and features, from only one or a few examples. Efforts in this area can in many cases be split into contextual approaches (Lazaridou et al., 2017;Herbelot and Baroni, 2017) and morphological methods (Luong et al., 2013;Bojanowski et al., 2016;Pado et al., 2016). The current paper provides a more effective formulation for context-based embeddings, which are often simpler to implement, can improve with more context information, and do not require morphological annotation. Subword approaches, on the other hand, are often more compositional and flexible, and we leave the extension of our method to handle subword information to future work. Our work is also related to some methods in domain adaptation and multi-lingual correlation, such as that of Bollegala et al. (2014).
Mathematically, this work builds upon the linear algebraic understanding of modern word embeddings developed by Arora et al. (2018b) via an extension to the latent-variable embedding model of Arora et al. (2016). Although there have been several other applications of this model for natural language representation (Mu and Viswanath, 2018), ours is the first to provide a general approach for learning semantic features using corpus context.

We assume word embeddings v_w ∈ R^d that have been trained on colocation information using a standard algorithm (e.g. word2vec / GloVe). Our goal is to construct a good embedding v_f ∈ R^d of a text feature f given a set C_f of contexts it occurs in. Both f and its contexts are assumed to arise via the same process that generates the large corpus C_V. In many settings below, the number |C_f| of contexts available for a feature f of interest is much smaller than the number |C_w| of contexts that the typical word w ∈ V occurs in. This could be because the feature is rare (e.g. unseen words, n-grams) or due to limited human annotation (e.g. word senses, named entities).
A Linear Approach
A naive first approach to construct feature embeddings using context is additive, i.e. taking the average over all contexts of a feature f of the average word vector in each context:
$$v_f^{\text{additive}} = \frac{1}{|\mathcal{C}_f|} \sum_{c \in \mathcal{C}_f} \frac{1}{|c|} \sum_{w \in c} v_w \qquad (1)$$
This formulation reflects the training of commonly used embeddings, which employs additive composition to represent the context (Mikolov et al., 2013;Pennington et al., 2014). It has proved successful in the bag-of-embeddings approach to sentence representation (Wieting et al., 2016), which can compete with LSTM representations, and has also been given theoretical justification as the maximum a posteriori (MAP) context vector under a generative model related to popular embedding objectives (Arora et al., 2016). Lazaridou et al. (2017) use this approach to learn embeddings of unknown word amalgamations, or chimeras, given a few context examples. The additive approach has some limitations because the set of all word vectors is seen to share a few common directions. Simple addition amplifies the component in these directions, at the expense of less common directions that presumably carry more "signal." Stop-word removal can help to ameliorate this (Lazaridou et al., 2017;Herbelot and Baroni, 2017), but does not deal with the fact that content-words also have significant components in the same direction as these deleted words. Another mathematical framework to address this lacuna is to remove the top one or top few principal components, either from the word embeddings themselves (Mu and Viswanath, 2018) or from their summations. However, this approach is liable to either not remove enough noise or cause too much information loss without careful tuning (c.f. Figure 1).

Figure 1 (Change in Embedding Norm under Transform): Plot of the ratio of embedding norms after transformation as a function of word count. While All-but-the-Top tends to affect only very frequent words, à la carte learns to remove components even from less common words.
We now note that removing the component along the top few principal directions is tantamount to multiplying the additive composition by a fixed (but data-dependent) matrix. Thus a natural extension is to use an arbitrary linear transformation which will be learned from the data, and hence guaranteed to do at least as well as any of the above ideas. Specifically, we find the transform that can best recover existing word vectors v_w (which are presumed to be of high quality) from their additive context embeddings v_w^additive. This can be posed as the following linear regression problem:
$$v_w \approx A\, v_w^{\text{additive}} = A\, \frac{1}{|\mathcal{C}_w|} \sum_{c \in \mathcal{C}_w} \sum_{w' \in c} v_{w'} \qquad (2)$$
where A ∈ R^{d×d} is learned and we assume for simplicity that the factor 1/|c| is constant (e.g. if c has a fixed window size) and is thus subsumed by the transform. After learning the matrix, we can embed any text feature in the same semantic space as the word embeddings via the following expression:
$$v_f = A\, v_f^{\text{additive}} = A\, \frac{1}{|\mathcal{C}_f|} \sum_{c \in \mathcal{C}_f} \sum_{w \in c} v_w \qquad (3)$$
Note that A is fixed for a given corpus and set of pretrained word embeddings and so does not need to be re-computed to embed different features or feature types.
Equation (2) holds exactly in expectation for some matrix A when contexts c ∈ C are generated by sampling a context vector v_c ∈ R^d from a zero-mean Gaussian with fixed covariance and drawing |c| words using P(w|v_c) ∝ exp⟨v_c, v_w⟩. The correctness (again in expectation) of (3) under this model is a direct extension. Arora et al. (2018b) use large text corpora to verify their model assumptions, providing theoretical justification for our approach. We observe that the best linear transform A can recover vectors with mean cosine similarity as high as 0.9 or more with the embeddings used to learn it, thus also justifying the method empirically.

Algorithm 1: The basic à la carte feature embedding induction method. All contexts c consist of sequences of words drawn from the vocabulary V.
Data: vocabulary V, corpus C_V, vectors v_w ∈ R^d for all w ∈ V, feature f, corpus C_f of contexts of f
Result: feature embedding v_f ∈ R^d
1. for w ∈ V do
2.    let C_w ⊂ C_V be the subcorpus of contexts of w
3.    u_w ← (1/|C_w|) Σ_{c∈C_w} Σ_{w'∈c} v_{w'}    // compute each word's context embedding u_w
4. A ← argmin_{A∈R^{d×d}} Σ_{w∈V} ||v_w − A u_w||²    // compute context-to-feature transform A
5. u_f ← (1/|C_f|) Σ_{c∈C_f} Σ_{w∈c} v_w    // compute feature's context embedding u_f
6. v_f ← A u_f    // transform
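The following is a minimal NumPy sketch of Algorithm 1. The variable names (`contexts_of`, `vecs`) are illustrative rather than taken from the authors' released code, and the corpus-handling and word-weighting details described later are omitted.

```python
# Minimal NumPy sketch of Algorithm 1. `contexts_of` maps a word (or feature)
# to a list of contexts, each context being a list of vocabulary words; `vecs`
# maps vocabulary words to pretrained d-dimensional vectors.
import numpy as np

def context_embedding(contexts, vecs, d):
    """Average, over contexts, of the sum of word vectors in each context."""
    u = np.zeros(d)
    for c in contexts:
        u += sum(vecs[w] for w in c if w in vecs)
    return u / max(len(contexts), 1)

def learn_transform(vocab, contexts_of, vecs, d):
    """Least-squares fit of A such that v_w ~ A u_w over the vocabulary."""
    U = np.stack([context_embedding(contexts_of[w], vecs, d) for w in vocab])
    V = np.stack([vecs[w] for w in vocab])
    X, *_ = np.linalg.lstsq(U, V, rcond=None)  # solves U X ~ V row-wise
    return X.T                                  # so that v ~ A @ u

def embed_feature(feature_contexts, A, vecs, d):
    """Equation (3): apply the learned transform to the feature's context embedding."""
    return A @ context_embedding(feature_contexts, vecs, d)
```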
Practical Details
The basic à la carte method, as motivated in Section 3.1 and specified in Algorithm 1, is straightforward and parameter-free (the dimension d is assumed to have been chosen beforehand, along with the other parameters of the original word embeddings). In practice we may wish to modify the regression step in an attempt to learn a better transformation matrix A. However, the standard first approach of using ℓ2-regularized (Ridge) regression instead of simple linear regression gives little benefit, even when we have more parameters than word embeddings (i.e. when d² > |V|).
A more useful modification is to weight each point by some non-decreasing function α of each word's corpus count c w , i.e. to solve
$$A = \underset{A \in \mathbb{R}^{d \times d}}{\arg\min} \sum_{w \in V} \alpha(c_w)\, \| v_w - A u_w \|_2^2 \qquad (4)$$
where u_w is the additive context embedding. This reflects the fact that more frequent words likely have better pretrained embeddings. In settings where |V| is large we find that a hard threshold (α(c) = 1_{c ≥ τ} for some τ ≥ 1) is often useful. When we do not have many embeddings we can still give more importance to words with better embeddings via a function such as α(c) = log c, which we use in Section 5.1.
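A short sketch of the weighted regression in (4), under the assumption that scaling each row of the least-squares system by the square root of its weight is how one would implement it; the threshold and log-count choices simply mirror the text.

```python
# Count-weighted variant of the regression in (4): scaling each row by
# sqrt(alpha(c_w)) weights that word's squared error by alpha(c_w).
# A hard threshold corresponds to alpha(c) = 1 if c >= tau else 0.
import numpy as np

def learn_weighted_transform(U, V, counts, alpha=np.log):
    """U, V: (n, d) arrays of context and target embeddings; counts: length-n corpus counts."""
    w = np.sqrt(np.maximum(alpha(np.asarray(counts, dtype=float)), 0.0))
    X, *_ = np.linalg.lstsq(U * w[:, None], V * w[:, None], rcond=None)
    return X.T
```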
One-Shot and Few-Shot Learning of Word Embeddings
While we can use our method to embed any type of text feature, its simplicity and effectiveness are rooted in word-level semantics: the approach assumes pre-existing high quality word embeddings and only considers collocations of features with words rather than with other features. Thus to verify that our approach is reasonable we first check how it performs on word representation tasks, specifically those where word embeddings need to be learned from very few examples. In this section we first investigate how representation quality varies with number of occurrences, as measured by performance on a similarity task that we introduce. We then apply the à la carte method to two tasks measuring the ability to learn new or synthetic words from context, achieving strong results on the nonce task of Herbelot and Baroni (2017).
Similarity Correlation vs. Sample Size
Performance on pairwise word similarity tasks is a standard way to evaluate word embeddings, with success measured via the Spearman correlation between a human score and the cosine similarity between word vectors. An overview of widely used datasets is given by Faruqui and Dyer (2014). However, none of these datasets can be used directly to measure the effect of word frequency on embedding quality, which would help us understand the data requirements of our approach. We address this issue by introducing the Contextual Rare Words (CRW) dataset, a subset of 562 pairs from the Rare Word (RW) dataset (Luong et al., 2013) supplemented by 255 sentences (contexts) for each rare word sampled from the Westbury Wikipedia Corpus (WWC) (Shaoul and Westbury, 2010). In addition we provide a subset of the WWC from which all sentences containing these rare words have been removed. The task is to use embeddings trained on this subcorpus to induce rare word embeddings from the sampled contexts. More specifically, the CRW dataset is constructed using all pairs from the RW dataset where the rarer word occurs between 512 and 10000 times in WWC; this yields a set of 455 distinct rare words. The lower bound ensures that we have a sufficient number of rare word contexts, while the upper bound ensures that a significant fraction of the sentences from the original WWC remain in the subcorpus we provide. In CRW, the first word in every pair is the more frequent word and occurs in the subcorpus, while the second word occurs in the 255 sampled contexts but not in the subcorpus. We provide word2vec embeddings trained on all words occurring at least 100 times in the WWC subcorpus; these vectors include those assigned to the first (non-rare) words in the evaluation pairs.
Evaluation: For every rare word the method under consideration is given eight disjoint subsets containing 1, 2, 4, . . . , 128 example contexts. The method induces an embedding of the rare word for each subset, letting us track how the quality of rare word vectors changes with more examples. We report the Spearman ρ (as described above) at each sample size, averaged over 100 trials obtained by shuffling each rare word's 255 contexts.
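A sketch of the scoring step just described, assuming SciPy is available; data loading and the sampling of the disjoint context subsets are omitted, and the function and variable names are illustrative.

```python
# CRW scoring: embed each rare word from a subset of its sampled contexts,
# then report the Spearman correlation between model cosine similarities
# and the human similarity ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def crw_score(pairs, human_scores, frequent_vecs, rare_vecs):
    """pairs: list of (frequent_word, rare_word); *_vecs: dicts of embeddings."""
    model_scores = [cosine(frequent_vecs[w1], rare_vecs[w2]) for w1, w2 in pairs]
    return spearmanr(model_scores, human_scores).correlation
```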
The results in Figure 2 show that our à la carte method significantly outperforms the additive baseline (1) and its variants, including stop-word removal, SIF-weighting, and top principal component removal (Mu and Viswanath, 2018). We find that combining SIF-weighting and top component removal also beats these baselines, but still does worse than our method. These experiments consolidate our intuitions from Section 3 that removing common components and frequent words is important and that learning a data-dependent transformation is an effective way to do this. However, if we train word2vec embeddings from scratch on the subcorpus together with the sampled contexts we achieve a Spearman correlation of 0.45; this gap between word2vec and our method shows that there remains room for even better approaches for few-shot learning of word embeddings.

Figure 2: Spearman correlation between cosine similarity and human scores for pairs of words in the CRW dataset given an increasing number of contexts per rare word. Our à la carte method outperforms all previous approaches, even when restricted to only eight example contexts.
Learning Embeddings of New Concepts: Nonces and Chimeras
We now evaluate our work directly on the tasks posed by Herbelot and Baroni (2017), who developed simple datasets and methods to "simulate the process by which a competent speaker encounters a new word in known contexts." The general goal will be to construct embeddings of new concepts in the same semantic space as a known embedding vocabulary using contextual information consisting of definitions or example sentences.
Nonces: We first discuss the definitional nonce dataset made by the authors themselves, which has a test-set consisting of 300 single-word concepts and their definitions. The task of learning each concept's embedding is simulated by removing or randomly re-initializing its vector and requiring the system to use the remaining embeddings and the definition to make a new vector that is close to the original. Because the embeddings were constructed using data that includes these concepts, an implicit assumption is made that including or excluding one word does not greatly affect the semantic space; this assumption is necessary in order to have a good target vector for the system to be evaluated against.

Table 1: Comparison with nonce2vec (Herbelot and Baroni, 2017) on few-shot embedding tasks. Performance on the chimeras task is measured using the Spearman correlation with human ratings. Note that the additive baseline requires removing stop-words in order to improve with more data.
Using 259,376 word2vec embeddings trained on Wikipedia as the base vectors, Herbelot and Baroni (2017) heavily modify the skip-gram algorithm to successfully learn on one definition, creating the nonce2vec system. The original skip-gram algorithm and v_w^additive are used as baselines, with performance measured as the mean reciprocal rank and median rank of the concept's original vector among the nearest neighbors of the output.
To compare directly to their approach, we use their word2vec embeddings along with contexts from the Wikipedia corpus to construct context vectors u_w for all words w apart from the 300 nonces. We then learn the à la carte transform A, weighting the data points in the regression (4) using a hard threshold of at least 1000 occurrences in Wikipedia. An embedding for each nonce can then be constructed by multiplying A by the sum over all word embeddings in the nonce's definition. As can be seen in Table 1, this approach significantly improves over both baselines and nonce2vec; the median rank of 165.5 of the original embedding among the nearest neighbors of the nonce vector is very low considering the vocabulary size is more than 250,000, and is also significantly lower than that of all previous methods.
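A sketch of this evaluation step, assuming the transform A and a dictionary of word vectors are already available; the helper name `nonce_rank` is hypothetical, and averaging reciprocal ranks over the 300 nonces would give the reported MRR.

```python
# Definitional-nonce evaluation: build the nonce vector by applying A to the
# summed word vectors of the definition, then find the rank of the held-out
# original embedding among all vocabulary vectors by cosine similarity.
import numpy as np

def nonce_rank(definition_words, nonce_word, A, vecs):
    v = A @ sum(vecs[w] for w in definition_words if w in vecs)
    v = v / (np.linalg.norm(v) + 1e-12)
    sims = {w: float(v @ u / (np.linalg.norm(u) + 1e-12)) for w, u in vecs.items()}
    ranking = sorted(sims, key=sims.get, reverse=True)
    return ranking.index(nonce_word) + 1  # 1-based rank; 1/rank contributes to MRR
```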
Chimeras: The second dataset Herbelot and Baroni (2017) consider is that of Lazaridou et al. (2017), who construct unseen concepts by combining two related words into a fake nonce word (the "chimera") and provide two, four, or six example sentences for this nonce drawn from sentences containing one of the two component words. The desired nonce embedding is then evaluated via the correlation of its cosine similarity with the embeddings of several other words, with ratings provided by human judges.
We use the same approach as in the nonce task, except that the chimera embedding is the result of summing over multiple sentences. From Table 1 we see that, while our method is consistently better than both the additive baseline and nonce2vec, removing stop-words from the additive baseline leads to stronger performance for more sentences. Since the à la carte algorithm explicitly trains the transform to match the true word embedding rather than human similarity measures, it is perhaps not surprising that our approach is much more dominant on the definitional nonce task.
Building Feature Embeddings using Large Corpora
Having witnessed its success at representing unseen words, we now apply the à la carte method to two types of feature embeddings: synset embeddings and n-gram embeddings. Using these two examples we demonstrate the flexibility and adaptability of our approach when handling different corpora, base word embeddings, and downstream applications.
Supervised Synset Embeddings for Word-Sense Disambiguation
Embeddings of synsets, or sets of cognitive synonyms, and related entities such as senses and lexemes have been widely studied, often due to the desire to account for polysemy (Rothe and Schütze, 2015;Iacobacci et al., 2015). Such representations can be evaluated in several ways, including via their use for word-sense disambiguation (WSD), the task of determining a word's sense from context. While current state-of-the-art methods often use powerful recurrent models (Raganato et al., 2017), we will instead use a simple similarity-based approach that heavily depends on the synset embedding itself and thus serves as a more useful indicator of representation quality. A major target for our simple systems is to beat the most-frequent sense (MFS) method, which returns for each word the sense that occurs most frequently in a corpus such as SemCor. This baseline is "notoriously hard-to-beat," routinely besting many systems in SemEval WSD competitions (Navigli et al., 2013).

Table 2: Application of à la carte synset embeddings to two standard WSD tasks. As all systems always return exactly one answer, performance is measured in terms of accuracy. Results due to Raganato et al. (2017), who use a bi-LSTM for this task, are given as the recent state-of-the-art result.
Raganato et al. (2017) | 66.9 | 72.4
Synset Embeddings: We use SemCor (Langone et al., 2004), a subset of the Brown Corpus (BC) (Francis and Kucera, 1979) annotated using PWN synsets. However, because the corpus is quite small we use GloVe trained on Wikipedia instead of on BC itself. The transform A is learned using context embeddings u_w computed with windows of size ten around occurrences of w in BC and weighting each word by the log of its count during the regression stage (4). Then we set the context embedding u_s of each synset s to be the average sum-of-word-embeddings representation over all sentences in SemCor containing s. Finally, we apply the à la carte transform to get the synset embedding v_s = A u_s.
Sense Disambiguation:
To determine the sense of a word w given its context c, we convert c into a vector using the à la carte transform A on the sum of its word embeddings and return the synset s of w whose embedding v_s is most similar to this vector. We try two different synset embeddings: those induced from SemCor as above and those obtained by embedding a synset using its gloss, or PWN-provided definition, in the same way as a nonce in Section 4.2. We also consider a combined approach in which we fall back on the gloss vector if the synset does not appear in SemCor and thus has no induced embedding.
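A minimal sketch of this similarity-based disambiguation step, assuming synset vectors have already been induced (from SemCor or from glosses); the function names and the candidate-synset lookup are illustrative, not the authors' released code.

```python
# Similarity-based WSD: embed the word's context with the same transform A,
# then return the candidate synset whose vector is closest in cosine similarity.
import numpy as np

def embed_context(words, A, vecs):
    return A @ sum(vecs[w] for w in words if w in vecs)

def disambiguate(word_context, candidate_synsets, synset_vecs, A, vecs):
    """candidate_synsets: synset ids for the target word; synset_vecs: dict of induced vectors."""
    v_c = embed_context(word_context, A, vecs)
    def cos(s):
        v_s = synset_vecs[s]
        return float(v_c @ v_s / (np.linalg.norm(v_c) * np.linalg.norm(v_s) + 1e-12))
    return max(candidate_synsets, key=cos)
```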
As shown in Table 2, synset embeddings induced from SemCor alone beat MFS overall, largely due to good noun results. The method improves further when combined with the gloss approach. While we do not match the state-of-the-art, our success in besting a difficult baseline using very little fine-tuning and exploiting none of the underlying graph structure suggests that the à la carte method can learn useful synset embeddings, even from relatively small data.
N-Gram Embeddings for Classification
As some of the simplest and most useful linguistic features, n-grams have long been a focus of embedding studies. Compositional approaches, such as sums and products of unigram vectors, are often used and work well on some evaluations, but are often order-insensitive or very high-dimensional (Mitchell and Lapata, 2010). Recent work by Poliak et al. (2017) works around this while staying compositional; however, as we will see, their approach does not seem to capture a bigram's meaning much better than the sum of its word vectors. n-gram embeddings have also gained interest for low-dimensional document representation schemes (Hill et al., 2016;Pagliardini et al., 2018;Arora et al., 2018a), largely due to the success of their sparse high-dimensional Bag-of-n-Grams (BonG) counterparts (Wang and Manning, 2012). This setting of document embeddings derived from n-gram features will be used for quantitative evaluation in this section.
We build n-gram embeddings using two corpora: 300-dimensional Wikipedia embeddings, which we evaluate qualitatively, and 1600-dimensional embeddings on the Amazon Product Corpus (McAuley et al., 2015), which we use for document classification. For both we use as source embeddings GloVe vectors trained on the respective corpora over words occurring at least a hundred times. Context embeddings are constructed using a window of size ten and a hard threshold at 1000 occurrences is used as the word-weighting function in the regression (4). Unlike Poliak et al. (2017), who can construct arbitrary embeddings but need to train at least two sets of vectors of dimension at least 2d to do so, and Yin and Schutze (2014), who determine which n-grams to represent via corpus counts, our à la carte approach allows us to train exactly those embeddings that we need for downstream tasks. This, combined with our method's efficiency, allows us to construct more than two million bigram embeddings and more than five million trigram embeddings, constrained only by their presence in the large source corpus.
Qualitative Evaluation: We first compare bigram embedding methods by picking some idiomatic and entity-related bigrams and examining the closest word vectors to their representations. These word-pairs are picked because we expect sophisticated feature embedding methods to encode a better vector than the sum of the two embeddings, which we use as a baseline. From Table 3 we see that embeddings based on corpora rather than composition are better able to embed these bigrams to be close to concepts that are semantically similar. On the other hand, as discussed in Section 3 and evident from these results, the additive context approach is liable to emphasize stop-word directions due to their high frequency.
Document Embedding: Our main application and quantitative evaluation of n-gram vectors is to use them to construct document embeddings. Given a length-L document D = {w_1, ..., w_L}, we define its embedding v_D as a weighted concatenation over sums of our induced n-gram embeddings, i.e.
$$v_D^\top = \left[\;\sum_{t=1}^{L} v_{w_t}^\top \;\;\cdots\;\; \frac{1}{n}\sum_{t=1}^{L-n+1} v_{(w_t,\ldots,w_{t+n-1})}^\top\;\right]$$
where $v_{(w_t,\ldots,w_{t+n-1})}$ is the embedding of the n-gram $(w_t, \ldots, w_{t+n-1})$. Following Arora et al. (2018a), we weight each n-gram component by 1/n to reflect the fact that higher-order n-grams have lower-quality embeddings because they occur less often in the source corpus. While we concatenate across unigram, bigram, and trigram embeddings to construct our text representations, separate experiments show that simply adding up the vectors of all features also yields a smaller but still substantial improvement over the unigram performance. The higher embedding dimension due to concatenation is in line with previous methods and can also be theoretically supported as yielding a less lossy compression of the n-gram information (Arora et al., 2018a).
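The construction above is simple enough to sketch directly. The following is an illustrative rendering, assuming per-order lookup tables ngram_vectors[n] mapping n-gram tuples to their à la carte vectors; missing n-grams are simply skipped.

```python
import numpy as np

def embed_document(tokens, ngram_vectors, max_n=3, dim=1600):
    """Concatenate, over n = 1..max_n, the (1/n)-weighted sums of a la carte n-gram vectors."""
    parts = []
    for n in range(1, max_n + 1):
        total = np.zeros(dim)
        for i in range(len(tokens) - n + 1):
            vec = ngram_vectors[n].get(tuple(tokens[i:i + n]))
            if vec is not None:
                total += vec
        parts.append(total / n)          # down-weight higher-order n-grams
    return np.concatenate(parts)         # dimension = max_n * dim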
In Table 4 we display the results of running cross-validated, ℓ2-regularized logistic regression on documents from MR movie reviews (Pang and Lee, 2005), CR customer reviews (Hu and Liu, 2004), the SUBJ subjectivity dataset (Pang and Lee, 2004), the MPQA opinion polarity subtask (Wiebe et al., 2005), TREC question classification (Li and Roth, 2002), SST sentiment classification (binary and fine-grained) (Socher et al., 2013), and IMDB movie reviews (Maas et al., 2011). The first four are evaluated using tenfold cross-validation, while the others have train-test splits.
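A hedged sketch of this evaluation protocol using scikit-learn is shown below; it is not the exact script used for the reported numbers, and the function names and regularization grid are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score

def evaluate(doc_vectors, labels, test_vectors=None, test_labels=None):
    """Cross-validated, l2-regularized logistic regression on document embeddings.
    Uses ten-fold CV when no test split is given, train/test accuracy otherwise."""
    clf = LogisticRegressionCV(Cs=10, cv=5, max_iter=1000)
    if test_vectors is None:
        return float(np.mean(cross_val_score(clf, doc_vectors, labels, cv=10)))
    clf.fit(doc_vectors, labels)
    return clf.score(test_vectors, test_labels)
```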
Despite the simplicity of our embeddings (a concatenation over sums of à la carte n-gram vectors), we find that our results are very competitive with many recent unsupervised methods, achieving the best word-level results on two of the tested datasets. The fact that we do especially well on the sentiment tasks indicates strong exploitation of the Amazon review corpus, which was also used by DisC, CNN-LSTM, and byte mLSTM. At the same time, the fact that our results are comparable to neural approaches indicates that local word-order may contain much of the information needed to do well on these tasks. On the other hand, separate experiments do not show a substantial improvement from our approach over unigram methods such as SIF on sentence similarity tasks such as STS (Cer et al., 2017). This could reflect either noise in the n-gram embeddings themselves or the comparatively lower importance of local word-order for textual similarity compared to classification.
Conclusion
We have introduced à la carte embedding, a simple method for representing semantic features using unsupervised context information. A natural and principled integration of recent ideas for composing word vectors, the approach achieves strong performance on several tasks and promises to be useful in many linguistic settings and to yield many further research directions. Of particular interest is the replacement of simple window contexts by other structures, such as dependency parses, that could yield results in domains such as question answering or semantic role labeling. Extensions of the mathematical formulation, such as the use of word weighting when building context vectors as in Arora et al. (2018b) or of spectral information along the lines of Mu and Viswanath (2018), are also worthy of further study.
More practically, the Contextual Rare Words (CRW) dataset we provide will support research on few-shot learning of word embeddings. Both in this area and for n-grams there is great scope for combining our approach with compositional approaches (Bojanowski et al., 2016;Poliak et al., 2017) that can handle settings such as zero-shot learning. More work is needed to understand the usefulness of our method for representing (potentially cross-lingual) entities such as synsets, whose embeddings have found use in enhancing WordNet and related knowledge bases (Camacho-Collados et al., 2016;Khodak et al., 2017). Finally, there remain many language features, such as named entities and morphological forms, whose representation by our method remains unexplored.
Table 3: Closest word embeddings (measured via cosine similarity) to the embeddings of four idiomatic or entity-associated bigrams. From these examples we see that purely compositional methods may struggle to construct context-aware bigram embeddings, even when the features are present in the corpus. On the other hand, adding up corpus contexts (1) is dominated by stop-word information. Sent2Vec is successful on half the examples, reflecting its focus on good sentence, not bigram, embeddings.
Table 4: Performance of document embeddings built using à la carte n-gram vectors and recent unsupervised word-level approaches on classification tasks, with the character LSTM of Radford et al. (2017) shown for comparison. Top three results are bolded and the best word-level performance is underlined.

Representation   | n   | d*        | MR   | CR   | SUBJ | MPQA | TREC | SST (±1) | SST  | IMDB
BonG             | 1   | V1        | 77.1 | 77.0 | 91.0 | 85.1 | 86.8 | 80.7     | 36.8 | 88.3
BonG             | 2   | V1+V2     | 77.8 | 78.1 | 91.8 | 85.8 | 90.0 | 80.9     | 39.0 | 90.0
BonG             | 3   | V1+V2+V3  | 77.8 | 78.3 | 91.4 | 85.6 | 89.8 | 80.1     | 42.3 | 89.8
à la carte       | 1   | 1600      | 79.8 | 81.3 | 92.6 | 87.4 | 85.6 | 84.1     | 46.7 | 89.0
à la carte       | 2   | 3200      | 81.3 | 83.7 | 93.5 | 87.6 | 89.0 | 85.8     | 47.8 | 90.3
à la carte       | 3   | 4800      | 81.8 | 84.3 | 93.8 | 87.6 | 89.0 | 86.7     | 48.1 | 90.9
Sent2Vec (1)     | 1-2 | 700       | 76.3 | 79.1 | 91.2 | 87.2 | 85.8 | 80.2     | 31.0 | 85.5
DisC (2)         | 2-3 | 3200-4800 | 80.1 | 81.5 | 92.6 | 87.9 | 90.0 | 85.5     | 46.7 | 89.6
skip-thoughts (3)| -   | 4800      | 80.3 | 83.8 | 94.2 | 88.9 | 93.0 | 85.1     | 45.8 | -
SDAE (4)         | -   | 2400      | 74.6 | 78.0 | 90.8 | 86.9 | 78.4 | -        | -    | -
CNN-LSTM (5)     | -   | 4800      | 77.8 | 82.0 | 93.6 | 89.4 | 92.6 | -        | -    | -
MC-QT (6)        | -   | 4800      | 82.4 | 86.0 | 94.8 | 90.2 | 92.4 | 87.6     | -    | -
byte mLSTM (7)   | -   | 4096      | 86.8 | 90.6 | 94.7 | 88.8 | 90.4 | 91.7     | 54.6 | 92.2

* Vocabulary sizes (i.e. BonG dimensions) vary by task; usually 10K-100K.
(1, 3, 7) (Pagliardini et al., 2018; Kiros et al., 2015; Radford et al., 2017): evaluation conducted using the latest pretrained models. Note that the latest available skip-thoughts implementation returns an error on the IMDB task.
(2, 4, 5, 6) (Arora et al., 2018a; Hill et al., 2016; Gan et al., 2017; Logeswaran and Lee, 2018): best results from publication.
Dataset: nlp.cs.princeton.edu/CRW. Code: www.github.com/NLPrinceton/ALaCarte
Method Specification: We begin by assuming a large text corpus C_V consisting of contexts c of words w in a vocabulary V, with the contexts themselves being sequences of words in V (e.g. a fixed-size window around the word or feature). We further assume that we have trained word embeddings v_w ∈ R^d on this collo-
Acknowledgments
We thank Karthik Narasimhan and our three anonymous reviewers for helpful suggestions. The work in this paper was in part supported by SRC JUMP, Mozilla Research, NSF grants CCF-1302518 and CCF-1527371, a Simons Investigator Award, a Simons Collaboration Grant, and ONR-N00014-16-1-2329.
Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In Proc. EACL.
Sanjeev Arora, Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2018a. A compressed sensing view of unsupervised text embeddings, bag-of-n-grams, and LSTMs. In Proc. ICLR.
Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. TACL.
Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018b. Linear algebraic structure of word senses, with applications to polysemy. TACL.
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proc. ICLR.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. ArXiv.
Danushka Bollegala, David Weir, and John Carroll. 2014. Learning to predict distributions of words across domains. In Proc. ACL.
José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. AI.
Daniel Cer, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and cross-lingual focused evaluation. In Proc. SemEval.
Manaal Faruqui and Chris Dyer. 2014. Community evaluation and exchange of word vectors at wordvectors.org. In Proc. ACL: System Demonstrations.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.
W. Nelson Francis and Henry Kucera. 1979. Brown Corpus Manual. Brown University.
Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2017. Learning generic sentence representations using convolutional neural networks. In Proc. EMNLP.
Yoav Goldberg. 2016. A primer on neural network models for natural language processing. JAIR.
Zellig Harris. 1954. Distributional structure. Word, 10:146-162.
Aurélie Herbelot and Marco Baroni. 2017. High-risk learning: Acquiring new word vectors from tiny data. In Proc. EMNLP.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proc. NAACL.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proc. KDD.
Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: Learning sense embeddings for word and relational similarity. In Proc. ACL-IJCNLP.
Mikhail Khodak, Andrej Risteski, Christiane Fellbaum, and Sanjeev Arora. 2017. Automated wordnet construction using word embeddings. In Proc. Workshop on Sense, Concept and Entity Representations and their Applications.
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In Adv. NIPS.
Helen Langone, Benjamin R. Haskell, and George A. Miller. 2004. Annotating WordNet. In Proc. Workshop on Frontiers in Corpus Annotation.
Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2017. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science.
Xin Li and Dan Roth. 2002. Learning question classifiers. In Proc. COLING.
Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In Proc. ICLR.
Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proc. CoNLL.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proc. ACL-HLT.
Julian McAuley, Rahul Pandey, and Jure Leskovec. 2015. Inferring networks of substitutable and complementary products. In Proc. KDD.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Adv. NIPS.
Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science.
Andrea Moro and Roberto Navigli. 2015. SemEval-2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proc. SemEval.
Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective post-processing for word representations. In Proc. ICLR.
Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In Proc. SemEval.
Sebastian Pado, Aurelie Herbelot, Max Kisselew, and Jan Snajder. 2016. Predictability of distributional semantics in derivational word formation. In Proc. COLING.
Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proc. NAACL.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proc. ACL.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proc. ACL.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP.
Adam Poliak, Pushpendre Rastogi, M. Patrick Martin, and Benjamin Van Durme. 2017. Efficient, compositional, order-sensitive n-gram embeddings. In Proc. EACL.
Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. ArXiv.
Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017. Neural sequence learning models for word sense disambiguation. In Proc. EMNLP.
Sascha Rothe and Hinrich Schütze. 2015. AutoExtend: Extending word embeddings to embeddings for synsets and lexemes. In Proc. ACL-IJCNLP.
Cyrus Shaoul and Chris Westbury. 2010. The Westbury Lab Wikipedia corpus.
Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP.
Dingquan Wang, Nanyun Peng, and Kevin Duh. 2017. A multi-task learning approach to adapting bilingual word embeddings for cross-lingual named entity recognition. In Proc. IJCNLP.
Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proc. ACL.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. In Proc. LREC.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In Proc. ICLR.
Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. 2017. StarSpace: Embed all the things! ArXiv.
Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proc. CoNLL.
Wenpeng Yin and Hinrich Schutze. 2014. An exploration of embeddings for generalized phrases. In Proc. ACL 2014 Student Research Workshop.
| [] |
[
"Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time",
"Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time"
] | [
"Pankaj Gupta pankaj.gupta@siemens.com \nCorporate Technology, Machine-Intelligence (MIC-DE)\nSiemens AG Munich\nGermany\n\nCIS\nUniversity of Munich (LMU) Munich\nGermany\n",
"Subburam Rajaram subburam.rajaram@siemens.com \nCorporate Technology, Machine-Intelligence (MIC-DE)\nSiemens AG Munich\nGermany\n",
"Hinrich Schütze \nCIS\nUniversity of Munich (LMU) Munich\nGermany\n",
"Bernt Andrassy bernt.andrassy@siemens.com \nCorporate Technology, Machine-Intelligence (MIC-DE)\nSiemens AG Munich\nGermany\n"
] | [
"Corporate Technology, Machine-Intelligence (MIC-DE)\nSiemens AG Munich\nGermany",
"CIS\nUniversity of Munich (LMU) Munich\nGermany",
"Corporate Technology, Machine-Intelligence (MIC-DE)\nSiemens AG Munich\nGermany",
"CIS\nUniversity of Munich (LMU) Munich\nGermany",
"Corporate Technology, Machine-Intelligence (MIC-DE)\nSiemens AG Munich\nGermany"
] | [
"Proceedings of NAACL-HLT 2018"
] | Dynamic topic modeling facilitates the identification of topical trends over time in temporal collections of unstructured documents. We introduce a novel unsupervised neural dynamic topic model named as Recurrent Neural Network-Replicated Softmax Model (RNN-RSM), where the discovered topics at each time influence the topic discovery in the subsequent time steps. We account for the temporal ordering of documents by explicitly modeling a joint distribution of latent topical dependencies over time, using distributional estimators with temporal recurrent connections. Applying RNN-RSM to 19 years of articles on NLP research, we demonstrate that compared to state-of-the art topic models, RNN-RSM shows better generalization, topic interpretation, evolution and trends. We also introduce a metric (named as SPAN) to quantify the capability of dynamic topic model to capture word evolution in topics over time. | 10.18653/v1/n18-1098 | [
"https://www.aclweb.org/anthology/N18-1098.pdf"
] | 13,905,238 | 1711.05626 | 5f4743aba8f02bf8bdcc8c263332b1d5d1c9a616 |
Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time
Association for Computational Linguistics. Copyright Association for Computational Linguistics. June 1-6, 2018.
Pankaj Gupta pankaj.gupta@siemens.com
Corporate Technology, Machine-Intelligence (MIC-DE)
Siemens AG Munich
Germany
CIS
University of Munich (LMU) Munich
Germany
Subburam Rajaram subburam.rajaram@siemens.com
Corporate Technology, Machine-Intelligence (MIC-DE)
Siemens AG Munich
Germany
Hinrich Schütze
CIS
University of Munich (LMU) Munich
Germany
Bernt Andrassy bernt.andrassy@siemens.com
Corporate Technology, Machine-Intelligence (MIC-DE)
Siemens AG Munich
Germany
Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time
Proceedings of NAACL-HLT 2018
NAACL-HLT 2018, New Orleans, Louisiana. Association for Computational Linguistics. June 1-6, 2018.
Dynamic topic modeling facilitates the identification of topical trends over time in temporal collections of unstructured documents. We introduce a novel unsupervised neural dynamic topic model named as Recurrent Neural Network-Replicated Softmax Model (RNN-RSM), where the discovered topics at each time influence the topic discovery in the subsequent time steps. We account for the temporal ordering of documents by explicitly modeling a joint distribution of latent topical dependencies over time, using distributional estimators with temporal recurrent connections. Applying RNN-RSM to 19 years of articles on NLP research, we demonstrate that compared to state-of-the art topic models, RNN-RSM shows better generalization, topic interpretation, evolution and trends. We also introduce a metric (named as SPAN) to quantify the capability of dynamic topic model to capture word evolution in topics over time.
Introduction
Topic Detection and Tracking (Allan et al., 1998) is an important area of natural language processing to find topically related ideas that evolve over time in a sequence of text collections and exhibit temporal relationships. The temporal aspects of these collections can present valuable insight into the topical structure of the collections and can be quantified by modeling the dynamics of the underlying topics discovered over time.
Problem Statement: We aim to generate temporal topical trends or automatic overview timelines of topics for a time-sequence collection of documents. This involves the following three tasks in dynamic topic analysis: (1) Topic Structure Detection (TSD): identifying the main topics in the document collection. (2) Topic Evolution Detection (TED): detecting the emergence of a new topic and recognizing how it grows or decays over time (Allan, 2002).
(3) Temporal Topic Characterization (TTC): Identifying the characteristics for each of the main topics in order to track the words' usage (keyword trends) for a topic over time i.e. topical trend analysis for word evolution (Fig 1, Left).
Probabilistic static topic models, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its variants (Wang and McCallum, 2006;Hall et al., 2008;Gollapalli and Li, 2015) have been investigated to examine the emergence of topics from historical documents. Another variant known as Replicated Softmax (RSM) (Hinton and Salakhutdinov, 2009) has demonstrated better generalization in log-probability and retrieval, compared to LDA. Prior works (Iwata et al., 2010;Pruteanu-Malinici et al., 2010;Saha and Sindhwani, 2012;Schein et al., 2016) have investigated Bayesian modeling of topics in time-stamped documents. Particularly, Blei and Lafferty (2006) developed a LDA based dynamic topic model (DTM) to capture the evolution of topics in a time sequence collection of documents; however they do not capture explicitly the topic popularity and usage of specific terms over time. We propose a family of probabilistic time series models with distributional estimators to explicitly model the dynamics of the underlying topics, introducing temporal latent topic dependencies (Fig 1, Right).
To model temporal dependencies in high-dimensional sequences, such as polyphonic music, the temporal stack of RBMs (Smolensky, 1986; Hinton, 2002) has been investigated to model complex distributions. The Temporal RBM (Taylor et al., 2007; Sutskever and Hinton, 2007), Recurrent Temporal RBM (RTRBM) (Sutskever et al., 2009) and RNN-RBM (Boulanger-Lewandowski et al., 2012) show success in modeling the temporal dependencies in such symbolic sequences. In addition, RNNs (Gupta et al., 2015a; Vu et al., 2016a,b; Gupta et al., 2016) have been recognized for sentence modeling in natural language tasks. We aspire to build a neural dynamic topic model, called RNN-RSM, to model document collections over time and learn temporal topic correlations.
We consider RSM for TSD and introduce the explicit latent topical dependencies for TED and TTC tasks. Fig 1 illustrates our motivation, where temporal ordering in document collection V (t) at each time step t, is modeled by conditioning the latent topic h (t) on the sequence history of latent topics h (0) , ..., h (t−1) , accumulated with temporal lag. Each RSM discovers latent topics, where the introduction of a bias term in each RSM via the time-feedback latent topic dependencies enables to explicitly model topic evolution and specific topic term usage over time. The temporal connections and RSM biases allow to convey topical information and model relation among the words, in order to deeply analyze the dynamics of the underlying topics. We demonstrate the applicability of proposed RNN-RSM by analyzing 19 years of scientific articles from NLP research.
The contributions in this work are:
(1) Introduce an unsupervised neural dynamic topic model based on recurrent neural network and RSMs, named as RNN-RSM to explicitly model discovered latent topics (evolution) and word relations (topic characterization) over time.
(2) Demonstrate better generalization (logprobability and time stamp prediction), topic interpretation (coherence), evolution and characterization, compared to the state-of-the-art.
(3) It is the first work in dynamic topic modeling using undirected stochastic graphical models and deterministic recurrent neural network to model collections of different-sized documents over time, within the generative and neural network framework. The code and data are available at https://github.com/pgcool/RNN-RSM.
The RNN-RSM Model
RSM (Fig 2, Left) models are a family of different-sized Restricted Boltzmann Machines (RBMs) (Gehler et al., 2006; Xing et al., 2005; Gupta et al., 2015b,c) that model word counts by sharing the same parameters with a multinomial distribution over the observables, i.e. an RSM can be interpreted as a single multinomial unit (Fig 2, Middle) sampled as many times as the document size. This facilitates dealing with documents of different lengths.
The proposed RNN-RSM model (Fig 2, Right) is a sequence of conditional RSMs such that at any time step t, the RSM's bias parameters $b_v^{(t)}$ and $b_h^{(t)}$ depend on the output of a deterministic RNN with hidden layer $u^{(t-1)}$ at the previous time step t-1. Similar to the RNN-RBM (Boulanger-Lewandowski et al., 2012), we constrain the RNN hidden units ($u^{(t)}$) to convey temporal information, while the RSM hidden units ($h^{(t)}$) model conditional distributions. Therefore, the parameters ($b_v^{(t)}$, $b_h^{(t)}$) depend on the sequence history at time t (via a series of conditional RSMs), denoted by $\Theta^{(t)} \equiv \{\tilde{V}^{(\tau)}, u^{(\tau)} \mid \tau < t\}$, which captures temporal dependencies. The RNN-RSM is defined by its joint probability distribution:
$$P(\tilde{V}, H) = P(\{\tilde{V}^{(t)}, h^{(t)}\}_{t=1}^{T}) = \prod_{t=1}^{T} P(\tilde{V}^{(t)}, h^{(t)} \mid \Theta^{(t)})$$
where $\tilde{V} = [\tilde{V}^{(1)}, \ldots, \tilde{V}^{(T)}]$ and $H = [h^{(1)}, \ldots, h^{(T)}]$. Each $h^{(t)} \in \{0,1\}^{F}$ is a binary stochastic hidden topic vector of size F, and $\tilde{V}^{(t)} = \{V_n^{(t)}\}_{n=1}^{N^{(t)}}$ is a collection of $N^{(t)}$ documents at time step t. Let $V_n^{(t)}$ be a $K \times D_n^{(t)}$ observed binary matrix of the n-th document in the collection, where $D_n^{(t)}$ is the document size and K is the dictionary size over all time steps. The conditional distribution (for each unit in the hidden or visible layer) in each RSM at time step t is given by softmax and logistic functions:
$$P(v_{n,i}^{k,(t)} = 1 \mid h_n^{(t)}) = \frac{\exp(b_{v,i}^{k,(t)} + \sum_{j=1}^{F} h_{n,j}^{(t)} W_{ij}^{k})}{\sum_{q=1}^{K} \exp(b_{v,i}^{q,(t)} + \sum_{j=1}^{F} h_{n,j}^{(t)} W_{ij}^{q})}$$
$$P(h_{n,j}^{(t)} = 1 \mid V_n^{(t)}) = \sigma\Big(b_{h,j}^{(t)} + \sum_{i=1}^{D_n^{(t)}} \sum_{k=1}^{K} v_{n,i}^{k,(t)} W_{ij}^{k}\Big)$$
where $P(v_{n,i}^{k,(t)} = 1 \mid h_n^{(t)})$ and $P(h_{n,j}^{(t)} = 1 \mid V_n^{(t)})$ are the conditional distributions for the i-th visible unit $v_{n,i}$ and the j-th hidden unit $h_{n,j}$ of the n-th document at time t. $W_{ij}^{k}$ is a symmetric interaction term between visible unit i taking value k and hidden unit j. $v_n^{k,(t)}$ is sampled $D_n^{(t)}$ times with identical weights connected to the binary hidden units, resulting in multinomial visibles, hence the name Replicated Softmax. The conditionals across layers are factorized as:
$$P(V_n^{(t)} \mid h_n^{(t)}) = \prod_{i=1}^{D_n^{(t)}} P(v_{n,i}^{(t)} \mid h_n^{(t)}); \qquad P(h_n^{(t)} \mid V_n^{(t)}) = \prod_{j} P(h_{n,j}^{(t)} \mid V_n^{(t)}).$$
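A schematic numpy rendering of the two conditionals above is given below for a single document represented by its K-dimensional count vector. The variable names mirror the notation but are otherwise illustrative, and the hidden bias is scaled by the document length as prescribed by the energy function discussed next.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def visible_given_hidden(h, W, b_v):
    """P(v = k | h) for one softmax visible unit; W has shape (K, F), b_v shape (K,)."""
    return softmax(b_v + W @ h)

def hidden_given_visible(v_counts, W, b_h, D_n):
    """P(h_j = 1 | V): v_counts has shape (K,) with word counts; bias scaled by document length D_n."""
    return sigmoid(D_n * b_h + W.T @ v_counts)
```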
Since the biases of each RSM depend on the output of the RNN at the previous time step, the estimated gradient at each RSM can be propagated backward through time (BPTT). The RSM biases and the RNN hidden state $u^{(t)}$ at each time step t are given by:
$$b_v^{(t)} = b_v + W_{uv} u^{(t-1)} \qquad b_h^{(t)} = b_h + W_{uh} u^{(t-1)} \tag{1}$$
$$u^{(t)} = \tanh\Big(b_u + W_{uu} u^{(t-1)} + W_{vu} \sum_{n=1}^{N^{(t)}} \hat{v}_n^{(t)}\Big) \tag{2}$$
Algorithm 1: Training RNN-RSM with BPTT
Input: observed visibles $\tilde{V} = \{\tilde{V}^{(0)}, \tilde{V}^{(1)}, \ldots, \tilde{V}^{(t)}, \ldots, \tilde{V}^{(T)}\}$
RNN-RSM parameters: $\theta = \{W_{uh}, W_{vh}, W_{uv}, W_{vu}, W_{uu}, b_v, b_u, b_h, b_v^{(t)}, b_h^{(t)}, u^{(0)}\}$
1: Propagate $u^{(t)}$ in the RNN portion of the graph using eq 2.
2: Compute $b_v^{(t)}$ and $b_h^{(t)}$ using eq 1.
3: Generate negatives $\tilde{V}^{(t)*}$ using k-step Gibbs sampling.
4: Estimate the gradient of the cost C w.r.t. the RSM parameters $W_{vh}$, $b_v^{(t)}$ and $b_h^{(t)}$ using eq 5.
5: Compute gradients (eq 6) w.r.t. the RNN connections ($W_{uh}$, $W_{uv}$, $W_{uu}$, $W_{vu}$, $u^{(0)}$) and biases ($b_v$, $b_h$, $b_u$).
6: Go to step 1 until the stopping criterion (early stopping or maximum iterations) is reached.
Here $W_{uv}$, $W_{uh}$ and $W_{vu}$ are the weights connecting the RNN and RSM portions (Figure 2), $b_u$ is the bias of u, and $W_{uu}$ is the weight between RNN hidden units. $\hat{v}_n^{(t)}$ is a vector of $\hat{v}_n^{k}$ (the count of the k-th word in the n-th document), and $\sum_{n=1}^{N^{(t)}} \hat{v}_n^{(t)}$ is the sum of observed count vectors across documents at time step t, where each document is represented as:
$$\hat{v}_n^{(t)} = [\{\hat{v}_n^{k,(t)}\}_{k=1}^{K}] \quad \text{and} \quad \hat{v}_n^{k,(t)} = \sum_{i=1}^{D_n^{(t)}} v_{n,i}^{k,(t)} \tag{3}$$
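The deterministic part of the model (eqs. 1-3) can be sketched compactly; the following is a minimal illustration rather than the released implementation, with illustrative shapes and names.

```python
import numpy as np

def rsm_biases(u_prev, b_v, b_h, W_uv, W_uh):
    """Eq (1): time-dependent RSM biases conditioned on the previous RNN hidden state."""
    b_v_t = b_v + W_uv @ u_prev
    b_h_t = b_h + W_uh @ u_prev
    return b_v_t, b_h_t

def rnn_state(u_prev, doc_counts_t, b_u, W_uu, W_vu):
    """Eq (2): deterministic RNN update from the summed observed count vectors at time t.
    doc_counts_t has shape (N_t, K); its rows are the per-document v-hat vectors of eq (3)."""
    v_sum = doc_counts_t.sum(axis=0)            # sum over the N_t documents in the collection
    return np.tanh(b_u + W_uu @ u_prev + W_vu @ v_sum)
```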
In eq (3), $v_{n,i}^{k,(t)} = 1$ if visible unit i takes on the k-th value. In each RSM, a separate RBM is created for each document in the collection at time step t, with $D_n^{(t)}$ softmax units, where $D_n^{(t)}$ is the count of words in the n-th document. For a document of $D_n^{(t)}$ words, the energy of the state $\{V_n^{(t)}, h_n^{(t)}\}$ at time step t is given by:
$$E(V_n^{(t)}, h_n^{(t)}) = -\sum_{j=1}^{F}\sum_{k=1}^{K} h_{n,j}^{(t)} W_j^k \hat{v}_n^{k,(t)} - \sum_{k=1}^{K} \hat{v}_n^{k,(t)} b_v^k - D_n^{(t)} \sum_{j=1}^{F} b_{h,j} h_{n,j}^{(t)}$$
Observe that the bias terms on the hidden units are scaled up by the document length, which allows the hidden units to stabilize when dealing with different-sized documents. The corresponding energy-probability relation in the energy-based model is:
$$P(V_n^{(t)}) = \frac{1}{Z_n^{(t)}} \sum_{h_n^{(t)}} \exp(-E(V_n^{(t)}, h_n^{(t)})) \tag{4}$$
where $Z_n^{(t)} = \sum_{V_n^{(t)}} \sum_{h_n^{(t)}} \exp(-E(V_n^{(t)}, h_n^{(t)}))$ is the normalization constant. The lower bound on the log-likelihood of the data takes the form:
$$\ln P(V_n^{(t)}) \geq \sum_{h^{(t)}} Q(h_n^{(t)} \mid V_n^{(t)}) \ln P(V_n^{(t)}, h_n^{(t)}) + H(Q) = \ln P(V_n^{(t)}) - KL\big[Q(h_n^{(t)} \mid V_n^{(t)}) \,\|\, P(h_n^{(t)} \mid V_n^{(t)})\big]$$
where H(·) is the entropy and Q is the approximating posterior. Similar to Deep Belief Networks (Hinton et al., 2006), where adding an extra layer improves the lower bound on the log probability of the data, we introduce the extra layer via the RSM biases that propagate the prior via the RNN connections. The dependence analogy follows:
$$E(V_n^{(t)}, h_n^{(t)}) \propto \frac{1}{b_v^{(t)}} \quad \text{and} \quad E(V_n^{(t)}, h_n^{(t)}) \propto \frac{1}{b_h^{(t)}}$$
$$\ln P(V_n^{(t)}) \propto \frac{1}{E(V_n^{(t)}, h_n^{(t)})}; \qquad \ln P(\tilde{V}_n^{(t)}) \propto \ln P(\{\tilde{V}_n^{\tau}\}_{\tau < t})$$
Observe that the prior is seen as the deterministic hidden representation of latent topics and is injected into each hidden state of the RSMs, which enables the likelihood of the data to model complex temporal densities, i.e. heteroscedasticity, in document collections ($\tilde{V}$) and temporal topics (H).

Table 1: Number of papers from ACL and EMNLP conferences over the years.
Year      | 1996 | 1997 | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | Total
ACL       | 58   | 73   | 250  | 83   | 79   | 70   | 177  | 112  | 134  | 134  | 307  | 204  | 214  | 243  | 270  | 349  | 227  | 398  | 331  | 3713
EMNLP     | 15   | 24   | 15   | 36   | 29   | 21   | 42   | 29   | 58   | 28   | 75   | 132  | 115  | 164  | 125  | 149  | 140  | 206  | 228  | 1756
ACL+EMNLP | 73   | 97   | 265  | 119  | 108  | 91   | 219  | 141  | 192  | 162  | 382  | 336  | 329  | 407  | 395  | 498  | 367  | 604  | 559  | 5469
Gradient Approximations: The cost in RNN-RSM is:
$$C = \sum_{t=1}^{T} C_t \equiv \sum_{t=1}^{T} -\ln P(\tilde{V}^{(t)})$$
Due to the intractable Z, the gradient of the cost at time step t with respect to the RSM parameters is approximated by k-step Contrastive Divergence (CD) (Hinton, 2002). The gradient of the negative log-likelihood of a document collection $\{V_n^{(t)}\}_{n=1}^{N^{(t)}}$ w.r.t. the RSM parameter $W_{vh}$ is:
$$\frac{1}{N^{(t)}}\sum_{n=1}^{N^{(t)}} \frac{\partial(-\ln P(V_n^{(t)}))}{\partial W_{vh}} = \frac{1}{N^{(t)}}\sum_{n=1}^{N^{(t)}} \frac{\partial F(V_n^{(t)})}{\partial W_{vh}} - \frac{\partial(-\ln Z_n^{(t)})}{\partial W_{vh}} = \underbrace{E_{P_{data}}\Big[\frac{\partial F(V_n^{(t)})}{\partial W_{vh}}\Big]}_{\text{data-dependent expectation}} - \underbrace{E_{P_{model}}\Big[\frac{\partial F(V_n^{(t)})}{\partial W_{vh}}\Big]}_{\text{model's expectation}} \approx \frac{1}{N^{(t)}}\sum_{n=1}^{N^{(t)}} \frac{\partial F(V_n^{(t)})}{\partial W_{vh}} - \frac{\partial F(V_n^{(t)*})}{\partial W_{vh}}$$
The second term is estimated by negative samples $V_n^{(t)*}$ obtained from a k-step Gibbs chain starting at the samples $V_n^{(t)}$. Here $P_{data}(\tilde{V}^{(t)}, h^{(t)}) = P(h^{(t)} \mid \tilde{V}^{(t)}) P_{data}(\tilde{V}^{(t)})$, and $P_{data}(\tilde{V}^{(t)}) = \frac{1}{N^{(t)}}\sum_n^{N^{(t)}} \delta(\tilde{V}^{(t)} - V_n^{(t)})$ is the empirical distribution on the observables. $P_{model}(V_n^{(t)*}, h_n^{(t)})$ is defined in eq. 4. The free energy $F(V_n^{(t)})$ is related to the normalized probability of $V_n^{(t)}$ as $P(V_n^{(t)}) \equiv \exp(-F(V_n^{(t)}))/Z_n^{(t)}$ and is given by:
$$F(V_n^{(t)}) = -\sum_{k=1}^{K} \hat{v}_n^{k,(t)} b_v^k - \sum_{j=1}^{F} \log\Big(1 + \exp\big(D_n^{(t)} b_{h,j} + \sum_{k=1}^{K} \hat{v}_n^{k,(t)} W_j^k\big)\Big)$$
The gradient approximations w.r.t. the RSM parameters are:
$$\frac{\partial C_t}{\partial b_v^{(t)}} \approx \sum_{n=1}^{N^{(t)}} \hat{v}_n^{(t)*} - \hat{v}_n^{(t)}$$
$$\frac{\partial C_t}{\partial b_h^{(t)}} \approx \sum_{n=1}^{N^{(t)}} \sigma(W_{vh}\hat{v}_n^{(t)*} - D_n^{(t)} b_h^{(t)}) - \sigma(W_{vh}\hat{v}_n^{(t)} - D_n^{(t)} b_h^{(t)})$$
$$\frac{\partial C}{\partial W_{vh}} \approx \sum_{t=1}^{T}\sum_{n=1}^{N^{(t)}} \sigma(W_{vh}\hat{v}_n^{(t)*} - D_n^{(t)} b_h^{(t)})\,\hat{v}_n^{(t)*\top} - \sigma(W_{vh}\hat{v}_n^{(t)} - D_n^{(t)} b_h^{(t)})\,\hat{v}_n^{(t)\top} \tag{5}$$
The estimated gradients w.r.t. the RSM biases are back-propagated via the hidden-to-bias parameters (eq 1) to compute gradients w.r.t. the RNN connections ($W_{uh}$, $W_{uv}$, $W_{vu}$ and $W_{uu}$) and biases ($b_h$, $b_v$ and $b_u$):
$$\frac{\partial C}{\partial W_{uh}} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial b_h^{(t)}} u^{(t-1)\top} \qquad \frac{\partial C}{\partial W_{uv}} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial b_v^{(t)}} u^{(t-1)\top}$$
$$\frac{\partial C}{\partial W_{vu}} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial u^{(t)}} u^{(t)}(1-u^{(t)}) \sum_{n=1}^{N^{(t)}} \hat{v}_n^{(t)\top}$$
$$\frac{\partial C}{\partial b_h} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial b_h^{(t)}} \qquad \frac{\partial C}{\partial b_v} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial b_v^{(t)}} \qquad \frac{\partial C}{\partial b_u} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial u^{(t)}} u^{(t)}(1-u^{(t)})$$
$$\frac{\partial C}{\partial W_{uu}} = \sum_{t=1}^{T} \frac{\partial C_t}{\partial u^{(t)}} u^{(t)}(1-u^{(t)}) u^{(t-1)\top} \tag{6}$$
For the single-layer RNN-RSM, the BPTT recurrence relation for $0 \leq t < T$ is given by:
$$\frac{\partial C_t}{\partial u^{(t)}} = W_{uu} \frac{\partial C_{t+1}}{\partial u^{(t+1)}} u^{(t+1)}(1-u^{(t+1)}) + W_{uh} \frac{\partial C_{t+1}}{\partial b_h^{(t+1)}} + W_{uv} \frac{\partial C_{t+1}}{\partial b_v^{(t+1)}}$$
where $u^{(0)}$ is a parameter and $\partial C_T / \partial u^{(T)} = 0$. See Training RNN-RSM with BPTT in Algorithm 1.
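A compact sketch of the contrastive-divergence step used in step 3-4 of Algorithm 1 is shown below for a single document with CD-1. It is a simplified illustration, not the released code; the sign and scaling conventions here are the standard RBM/RSM ones and may differ slightly from the paper's notation, and all names are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradients(v_counts, W, b_v_t, b_h_t, D_n):
    """One contrastive-divergence step for a single document (K-dim count vector v_counts).
    Returns approximate gradients for W, b_v^(t) and b_h^(t) as model term minus data term."""
    # positive phase
    h_prob = sigmoid(W.T @ v_counts + D_n * b_h_t)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
    # negative phase: resample D_n words from the softmax visibles
    p_v = np.exp(b_v_t + W @ h_sample)
    p_v /= p_v.sum()
    v_neg = rng.multinomial(int(D_n), p_v).astype(float)
    h_neg = sigmoid(W.T @ v_neg + D_n * b_h_t)
    # CD-1 estimates of the cost gradients
    dW = np.outer(v_neg, h_neg) - np.outer(v_counts, h_prob)
    db_v = v_neg - v_counts
    db_h = h_neg - h_prob
    return dW, db_v, db_h
```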
Evaluation
Dataset and Experimental Setup
We use the processed dataset (Gollapalli and Li, 2015), consisting of EMNLP and ACL conference papers from the year 1996 through 2014 (Table 1). We combine papers for each year from the two venues to prepare the document collections over time. We use ExpandRank (Wan and Xiao, 2008) to extract top 100 keyphrases for each paper, including unigrams and bigrams. We split the bigrams to unigrams to create a dictionary of all unigrams and bigrams. The dictionary size (K) and word count are 3390 and 5.19 M, respectively.
We evaluate RNN-RSM against static (RSM, LDA) and dynamic (DTM) topic models for topic and keyword evolution in NLP research over time. Nineteen separate RSM and LDA models are trained, one for each year, while DTM and RNN-RSM are trained over all years with 19 time steps, where the paper collection for a year is the input at each time step. RNN-RSM is initialized with an RSM ($W_{vh}$, $b_v$, $b_h$) trained for the year 2014.
We use perplexity to choose the number of topics (=30). See Table 2 for hyperparameters.
Generalization in Dynamic Topic Models
Perplexity: We compute the perplexity on unobserved documents ($\tilde{V}^{(t)}$) at each time step as
$$\mathrm{PPL}(\tilde{V}^{(t)}, t) = \exp\Big(-\frac{1}{N^{(t)}}\,\frac{\sum_{n=1}^{N^{(t)}} \log P(V_n^{(t)})}{\sum_{n=1}^{N^{(t)}} D_n^{(t)}}\Big)$$
where t is the time step and $N^{(t)}$ is the number of documents in the collection ($\tilde{V}^{(t)}$) at time t. Better models have lower perplexity values, suggesting less uncertainty about the documents. For held-out documents, we take 10 documents from each time step, i.e. 190 documents in total, and compute perplexity for 30 topics. Fig 3d shows the comparison of perplexity values for unobserved documents from DTM and RNN-RSM at each time step. The SumPPL (Table 3) is the sum of the PPL values for the held-out sets of each time step.
Document Time Stamp Prediction: To further assess the dynamic topic models, we split the document collections at each time step into 80-20% train-test splits, resulting in 1067 held-out documents. We predict the time stamp (dating) of a document by finding the most likely (lowest perplexity) location over the timeline. See the mean absolute error (Err) in years for the held-out documents in Table 3. Note that we do not use the time stamp as an observable during training.
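The per-time-step perplexity above (also the quantity minimized when dating a held-out document) can be sketched as follows, assuming a function log_prob that returns an approximate document log-likelihood; this is an illustrative helper, not the released evaluation script.

```python
import numpy as np

def perplexity(docs_t, log_prob):
    """PPL for one time step: docs_t is a list of count vectors, log_prob approximates log P(V_n)."""
    N = len(docs_t)
    total_loglik = sum(log_prob(d) for d in docs_t)
    total_words = sum(int(d.sum()) for d in docs_t)
    return float(np.exp(-(1.0 / N) * total_loglik / total_words))

def predict_time_stamp(doc, models_by_year):
    """Date a document by the year whose model assigns it the lowest perplexity."""
    return min(models_by_year, key=lambda year: perplexity([doc], models_by_year[year]))
```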
TSD, TED: Topic Evolution over Time
Topic Detection: To extract topics from each RSM, we compute the posterior $P(\tilde{V}^{(t)} \mid h_j = 1)$ by activating a hidden unit and deactivating the rest in a hidden layer. We extract the top 20 terms for every 30-topic set from 1996-2014, resulting in $|Q|_{max} = 19 \times 30 \times 20$ possible topic terms.
Topic Popularity: To determine topic popularity, we selected three popular topics (Sentiment Analysis, Word Vector and Dependency Parsing) in NLP research and created a set of key-terms (including unigrams and bigrams) for each topic (the topic-terms are to be released with the code). We compute the cosine similarity between the key-terms defined for each selected topic and the topics discovered by the topic models over the years. We consider the discovered topic that is most similar to the key-terms of the target topic and plot the similarity values in Figure 3a, 3b and 3c. Observe that RNN-RSM shows better topic evolution for the three emerging topics. LDA and RSM show topical locality in Figure 3c, attributed to no correlation in topic dynamics over time, while in Figure 3b, DTM does not capture the evolution of the topic Word Vector.
Topic Drift (Focus Change): To compute the topic focus change over the years, we first split the time period 1996-2014 into five parts: {1996, 2000, 2005, 2010, 2014}. The cosine similarity scores are computed between the topic sets discovered in a particular year and the years preceding it in the above set, for example between the topic-terms in (1996, 2000), (1996, 2005), (1996, 2010) and (1996, 2014), respectively. Figures 3i, 3j, 3k and 3l demonstrate that RNN-RSM shows higher convergence in topic focus over the years, compared to LDA and RSM. In RNN-RSM, the topic similarity gradually increases over time, but not in DTM. The higher similarities in the topic sets indicate that new/existing topics and words do not appear/disappear over time.
We compute the topic-term drift (TTD) to show the changing topics from the initial to the final year, as
$$TTD = 1.0 - \mathrm{cosineSimilarity}(Q^{(t)}, Q^{(t')})$$
where Q is the set of all topic-terms for time step t. Table 3 shows that TTD (where t=1996 and t'=2014) is 0.268 and 0.084 for RNN-RSM and DTM, respectively. This suggests that a higher number of new topic-terms evolved in RNN-RSM compared to DTM. Qualitatively, Table 4 shows the topics observed with the highest and lowest cosine drifts in DTM and RNN-RSM.
In Figure 3g and 3h, we also illustrate the temporal evolution (drift) in the selected topics by computing cosine similarity on their adjacent topic vectors over time. The topic vectors are selected similarly as in computing topic popularity. We observe better TED in RNN-RSM than DTM for the three emerging topics in NLP research. For instance, for the selected topic Word Vector, the red line in DTM (Fig 3h) shows no drift (for x-axis 00-05, 05-10 and 10-14), suggesting the topicterms in the adjacent years are similar and does not evolve. -RSM (1996) reordering, statistical machine, translation model, translations, arabic, word align, translation probability, word alignment, translation system, source word, ibm model, source sentence, english translation, target language, word segmentation RNN-RSM (2014) reordering, statistical machine, translation model, translations, arabic, word align, translation probability, word alignment, translation system, source word, reordering model, bleu score, smt system, english translation, target language 0.53 RNN-RSM (1996) input, inference, semantic representation, distributional models, logical forms, space model, clustering algorithm, space models, similar word, frequent word, meaning representation, lexical acquisition, new algorithm, same context, multiple words RNN-RSM (2014) input, inference, word vector, word vectors, vector representation, semantic representation, distributional models, semantic space, space model, semantic parser, vector representations, neural language, logical forms, cosine similarity, clustering algorithm
RNN
Topic Interpretability
Beyond perplexities, we also compute topic coherence (Chang et al., 2009; Newman et al., 2009) to determine how meaningful the captured topics are. We use the coherence measure proposed by Aletras and Stevenson (2013), which retrieves co-occurrence counts for the set of topic words using Wikipedia as a reference corpus to identify context features (window=5) for each topic word. Relatedness between topic words and context features is measured using normalized pointwise mutual information (NPMI), resulting in a single vector for every topic word. The coherence (COH) score is computed as the arithmetic mean of the cosine similarities between all word pairs. Higher scores imply more coherent topics. We use the Palmetto library (github.com/earthquakesan/palmetto-py) to estimate coherence. Quantitative: We compute mean and median coherence scores for each time step using the corresponding topics, as shown in Fig 3e and 3f. Table 3 shows the mean-COH and median-COH scores, computed as the mean and median of the scores from Fig 3e and 3f, respectively. Observe that RNN-RSM captures topics with higher coherence. Qualitative: Table 5 shows topics (top-10 words) with the highest and lowest coherence scores.
TTC: Trending Keywords over time
We demonstrate the capability of RNN-RSM to capture word evolution (usage) in topics over time. We define two measures: keyword-trend and SPAN. The keyword-trend is the appearance/disappearance of a keyword in the topic-terms detected over time, while SPAN is the length of the longest sequence of the keyword's appearance in its keyword trend. Let $\mathbf{Q}_{model} = \{Q^{(t)}_{model}\}_{t=1}^{T}$ be a set of sets of topic-terms discovered by the model (LDA, RSM, DTM and RNN-RSM) over the different time steps, and let $Q^{(t)} \in \mathbf{Q}_{model}$ be the topic-terms at time step t. The keyword-trend for a keyword k is a time-ordered sequence of 0s and 1s:
$$\mathrm{trend}_k(\mathbf{Q}) = [\mathrm{find}(k, Q^{(t)})]_{t=1}^{T} \quad \text{where} \quad \mathrm{find}(k, Q^{(t)}) = \begin{cases} 1 & \text{if } k \in Q^{(t)} \\ 0 & \text{otherwise} \end{cases} \tag{7}$$
The SPAN ($S_k$) for the k-th keyword is:
$$S_k(\mathbf{Q}) = \mathrm{length}\big(\mathrm{longestOnesSeq}(\mathrm{trend}_k(\mathbf{Q}))\big)$$
We compute the keyword-trend and SPAN for each term from a set of popular terms. We also define the average SPAN over all the topic-terms appearing in the topics discovered over the years:
$$\mathrm{avg\text{-}SPAN}(\mathbf{Q}) = \frac{1}{\|\mathbf{Q}\|}\sum_{\{k \mid Q^{(t)} \in \mathbf{Q} \,\wedge\, k \in Q^{(t)}\}} \frac{S_k(\mathbf{Q})}{\hat{v}^k} = \frac{1}{\|\mathbf{Q}\|}\sum_{\{k \mid Q^{(t)} \in \mathbf{Q} \,\wedge\, k \in Q^{(t)}\}} S_k^{dict}$$
where $\|\mathbf{Q}\| = |\{k \mid Q^{(t)} \in \mathbf{Q} \wedge k \in Q^{(t)}\}|$ is the count of unique topic-terms and $\hat{v}^k = \sum_{t=1}^{T}\sum_{j=1}^{D_t} \hat{v}^k_{j,t}$ denotes the count of the k-th keyword.
In Figure 4, the keyword-trends indicate the emergence (appearance/disappearance) of the selected popular terms in topics discovered in ACL and EMNLP papers over time. Observe that RNN-RSM captures longer SPANs for popular keywords and better word usage in NLP research. For example, Word Embedding is one of the top keywords and appeared locally (Figure 5) in recent years; RNN-RSM detects it in the topics from 2010 to 2014, while DTM does not. The same holds for Neural Language. In contrast, Machine Translation and Language Model appear globally in the input document collections over time and are captured in the topics by both RNN-RSM and DTM. We also show keywords (Rule-set and Seed Words) that disappeared from topics over time.
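The two definitions above translate directly into code. The following is a minimal, self-contained sketch (not the released implementation) in which topic_sets is a list of per-time-step sets of topic-terms.

```python
def keyword_trend(keyword, topic_sets):
    """trend_k(Q): 1 if the keyword appears in the topic-terms of a time step, else 0."""
    return [1 if keyword in q_t else 0 for q_t in topic_sets]

def span(keyword, topic_sets):
    """S_k(Q): length of the longest run of 1s in the keyword's trend."""
    longest = current = 0
    for bit in keyword_trend(keyword, topic_sets):
        current = current + 1 if bit else 0
        longest = max(longest, current)
    return longest

# illustrative example with three time steps
sets = [{"language model"}, {"word embedding"}, {"word embedding"}]
print(keyword_trend("word embedding", sets), span("word embedding", sets))  # [0, 1, 1] 2
```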
Higher SPAN suggests that the model is capable of capturing trending keywords. Table 6 shows the SPAN ($S_k$) for selected keywords; the SPAN $S_k$ for each keyword is computed from Figure 4. Observe that $\|\mathbf{Q}\|_{DTM} < \|\mathbf{Q}\|_{RNN\text{-}RSM}$, suggesting that new topics and words emerged over time in RNN-RSM, while higher SPAN values in RNN-RSM suggest better trends. Figure 6 shows how the word usage captured by DTM and RNN-RSM for the topic Word Vector changes over 19 years of NLP research. RNN-RSM captures the popular terms Word Embedding and Word Representation that emerged in it.
Discussion: RNN-RSM vs DTM
Architecture: RNN-RSM treats the document stream as high-dimensional sequences over time and models the complex conditional probability distribution, i.e. heteroscedasticity, in document collections and topics over time by a temporal stack of RSMs (undirected graphical model), conditioned on time-feedback connections using an RNN (Rumelhart et al., 1985). It has two hidden layers: h (stochastic binary) to capture topical information, and u (deterministic) to convey temporal information via BPTT that models the topic dependence at a time step t on all the previous steps τ < t. In contrast, DTM is built upon LDA (directed model), where the Dirichlet distribution on words is not amenable to sequential modeling; therefore its natural parameters (topic and topic proportion distributions) for each topic are chained, instead of latent topics, which results in intractable inference in topic detection and chaining.
Topic Dynamics: The introduction of explicit connections between latent topics in RNN-RSM allows new topics and new words for the underlying topics to appear or disappear over time through the dynamics of topic correlations. As discussed, the distinction between h and u permits the latent topic $h^{(t)}$ to capture new topics that may not be captured by $h^{(t-1)}$.
DTM assumes a fixed number of global topics and models their distribution over time. However, there is no such assumption in RNN-RSM. We fixed the topic count in RNN-RSM at each time step, since $W_{vh}$ is fixed over time and the RSM biases turn terms in each topic on or off. However, this is fundamentally different in DTM: e.g., a unique label is assigned to each of the 30 topics at any time steps t and t'. DTM follows the same sets of topic labels, $\{TopicLabels^{(t)}\}_{k=1}^{30} = \{TopicLabels^{(t')}\}_{k=1}^{30}$, due to eq (1) in Blei and Lafferty (2006) (discussed in section 5), which limits DTM in capturing new (or local) topics or words that appear over time. This corresponds to the keyword-trends (section 3.5).
Optimization: The RNN-RSM is based on Gibbs sampling and BPTT for inference while DTM employs complex variational methods, since applying Gibbs sampling is difficult due to the nonconjugacy of the Gaussian and multinomial distributions. Thus, easier learning in RNN-RSM.
For all models, approximations are solely used to compute the likelihood, either using variational approaches or contrastive divergence; perplexity was then computed based on the approximated likelihood. More specifically, we use variational approximations to compute the likelihood for DTM (Blei and Lafferty, 2006). For RSM and RNN-RSM, the respective likelihoods are approximated using the standard Contrastive Divergence (CD). While there are substantial differences between variational approaches and CD, and thus in the manner the likelihood for different models is estimated -both approximations work well for the respective family of models in terms of approximating the true likelihood. Consequently, perplexities computed based on these approximated likelihoods are indeed comparable.
Conclusion and Future Work
We have proposed a neural temporal topic model which we name as RNN-RSM, based on probabilistic undirected graphical topic model RSM with time-feedback connections via deterministic RNN, to capture temporal relationships in historical documents. The model is the first of its kind that learns topic dynamics in collections of different-sized documents over time, within the generative and neural network framework. The experimental results have demonstrated that RNN-RSM shows better generalization (perplexity and time stamp prediction), topic interpretation (coherence) and evolution (popularity and drift) in scientific articles over time. We also introduced SPAN to illustrate topic characterization.
In future work, we foresee investigating learning dynamics with a variable number of topics over time. It would also be an interesting direction to investigate the effect of the skewness in the distribution of papers over the years. Further, we see a potential application of the proposed model in learning time-aware, i.e. dynamic, word embeddings (Aitchison, 2001; Basile et al., 2014; Bamler and Mandt, 2017; Rudolph and Blei, 2018; Yao et al., 2018) in order to capture language evolution over time, instead of document topics.
Figure 1: (Left): Word usage over time for the topic Word Representation in scholarly articles. (Right): RSM-based dynamic topic model with explicit temporal topic dependence.
Figure 3: (a, b, c): Topic popularity by LDA, RSM, DTM and RNN-RSM over time; (d): Perplexity on the unobserved document collections over time; (e, f): Mean and median topic coherence; (g, h): Topic evolution; (i, j, k, l): Topic focus change over time. Adj: adjacent; Sim: similarity.
Figure 4: Keyword-trend by RNN-RSM, DTM, RSM, LDA. Bar: keyword presence in topics for the year.
Figure 5: Key-term frequency in the input over the years.
Figure 6: Word usage for the emerging topic Word Vector over time, captured by DTM and RNN-RSM.
Figure 2: (Left): RSM for a document $V_n$ of $D_n = 3$ words (w). The bottom layer represents the softmax visible units, which share the same set of weights connected to binary hidden units h. (Middle): Interpretation of RSM in which $D_n$ softmax units with identical weights are replaced by a single multinomial unit, sampled $D_n$ times. (Right): Graphical structure of the 2-layered RNN-RSM, unfolded in time. Single- and double-headed arrows represent deterministic and stochastic-symmetric connections, respectively. $\tilde{V}^{(t)}$ and $h^{(t)}$ are the binary visible and hidden layers of the RSM for a document collection at time t; u is the RNN hidden layer; k is the dictionary index for a word w.
Table 2: Hyperparameters for the RNN-RSM model.
Table 3: State-of-the-art comparison: generalization (PPL and Err), topic interpretation (COH) and evolution (TTD) in DTM and RNN-RSM models.
Table 4: Topics (top 15 words) with the highest and lowest drifts (cosine) observed in DTM and RNN-RSM.
Table 5: Topics with the highest and lowest coherence.
Table 6: SPAN ($S_k$) for selected terms, avg-SPAN and set $\|\mathbf{Q}\|$ by LDA, RSM, DTM and RNN-RSM.
Notations: $\mathbf{U} = \{U_n\}_{n=1}^{N}$; U: 2D matrix; l: vector; U/l: upper/lower case; scalars are unbold.
Acknowledgments
We thank Sujatha Das Gollapalli for providing us with the datasets used in the experiments. We express appreciation to our colleagues Florian Buettner, Mark Buckley, Stefan Langer, Ulli Waltinger and Usama Yaseen, and to the anonymous reviewers for their in-depth review comments. This research was supported by Bundeswirtschaftsministerium (bmwi.de), grant 01MD15010A (Smart Data Web) at Siemens AG-CT Machine Intelligence, Munich, Germany.
| [
"https://github.com/pgcool/RNN-RSM."
] |
[
"Analysis Methods in Neural Language Processing: A Survey",
"Analysis Methods in Neural Language Processing: A Survey"
] | [
"Yonatan Belinkov belinkov@mit.edu \nMIT Computer Science\nArtificial Intelligence Laboratory\n\n\nHarvard School of Engineering and Applied Sciences Cambridge\nMAUSA\n",
"James Glass glass@mit.edu \nMIT Computer Science\nArtificial Intelligence Laboratory\n\n"
] | [
"MIT Computer Science\nArtificial Intelligence Laboratory\n",
"Harvard School of Engineering and Applied Sciences Cambridge\nMAUSA",
"MIT Computer Science\nArtificial Intelligence Laboratory\n"
] | [] | The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their featurerich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work. | 10.1162/tacl_a_00254 | [
"https://www.aclweb.org/anthology/Q19-1004.pdf"
] | 56,657,817 | 1812.08951 | 43d1e85713762e04240d5eb4993c35fc8564b554 |
Analysis Methods in Neural Language Processing: A Survey
Yonatan Belinkov (belinkov@mit.edu)
MIT Computer Science and Artificial Intelligence Laboratory; Harvard School of Engineering and Applied Sciences, Cambridge, MA, USA

James Glass (glass@mit.edu)
MIT Computer Science and Artificial Intelligence Laboratory
The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work.
Introduction
The rise of deep learning has transformed the field of natural language processing (NLP) in recent years. Models based on neural networks have obtained impressive improvements in various tasks, including language modeling (Mikolov et al., 2010; Jozefowicz et al., 2016), syntactic parsing (Kiperwasser and Goldberg, 2016), machine translation (MT) (Bahdanau et al., 2014), and many other tasks; see Goldberg (2017) for example success stories. This progress has been accompanied by a myriad of new neural network architectures. In many cases, traditional feature-rich systems are being replaced by end-to-end neural networks that aim to map input text to some output prediction. As end-to-end systems are gaining prevalence, one may point to two trends. First, some push back against the abandonment of linguistic knowledge and call for incorporating it inside the networks in different ways. 1 Others strive to better understand how NLP models work. This theme of analyzing neural networks has connections to the broader work on interpretability in machine learning, along with specific characteristics of the NLP field.
Why should we analyze our neural NLP models? To some extent, this question falls into the larger question of interpretability in machine learning, which has been the subject of much debate in recent years. 2 Arguments in favor of interpretability in machine learning usually mention goals like accountability, trust, fairness, safety, and reliability (Doshi-Velez and Kim, 2017;Lipton, 2016). Arguments against interpretability typically stress performance as the most important desideratum. All these arguments naturally apply to machine learning applications in NLP.
In the context of NLP, this question needs to be understood in light of earlier NLP work, often referred to as feature-rich or feature-engineered systems. In some of these systems, features are more easily understood by humans-they can be morphological properties, lexical classes, syntactic categories, semantic relations, etc. In theory, one could observe the importance assigned by statistical NLP models to such features in order to gain a better understanding of the model. 3 In contrast, it is more difficult to understand what happens in an end-to-end neural network model that takes input (say, word embeddings) and generates an output (say, a sentence classification). Much of the analysis work thus aims to understand how linguistic concepts that were common as features in NLP systems are captured in neural networks.
As the analysis of neural networks for language is becoming more and more prevalent, neural networks in various NLP tasks are being analyzed; different network architectures and components are being compared, and a variety of new analysis methods are being developed. This survey aims to review and summarize this body of work, highlight current trends, and point to existing lacunae. It organizes the literature into several themes. Section 2 reviews work that targets a fundamental question: What kind of linguistic information is captured in neural networks? We also point to limitations in current methods for answering this question. Section 3 discusses visualization methods, and emphasizes the difficulty in evaluating visualization work. In Section 4, we discuss the compilation of challenge sets, or test suites, for fine-grained evaluation, a methodology that has old roots in NLP. Section 5 deals with the generation and use of adversarial examples to probe weaknesses of neural networks. We point to unique characteristics of dealing with text as a discrete input and how different studies handle them. Section 6 summarizes work on explaining model predictions, an important goal of interpretability research. This is a relatively underexplored area, and we call for more work in this direction. Section 7 mentions a few other methods that do not fall neatly into one of the above themes. In the conclusion, we summarize the main gaps and potential research directions for the field. The paper is accompanied by online supplementary materials that contain detailed references for studies corresponding to Sections 2, 4, and 5 (Tables SM1, SM2, and SM3, respectively), available at https://boknilev.github.io/nlp-analysis-methods.
Before proceeding, we briefly mention some earlier work of a similar spirit.
A Historical Note

Reviewing the vast literature on neural networks for language is beyond our scope. 4 However, we mention here a few representative studies that focused on analyzing such networks in order to illustrate how recent trends have roots that go back to before the recent deep learning revival. Rumelhart and McClelland (1986) built a feedforward neural network for learning the English past tense and analyzed its performance on a variety of examples and conditions. They were especially concerned with the performance over the course of training, as their goal was to model the past form acquisition in children. They also analyzed a scaled-down version having eight input units and eight output units, which allowed them to describe it exhaustively and examine how certain rules manifest in network weights.
In his seminal work on recurrent neural networks (RNNs), Elman trained networks on synthetic sentences in a language prediction task (Elman, 1989, 1990, 1991). Through extensive analyses, he showed how networks discover the notion of a word when predicting characters; capture syntactic structures like number agreement; and acquire word representations that reflect lexical and syntactic categories. Similar analyses were later applied to other networks and tasks (Harris, 1990; Niklasson and Linåker, 2000; Pollack, 1990; Frank et al., 2013).
While Elman's work was limited in some ways, such as evaluating generalization or various linguistic phenomena-as Elman himself recognized (Elman, 1989)-it introduced methods that are still relevant today: from visualizing network activations in time, through clustering words by hidden state activations, to projecting representations to dimensions that emerge as capturing properties like sentence number or verb valency. The sections on visualization (Section 3) and identifying linguistic information (Section 2) contain many examples for these kinds of analysis.
What Linguistic Information Is Captured in Neural Networks?
Neural network models in NLP are typically trained in an end-to-end manner on input-output pairs, without explicitly encoding linguistic features. Thus, a primary question is the following: What linguistic information is captured in neural networks? When examining answers to this question, it is convenient to consider three dimensions: which methods are used for conducting the analysis, what kind of linguistic information is sought, and which objects in the neural network are being investigated. Table SM1 (in the supplementary materials) categorizes relevant analysis work according to these criteria. In the next subsections, we discuss trends in analysis work along these lines, followed by a discussion of limitations of current approaches.
Methods
The most common approach for associating neural network components with linguistic properties is to predict such properties from activations of the neural network. Typically, in this approach a neural network model is trained on some task (say, MT) and its weights are frozen. Then, the trained model is used for generating feature representations for another task by running it on a corpus with linguistic annotations and recording the representations (say, hidden state activations). Another classifier is then used for predicting the property of interest (say, part-of-speech [POS] tags). The performance of this classifier is used for evaluating the quality of the generated representations, and by proxy that of the original model. This kind of approach has been used in numerous papers in recent years; see Table SM1 for references. 5 It is referred to by various names, including ''auxiliary prediction tasks'' (Adi et al., 2017b), ''diagnostic classifiers'' (Veldhoen et al., 2016), and ''probing tasks'' (Conneau et al., 2018).
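As a rough illustration of this recipe, the sketch below pairs representations from a frozen model with linguistic labels and trains a simple classifier to predict them. The frozen encoder, the random token ids, and the placeholder POS labels are all stand-ins; in practice the representations come from a trained NLP model run over an annotated corpus.

```python
# A minimal sketch of the probing ("auxiliary prediction task") setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a trained, frozen encoder: a fixed lookup table mapping
# token ids to vectors (a real study would run, e.g., a frozen NMT encoder).
embedding_table = rng.normal(size=(1000, 128))

def frozen_encoder(token_ids):
    return embedding_table[token_ids]

# Toy "annotated corpus": token ids paired with POS tag ids (placeholders).
token_ids = rng.integers(0, 1000, size=5000)
pos_tags = token_ids % 5

# 1) Run the frozen model and record its representations.
features = frozen_encoder(token_ids)

# 2) Train a simple classifier to predict the linguistic property.
X_train, X_test, y_train, y_test = train_test_split(
    features, pos_tags, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3) The probe's accuracy is read as a measure of how much POS information
#    the representations encode.
print("probing accuracy:", probe.score(X_test, y_test))
```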
As an example of this approach, let us walk through an application to analyzing syntax in neural machine translation (NMT) by Shi et al. (2016b). In this work, two NMT models were trained on standard parallel data-English→ French and English→German. The trained models (specifically, the encoders) were run on an annotated corpus and their hidden states were used for training a logistic regression classifier that predicts different syntactic properties. The authors concluded that the NMT encoders learn significant syntactic information at both word level and sentence level. They also compared representations at different encoding layers and found that ''local features are somehow preserved in the lower layer whereas more global, abstract information tends to be stored in the upper layer.'' These results demonstrate the kind of insights that the classification analysis may lead to, especially when comparing different models or model components.
Other methods for finding correspondences between parts of the neural network and certain properties include counting how often attention weights agree with a linguistic property like anaphora resolution (Voita et al., 2018) or directly computing correlations between neural network activations and some property; for example, correlating RNN state activations with depth in a syntactic tree (Qian et al., 2016a) or with Mel-frequency cepstral coefficient (MFCC) acoustic features (Wu and King, 2016). Such correspondence may also be computed indirectly. For instance, Alishahi et al. (2017) defined an ABX discrimination task to evaluate how a neural model of speech (grounded in vision) encoded phonology. Given phoneme representations from different layers in their model, and three phonemes, A, B, and X, they compared whether the model representation for X is closer to A or B. This discrimination task enabled them to draw conclusions about which layers encode phonology better, observing that lower layers generally encode more phonological information.
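The sketch below gives the flavor of such an ABX test: for triplets of representations it checks whether X lies closer to A (same category) than to B (different category). The random cluster vectors stand in for layer representations, and cosine similarity is one possible choice of metric; the cited work may use a different distance.

```python
# A minimal sketch of an ABX discrimination test over representations.
import numpy as np

rng = np.random.default_rng(1)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy representations: two phoneme categories as slightly separated clusters.
def sample(category, n, dim=32):
    center = np.zeros(dim)
    center[category] = 3.0
    return center + rng.normal(size=(n, dim))

cat_a, cat_b = sample(0, 100), sample(1, 100)

correct, trials = 0, 1000
for _ in range(trials):
    a = cat_a[rng.integers(100)]
    b = cat_b[rng.integers(100)]
    x = cat_a[rng.integers(100)]   # X is drawn from A's category
    # Discrimination is correct if X is closer to A than to B.
    correct += cosine(x, a) > cosine(x, b)

print("ABX accuracy:", correct / trials)
```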
Linguistic Phenomena
Different kinds of linguistic information have been analyzed, ranging from basic properties like sentence length, word position, word presence, or simple word order, to morphological, syntactic, and semantic information. Phonetic/phonemic information, speaker information, and style and accent information have been studied in neural network models for speech, or in joint audio-visual models. See Table SM1 for references.
While it is difficult to synthesize a holistic picture from this diverse body of work, it appears that neural networks are able to learn a substantial amount of information on various linguistic phenomena. These models are especially successful at capturing frequent properties, while some rare properties are more difficult to learn. Linzen et al. (2016), for instance, found that long short-term memory (LSTM) language models are able to capture subject-verb agreement in many common cases, while direct supervision is required for solving harder cases.
Another theme that emerges in several studies is the hierarchical nature of the learned representations. We have already mentioned such findings regarding NMT (Shi et al., 2016b) and a visually grounded speech model (Alishahi et al., 2017). Hierarchical representations of syntax were also reported to emerge in other RNN models (Blevins et al., 2018).
Finally, a couple of papers discovered that models trained with latent trees perform better on natural language inference (NLI) (Williams et al., 2018;Maillard and Clark, 2018) than ones trained with linguistically annotated trees. Moreover, the trees in these models do not resemble syntactic trees corresponding to known linguistic theories, which casts doubts on the importance of syntax-learning in the underlying neural network. 6
Neural Network Components
In terms of the object of study, various neural network components were investigated, including word embeddings, RNN hidden states or gate activations, sentence embeddings, and attention weights in sequence-to-sequence (seq2seq) models. Generally less work has analyzed convolutional neural networks in NLP, but see Jacovi et al. (2018) for a recent exception. In speech processing, researchers have analyzed layers in deep neural networks for speech recognition and different speaker embeddings. Some analysis has also been devoted to joint language-vision or audio-vision models, or to similarities between word embeddings and convolutional image representations. Table SM1 provides detailed references.
Limitations
The classification approach may find that a certain amount of linguistic information is captured in the neural network. However, this does not necessarily mean that the information is used by the network. For example, Vanmassenhove et al. (2017) investigated aspect in NMT (and in phrase-based statistical MT). They trained a classifier on NMT sentence encoding vectors and found that they can accurately predict tense about 90% of the time. However, when evaluating the output translations, they found them to have the correct tense only 79% of the time. They interpreted this result to mean that ''part of the aspectual information is lost during decoding.'' Relatedly, Cífka and Bojar (2018) compared the performance of various NMT models in terms of translation quality (BLEU) and representation quality (classification tasks). They found a negative correlation between the two, suggesting that high-quality systems may not be learning certain sentence meanings. In contrast, Artetxe et al. (2018) showed that word embeddings contain divergent linguistic information, which can be uncovered by applying a linear transformation on the learned embeddings. Their results suggest an alternative explanation, showing that ''embedding models are able to encode divergent linguistic information but have limits on how this information is surfaced.'' From a methodological point of view, most of the relevant analysis work is concerned with correlation: How correlated are neural network components with linguistic properties? What may be lacking is a measure of causation: How does the encoding of linguistic properties affect the system output? Giulianelli et al. (2018) make some headway on this question. They predicted number agreement from RNN hidden states and gates at different time steps. They then intervened in how the model processes the sentence by changing a hidden activation based on the difference between the prediction and the correct label. This improved agreement prediction accuracy, and the effect persisted over the course of the sentence, indicating that this information has an effect on the model. However, they did not report the effect on overall model quality, for example by measuring perplexity. Methods from causal inference may shed new light on some of these questions.
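To illustrate the flavor of such an intervention, here is a minimal PyTorch sketch that modifies a recurrent hidden state mid-sequence and lets processing continue. The tiny untrained model, the intervention step, and the "direction" vector are all toy assumptions; Giulianelli et al. (2018) instead derive the correction from a trained diagnostic classifier's prediction error.

```python
# A minimal sketch of intervening on a hidden state mid-sequence (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(50, 16)
cell = nn.LSTMCell(16, 16)
readout = nn.Linear(16, 50)

tokens = torch.randint(0, 50, (8,))
h = torch.zeros(1, 16)
c = torch.zeros(1, 16)
intervene_at = 3
direction = torch.randn(16)        # hypothetical correction vector

for t, tok in enumerate(tokens):
    h, c = cell(emb(tok).unsqueeze(0), (h, c))
    if t == intervene_at:
        h = h + 0.5 * direction    # nudge the state, then continue processing
    logits = readout(h)

print("next-token logits shape:", logits.shape)
```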
Finally, the predictor for the auxiliary task is usually a simple classifier, such as logistic regression. A few studies compared different classifiers and found that deeper classifiers lead to overall better results, but do not alter the respective trends when comparing different models or components (Qian et al., 2016b; Belinkov, 2018). Interestingly, Conneau et al. (2018) found that tasks requiring more nuanced linguistic knowledge (e.g., tree depth, coordination inversion) gain the most from using a deeper classifier. However, the approach is usually taken for granted; given its prevalence, it appears that better theoretical or empirical foundations are in order.
Visualization
Visualization is a valuable tool for analyzing neural networks in the language domain and beyond. Early work visualized hidden unit activations in RNNs trained on an artificial language modeling task, and observed how they correspond to certain grammatical relations such as agreement (Elman, 1991). Much recent work has focused on visualizing activations on specific examples in modern neural networks for language (Karpathy et al., 2015;Kádár et al., 2017;Qian et al., 2016a;Liu et al., 2018) and speech (Wu and King, 2016;Nagamine et al., 2015;Wang et al., 2017b). Figure 1 shows an example visualization of a neuron that captures position of words in a sentence. The heatmap uses blue and red colors for negative and positive activation values, respectively, enabling the user to quickly grasp the function of this neuron.
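The kind of per-token heatmap described for Figure 1 is straightforward to produce; the sketch below colors tokens by a single neuron's activation on a blue-to-red diverging colormap. The sentence and the activation values are placeholders for the hidden-unit values one would extract from a real model.

```python
# A minimal sketch of a neuron-activation heatmap over tokens
# (blue = negative, red = positive), with made-up activation values.
import matplotlib.pyplot as plt
import numpy as np

tokens = "the quick brown fox jumps over the lazy dog".split()
activations = np.linspace(-1, 1, len(tokens))  # stand-in for one neuron

fig, ax = plt.subplots(figsize=(len(tokens), 1))
ax.imshow(activations[None, :], cmap="bwr", vmin=-1, vmax=1, aspect="auto")
ax.set_xticks(range(len(tokens)))
ax.set_xticklabels(tokens)
ax.set_yticks([])
plt.tight_layout()
plt.savefig("neuron_heatmap.png")
```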
The attention mechanism that originated in work on NMT (Bahdanau et al., 2014) also lends itself to a natural visualization. The alignments obtained via different attention mechanisms have produced visualizations ranging from tasks like NLI (Rocktäschel et al., 2016; Yin et al., 2016), summarization (Rush et al., 2015), MT post-editing (Jauregi Unanue et al., 2018), and morphological inflection (Aharoni and Goldberg, 2017) to matching users on social media (Tay et al., 2018). Figure 2 reproduces a visualization of attention alignments from the original work by Bahdanau et al. Here grayscale values correspond to the weight of the attention between words in an English source sentence (columns) and its French translation (rows). As Bahdanau et al. explain, this visualization demonstrates that the NMT model learned a soft alignment between source and target words. Some aspects of word order may also be noticed in the alignment. A related line of work computes saliency scores that measure how important each input feature is for a prediction (see, e.g., Godin et al., 2018). Saliency can also be computed with respect to intermediate values, rather than input features (Ghaeini et al., 2018). 7 An instructive visualization technique is to cluster neural network activations and compare them to some linguistic property. Early work clustered RNN activations, showing that they organize in lexical categories (Elman, 1989, 1990). Similar techniques have been followed by others. Recent examples include clustering of sentence embeddings in an RNN encoder trained in a multitask learning scenario (Brunner et al., 2017), and phoneme clusters in a joint audio-visual RNN model (Alishahi et al., 2017).
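As a rough sketch of the gradient-based saliency scores mentioned above, the snippet below scores each token by the norm of the gradient of the predicted class score with respect to that token's embedding. The toy embedding layer and linear classifier are untrained stand-ins for an actual model.

```python
# A minimal sketch of gradient-based token saliency (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(100, 32)
clf = nn.Linear(32, 2)

tokens = torch.tensor([5, 17, 42, 7])
# Treat the token embeddings as the "inputs" we differentiate with respect to.
vectors = emb(tokens).detach().requires_grad_(True)

# Score of the predicted class for a mean-pooled sentence representation.
logits = clf(vectors.mean(dim=0))
logits[logits.argmax()].backward()

# Saliency of each token = L2 norm of the gradient w.r.t. its embedding.
saliency = vectors.grad.norm(dim=1)
for tok, s in zip(tokens.tolist(), saliency.tolist()):
    print("token", tok, "saliency", round(s, 4))
```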
A few online tools for visualizing neural networks have recently become available. LSTMVis (Strobelt et al., 2018b) visualizes RNN activations, focusing on tracing hidden state dynamics. 8 Seq2Seq-Vis (Strobelt et al., 2018a) visualizes different modules in attention-based seq2seq models, with the goal of examining model decisions and testing alternative decisions. Another tool focused on comparing attention alignments was proposed by Rikters (2018). It also provides translation confidence scores based on the distribution of attention weights. NeuroX (Dalvi et al., 2019b) is a tool for finding and analyzing individual neurons, focusing on machine translation.
Evaluation

As in much work on interpretability, evaluating visualization quality is difficult and often limited to qualitative examples. A few notable exceptions report human evaluations of visualization quality. Singh et al. (2018) showed human raters hierarchical clusterings of input words generated by two interpretation methods, and asked them to evaluate which method is more accurate, or in which method they trust more. Others reported human evaluations for attention visualization in conversation modeling (Freeman et al., 2018) and medical code prediction tasks (Mullenbach et al., 2018).
The availability of open-source tools of the sort described above will hopefully encourage users to utilize visualization in their regular research and development cycle. However, it remains to be seen how useful visualizations turn out to be.
Challenge Sets
The majority of benchmark datasets in NLP are drawn from text corpora, reflecting a natural frequency distribution of language phenomena. While useful in practice for evaluating system performance in the average case, such datasets may fail to capture a wide range of phenomena. An alternative evaluation framework consists of challenge sets, also known as test suites, which have been used in NLP for a long time (Lehmann et al., 1996), especially for evaluating MT systems (King and Falkedal, 1990; Isahara, 1995; Koh et al., 2001). Lehmann et al. (1996) noted several key properties of test suites: systematicity, control over data, inclusion of negative data, and exhaustivity. They contrasted such datasets with test corpora, ''whose main advantage is that they reflect naturally occurring data.'' This idea underlies much of the work on challenge sets and is echoed in more recent work. For instance, Cooper et al. (1996) constructed a semantic test suite that targets phenomena as diverse as quantifiers, plurals, anaphora, ellipsis, adjectival properties, and so on.
After a hiatus of a couple of decades, 9 challenge sets have recently gained renewed popularity in the NLP community. In this section, we include datasets used for evaluating neural network models that diverge from the common average-case evaluation. Many of them share some of the properties noted by Lehmann et al. (1996), although negative examples (ill-formed data) are typically less utilized. The challenge datasets can be categorized along the following criteria: the task they seek to evaluate, the linguistic phenomena they aim to study, the language(s) they target, their size, their method of construction, and how performance is evaluated. 10 Table SM2 (in the supplementary materials) categorizes many recent challenge sets along these criteria. Below we discuss common trends along these lines.

Footnote 9: One could speculate that their decrease in popularity can be attributed to the rise of large-scale quantitative evaluation of statistical NLP systems.

Footnote 10: Another typology of evaluation protocols was put forth by Burlot and Yvon (2017). Their criteria are partially overlapping with ours, although they did not provide a comprehensive categorization like the one compiled here.
Task
By far, the most targeted tasks in challenge sets are NLI and MT. This can partly be explained by the popularity of these tasks and the prevalence of neural models proposed for solving them. Perhaps more importantly, tasks like NLI and MT arguably require inferences at various linguistic levels, making the challenge set evaluation especially attractive. Still, other high-level tasks like reading comprehension or question answering have not received as much attention, and may also benefit from the careful construction of challenge sets.
A significant body of work aims to evaluate the quality of embedding models by correlating the similarity they induce on word or sentence pairs with human similarity judgments. Datasets containing such similarity scores are often used to evaluate word embeddings (Finkelstein et al., 2002; Bruni et al., 2012; Hill et al., 2015, inter alia) or sentence embeddings; see the many shared tasks on semantic textual similarity in SemEval (Cer et al., 2017, and previous editions). Many of these datasets evaluate similarity at a coarse-grained level, but some provide a more fine-grained evaluation of similarity or relatedness. For example, some datasets are dedicated for specific word classes such as verbs (Gerz et al., 2016) or rare words (Luong et al., 2013), or for evaluating compositional knowledge in sentence embeddings (Marelli et al., 2014). Multilingual and cross-lingual versions have also been collected (Leviant and Reichart, 2015; Cer et al., 2017). Although these datasets are widely used, this kind of evaluation has been criticized for its subjectivity and questionable correlation with downstream performance (Faruqui et al., 2016).
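To make this evaluation protocol concrete, the sketch below computes the Spearman correlation between cosine similarities of word vectors and human similarity scores. The vectors (randomly initialized, hence meaningless) and the word-pair scores are placeholders; real evaluations use trained embeddings and datasets such as those cited above.

```python
# A minimal sketch of intrinsic evaluation against human similarity judgments.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "car", "truck", "apple", "banana"]
vectors = {w: rng.normal(size=50) for w in vocab}   # placeholder embeddings

# (word1, word2, human similarity score), in the style of WordSim-353/SimLex-999.
gold = [("cat", "dog", 8.0), ("car", "truck", 7.5),
        ("apple", "banana", 7.0), ("cat", "car", 2.0), ("dog", "banana", 1.5)]

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

model_scores = [cos(vectors[a], vectors[b]) for a, b, _ in gold]
human_scores = [s for _, _, s in gold]
rho, _ = spearmanr(model_scores, human_scores)
print("Spearman correlation:", rho)
```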
Linguistic Phenomena
One of the primary goals of challenge sets is to evaluate models on their ability to handle specific linguistic phenomena. While earlier studies emphasized exhaustivity (Cooper et al., 1996;Lehmann et al., 1996), recent ones tend to focus on a few properties of interest. For example, Sennrich (2017) introduced a challenge set for MT evaluation focusing on five properties: subject-verb agreement, noun phrase agreement, verb-particle constructions, polarity, and transliteration. Slightly more elaborated is an MT challenge set for morphology, including 14 morphological properties (Burlot and Yvon, 2017). See Table SM2 for references to datasets targeting other phenomena.
Other challenge sets cover a more diverse range of linguistic properties, in the spirit of some of the earlier work. For instance, extending the categories in Cooper et al. (1996), the GLUE analysis set for NLI covers more than 30 phenomena in four coarse categories (lexical semantics, predicate-argument structure, logic, and knowledge). In MT evaluation, Burchardt et al. (2017) reported results using a large test suite covering 120 phenomena, partly based on Lehmann et al. (1996). 11 Isabelle et al. (2017) and Isabelle and Kuhn (2018) prepared challenge sets for MT evaluation covering fine-grained phenomena at morpho-syntactic, syntactic, and lexical levels.
Generally, datasets that are constructed programmatically tend to cover less fine-grained linguistic properties, while manually constructed datasets represent more diverse phenomena.
Languages
As unfortunately usual in much NLP work, especially neural NLP, the vast majority of challenge sets are in English. This situation is slightly better in MT evaluation, where naturally all datasets feature other languages (see Table SM2). A notable exception is the work by Gulordava et al. (2018), who constructed examples for evaluating number agreement in language modeling in English, Russian, Hebrew, and Italian. Clearly, there is room for more challenge sets in non-English languages. However, perhaps more pressing is the need for large-scale non-English datasets (besides MT) to develop neural models for popular NLP tasks.
Scale
The size of proposed challenge sets varies greatly (Table SM2). As expected, datasets constructed by hand are smaller, with typical sizes in the hundreds. Automatically built datasets are much larger, ranging from several thousands to close to a hundred thousand (Sennrich, 2017), or even more than one million examples (Linzen et al., 2016). In the latter case, the authors argue that such a large test set is needed for obtaining a sufficient representation of rare cases. A few manually constructed datasets contain a fairly large number of examples, up to 10 thousand (Burchardt et al., 2017).
Construction Method
Challenge sets are usually created either programmatically or manually, by handcrafting specific examples. Often, semi-automatic methods are used to compile an initial list of examples that is manually verified by annotators. The specific method also affects the kind of language use and how natural or artificial/synthetic the examples are. We describe here some trends in dataset construction methods in the hope that they may be useful for researchers contemplating new datasets. Several datasets were constructed by modifying or extracting examples from existing datasets. For instance, Sanchez et al. (2018) and Glockner et al. (2018) extracted examples from SNLI (Bowman et al., 2015) and replaced specific words such as hypernyms, synonyms, and antonyms, followed by manual verification. Linzen et al. (2016), on the other hand, extracted examples of subject-verb agreement from raw texts using heuristics, resulting in a large-scale dataset. Gulordava et al. (2018) extended this to other agreement phenomena, but they relied on syntactic information available in treebanks, resulting in a smaller dataset.
Several challenge sets utilize existing test suites, either as a direct source of examples (Burchardt et al., 2017) or for searching similar naturally occurring examples. 12 Sennrich (2017) introduced a method for evaluating NMT systems via contrastive translation pairs, where the system is asked to estimate the probability of two candidate translations that are designed to reflect specific linguistic properties. Sennrich generated such pairs programmatically by applying simple heuristics, such as changing gender and number to induce agreement errors, resulting in a large-scale challenge set of close to 100 thousand examples. This framework was extended to evaluate other properties, but often requiring more sophisticated generation methods like using morphological analyzers/generators (Burlot and Yvon, 2017) or more manual involvement in generation (Bawden et al., 2018) or verification (Rios Gonzales et al., 2017).
Finally, a few studies define templates that capture certain linguistic properties and instantiate them with word lists (Dasgupta et al., 2018;Rudinger et al., 2018;Zhao et al., 2018a). Template-based generation has the advantage of providing more control, for example for obtaining a specific vocabulary distribution, but this comes at the expense of how natural the examples are.
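As a toy illustration of template-based construction (here combined with the contrastive-pair idea mentioned earlier in this section), the sketch below instantiates a subject-verb agreement template with small word lists and emits a correct and an incorrect variant of each sentence. The template and the lists are invented for illustration and do not come from any of the cited datasets.

```python
# A minimal sketch of template-based challenge set generation.
from itertools import product

template = "The {noun} near the {distractor} {verb} tall."
nouns = [("boy", "is"), ("boys", "are")]     # (subject, agreeing verb form)
distractors = ["tree", "trees", "cars"]      # attractors between subject and verb

examples = []
for (noun, verb), distractor in product(nouns, distractors):
    correct = template.format(noun=noun, distractor=distractor, verb=verb)
    wrong_verb = "are" if verb == "is" else "is"
    incorrect = template.format(noun=noun, distractor=distractor, verb=wrong_verb)
    examples.append({"correct": correct, "incorrect": incorrect})

for ex in examples[:3]:
    print(ex)
```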
Evaluation
Systems are typically evaluated by their performance on the challenge set examples, either with the same metric used for evaluating the system in the first place, or via a proxy, as in the contrastive pairs evaluation of Sennrich (2017). Automatic evaluation metrics are cheap to obtain and can be calculated on a large scale. However, they may miss certain aspects. Thus a few studies report human evaluation on their challenge sets, such as in MT (Isabelle et al., 2017;Burchardt et al., 2017).
We note here also that judging the quality of a model by its performance on a challenge set can be tricky. Some authors emphasize their wish to test systems on extreme or difficult cases, ''beyond normal operational capacity'' (Naik et al., 2018). However, whether one should expect systems to perform well on specially chosen cases (as opposed to the average case) may depend on one's goals. To put results in perspective, one may compare model performance to human performance on the same task (Gulordava et al., 2018).
Adversarial Examples
Understanding a model also requires an understanding of its failures. Despite their success in many tasks, machine learning systems can also be very sensitive to malicious attacks or adversarial examples (Szegedy et al., 2014;Goodfellow et al., 2015). In the vision domain, small changes to the input image can lead to misclassification, even if such changes are indistinguishable by humans.
The basic setup in work on adversarial examples can be described as follows. 13 Given a neural network model f and an input example x, we seek to generate an adversarial example x′ that will have a minimal distance from x, while being assigned a different label by f:

min_{x′} ||x − x′|| s.t. f(x) = l, f(x′) = l′, l ≠ l′
In the vision domain, x can be the input image pixels, resulting in a fairly intuitive interpretation of this optimization problem: measuring the distance ||x − x′|| is straightforward, and finding x′ can be done by computing gradients with respect to the input, since all quantities are continuous.
In the text domain, the input is discrete (for example, a sequence of words), which poses two problems. First, it is not clear how to measure the distance between the original and adversarial examples, x and x′, which are two discrete objects (say, two words or sentences). Second, minimizing this distance cannot be easily formulated as an optimization problem, as this requires computing gradients with respect to a discrete input.
In the following, we review methods for handling these difficulties according to several criteria: the adversary's knowledge, the specificity of the attack, the linguistic unit being modified, and the task on which the attacked model was trained. 14 Table SM3 (in the supplementary materials) categorizes work on adversarial examples in NLP according to these criteria.

Footnote 14: These criteria are partly taken from Yuan et al. (2017), where a more elaborate taxonomy is laid out. At present, though, the work on adversarial examples in NLP is more limited than in computer vision, so our criteria will suffice.
Adversary's Knowledge
Adversarial examples can be generated using access to model parameters, also known as white-box attacks, or without such access, with black-box attacks (Papernot et al., 2016a, 2017; Narodytska and Kasiviswanathan, 2017; Liu et al., 2017).
White-box attacks are difficult to adapt to the text world as they typically require computing gradients with respect to the input, which would be discrete in the text case. One option is to compute gradients with respect to the input word embeddings, and perturb the embeddings. Since this may result in a vector that does not correspond to any word, one could search for the closest word embedding in a given dictionary (Papernot et al., 2016b); Cheng et al. (2018) extended this idea to seq2seq models. Others computed gradients with respect to input word embeddings to identify and rank words to be modified (Samanta and Mehta, 2017;Liang et al., 2018). Ebrahimi et al. (2018b) developed an alternative method by representing text edit operations in vector space (e.g., a binary vector specifying which characters in a word would be changed) and approximating the change in loss with the derivative along this vector.
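A minimal sketch of that idea: take a gradient step on the loss with respect to a word's embedding and project the perturbed vector back to the nearest vocabulary item. The untrained toy classifier, the step size, and the single-token setup are illustrative assumptions; methods such as Papernot et al. (2016b) operate on full sequences with trained models.

```python
# A minimal sketch of a white-box embedding-space perturbation followed by
# projection to the nearest real word (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, dim = 200, 32
emb = nn.Embedding(vocab_size, dim)
clf = nn.Linear(dim, 2)

token = torch.tensor(7)
label = torch.tensor(0)

# Treat the token's embedding as the input we differentiate with respect to.
x = emb(token).detach().requires_grad_(True)
loss = nn.functional.cross_entropy(clf(x).unsqueeze(0), label.unsqueeze(0))
loss.backward()

# Move the embedding in the direction that increases the loss...
x_adv = x + 0.5 * x.grad.sign()
# ...then search the vocabulary for the closest real word embedding.
dists = torch.cdist(x_adv.unsqueeze(0), emb.weight.detach()).squeeze(0)
adversarial_token = int(dists.argmin())
print("replace token", int(token), "with token", adversarial_token)
```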
Given the difficulty in generating white-box adversarial examples for text, much research has been devoted to black-box examples. Often, the adversarial examples are inspired by text edits that are thought to be natural or commonly generated by humans, such as typos, misspellings, and so on (Sakaguchi et al., 2017; Heigold et al., 2018; Belinkov and Bisk, 2018). Gao et al. (2018) defined scoring functions to identify tokens to modify. Their functions do not require access to model internals, but they do require the model prediction score. After identifying the important tokens, they modify characters with common edit operations. Zhao et al. (2018c) used generative adversarial networks (GANs) to minimize the distance between latent representations of input and adversarial examples, and performed perturbations in latent space. Since the latent representations do not need to come from the attacked model, this is a black-box attack.
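The following sketch shows the kind of character-level edit operations (swap, delete, insert) such black-box attacks apply. The target word index and the candidate set here are chosen arbitrarily, whereas a real attack would use a scoring function over the model's prediction scores to pick and rank them.

```python
# A minimal sketch of black-box character edits on a chosen word.
import random

random.seed(0)

def perturb(word):
    i = random.randrange(len(word) - 1) if len(word) > 1 else 0
    op = random.choice(["swap", "delete", "insert"])
    if op == "swap" and len(word) > 1:
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "delete" and len(word) > 1:
        return word[:i] + word[i + 1:]
    return word[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]

sentence = "the service was excellent".split()
target_index = 3                      # in practice, picked by a scoring function
candidates = {perturb(sentence[target_index]) for _ in range(5)}
print(candidates)
```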
Finally, Alzantot et al. (2018) developed an interesting population-based genetic algorithm for crafting adversarial examples for text classification by maintaining a population of modifications of the original sentence and evaluating fitness of modifications at each generation. They do not require access to model parameters, but do use prediction scores. A similar idea was proposed by Kuleshov et al. (2018).
Attack Specificity
Adversarial attacks can be classified into targeted vs. non-targeted attacks (Yuan et al., 2017). A targeted attack specifies a specific false class, l′, while a non-targeted attack cares only that the predicted class is wrong, l′ ≠ l. Targeted attacks are more difficult to generate, as they typically require knowledge of model parameters; that is, they are white-box attacks. This might explain why the majority of adversarial examples in NLP are non-targeted (see Table SM3). A few targeted attacks include Liang et al. (2018), which specified a desired class to fool a text classifier, and Chen et al. (2018a), which specified words or captions to generate in an image captioning model. Others targeted specific words to omit, replace, or include when attacking seq2seq models (Cheng et al., 2018; Ebrahimi et al., 2018a).
Methods for generating targeted attacks in NLP could possibly take more inspiration from adversarial attacks in other fields. For instance, in attacking malware detection systems, several studies developed targeted attacks in a blackbox scenario (Yuan et al., 2017). A black-box targeted attack for MT was proposed by Zhao et al. (2018c), who used GANs to search for attacks on Google's MT system after mapping sentences into continuous space with adversarially regularized autoencoders (Zhao et al., 2018b).
Linguistic Unit
Most of the work on adversarial text examples involves modifications at the character- and/or word-level; see Table SM3 for specific references. Other transformations include adding sentences or text chunks (Jia and Liang, 2017) or generating paraphrases with desired syntactic structures (Iyyer et al., 2018). In image captioning, Chen et al. (2018a) modified pixels in the input image to generate targeted attacks on the caption text.
Task
Generally, most work on adversarial examples in NLP concentrates on relatively high-level language understanding tasks, such as text classification (including sentiment analysis) and reading comprehension, while work on text generation focuses mainly on MT; see Table SM3 for references. There is relatively little work on adversarial examples for lower-level language processing tasks, although one can mention morphological tagging (Heigold et al., 2018) and spelling correction (Sakaguchi et al., 2017).
Coherence and Perturbation Measurement
In adversarial image examples, it is fairly straightforward to measure the perturbation, either by measuring distance in pixel space, say ||x − x'|| under some norm, or with alternative measures that are better correlated with human perception (Rozsa et al., 2016). It is also visually compelling to present an adversarial image with an imperceptible difference from its source image.
In the text domain, measuring distance is not as straightforward, and even small changes to the text may be perceptible to humans. Thus, evaluating attacks is fairly tricky. Some studies imposed constraints on adversarial examples to have a small number of edit operations (Gao et al., 2018). Others ensured syntactic or semantic coherence in different ways, such as filtering replacements by word similarity or sentence similarity (Alzantot et al., 2018; Kuleshov et al., 2018), or by using synonyms and other word lists (Samanta and Mehta, 2017; Yang et al., 2018). Some reported whether a human can classify the adversarial example correctly (Yang et al., 2018), but this does not indicate how perceptible the changes are. More informative human studies evaluate grammaticality or similarity of the adversarial examples to the original ones (Zhao et al., 2018c; Alzantot et al., 2018). Given the inherent difficulty in generating imperceptible changes in text, more such evaluations are needed.
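As a concrete example of such measurements, the sketch below reports the number of edited tokens and flags substitutions that fall below a word-similarity threshold; the embeddings and the threshold value are arbitrary placeholders, not a standard metric.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def perturbation_report(original, adversarial, emb, sim_threshold=0.6):
    """Simple perturbation measures for a word-substitution attack (both token
    lists assumed to have equal length): number of edits, edit rate, and how
    many substitutions violate a word-similarity constraint."""
    edits, violations = 0, 0
    for o, a in zip(original, adversarial):
        if o != a:
            edits += 1
            if cosine(emb[o], emb[a]) < sim_threshold:
                violations += 1
    return {"edits": edits,
            "edit_rate": edits / len(original),
            "similarity_violations": violations}

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=10) for w in ["the", "film", "movie", "was", "great", "poor"]}
print(perturbation_report(["the", "film", "was", "great"],
                          ["the", "movie", "was", "poor"], emb))
```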
Explaining Predictions
Explaining specific predictions is recognized as a desideratum in interpretability work (Lipton, 2016), and has been argued to increase the accountability of machine learning systems (Doshi-Velez et al., 2017). However, explaining why a deep, highly non-linear neural network makes a certain prediction is not trivial. One solution is to ask the model to generate explanations along with its primary prediction (Zaidan et al., 2007; Zhang et al., 2016), 15 but this approach requires manual annotations of explanations, which may be hard to collect.
An alternative approach is to use parts of the input as explanations. For example, Lei et al. (2016) defined a generator that learns a distribution over text fragments as candidate rationales for justifying predictions, evaluated on sentiment analysis. Alvarez-Melis and Jaakkola (2017) discovered input-output associations in a sequence-to-sequence learning scenario, by perturbing the input and finding the most relevant associations. Gupta and Schütze (2018) inspected how information is accumulated in RNNs towards a prediction, and associated peaks in prediction scores with important input segments. As these methods use input segments to explain predictions, they do not shed much light on the internal computations that take place in the network.
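A crude version of such perturbation-based input-output association can be sketched as follows; the `translate` function is a dictionary-lookup stand-in for a real black-box seq2seq model, and masking with a placeholder token is only one of several possible perturbations.

```python
def io_associations(tokens, translate, mask="<unk>"):
    """For each input token, mask it and record which output positions change.
    `translate` is any black-box function from a token list to an output list."""
    base = translate(tokens)
    assoc = {}
    for i, tok in enumerate(tokens):
        out = translate(tokens[:i] + [mask] + tokens[i + 1:])
        assoc[tok] = [j for j, (o, p) in enumerate(zip(base, out)) if o != p]
    return assoc

# Stand-in word-by-word "translator", purely illustrative.
lexicon = {"the": "le", "red": "rouge", "cat": "chat"}
translate = lambda toks: [lexicon.get(t, t) for t in toks]
print(io_associations(["the", "red", "cat"], translate))
# {'the': [0], 'red': [1], 'cat': [2]}
```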
At present, despite the recognized importance of interpretability, our ability to explain the predictions of neural networks in NLP is still limited.
Other Methods
We briefly mention here several analysis methods that do not fall neatly into the previous sections.
A number of studies evaluated the effect of erasing or masking certain neural network components, such as word embedding dimensions, hidden units, or even full words (Li et al., 2016b; Feng et al., 2018; Khandelwal et al., 2018; Bau et al., 2018). For example, Li et al. (2016b) erased specific dimensions in word embeddings or hidden states and computed the change in probability assigned to different labels. Their experiments revealed interesting differences between word embedding models, where in some models information is more focused in individual dimensions. They also found that information is more distributed in hidden layers than in the input layer, and erased entire words to find important words in a sentiment analysis task.
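A minimal sketch of dimension erasure along these lines, using a made-up linear model over averaged embeddings rather than any of the cited systems, looks as follows:

```python
import numpy as np

def dimension_importance(vectors, predict_probs, label):
    """Zero out each embedding dimension across all input words and measure the
    change in the probability assigned to `label`."""
    base = predict_probs(vectors)[label]
    deltas = np.empty(vectors.shape[1])
    for d in range(vectors.shape[1]):
        erased = vectors.copy()
        erased[:, d] = 0.0                       # erase dimension d everywhere
        deltas[d] = base - predict_probs(erased)[label]
    return deltas

# Hypothetical linear model over the averaged word embeddings.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 6))
def predict_probs(vectors):
    logits = W @ vectors.mean(axis=0)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

sentence_vectors = rng.normal(size=(4, 6))       # 4 words, 6 dimensions
print(dimension_importance(sentence_vectors, predict_probs, label=0).round(3))
```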
Several studies conducted behavioral experiments to interpret word embeddings by defining intrusion tasks, where humans need to identify an intruder word, chosen based on difference in word embedding dimensions (Murphy et al., 2012; Fyshe et al., 2015; Faruqui et al., 2015). 16 In this kind of work, a word embedding model may be deemed more interpretable if humans are better able to identify the intruding words. Since the evaluation is costly for high-dimensional representations, alternative automatic metrics were considered (Park et al., 2017; Senel et al., 2018).
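A sketch of how one such intrusion question could be constructed is shown below; this is one simple variant, and published setups typically also require the intruder to rank highly on some other dimension, which is omitted here.

```python
import numpy as np

def intrusion_question(emb_matrix, vocab, dim, k=4, seed=0):
    """Build one word-intrusion question for embedding dimension `dim`: the k
    words ranked highest on that dimension, plus an 'intruder' drawn from the
    words ranked lowest on it."""
    rng = np.random.default_rng(seed)
    order = np.argsort(-emb_matrix[:, dim])          # high-to-low on `dim`
    top_words = [vocab[i] for i in order[:k]]
    intruder = vocab[order[-1]]                      # lowest-ranked word
    question = top_words + [intruder]
    rng.shuffle(question)
    return question, intruder

vocab = ["cat", "dog", "horse", "fish", "run", "jump", "walk", "swim"]
emb_matrix = np.random.default_rng(1).normal(size=(len(vocab), 5))
question, intruder = intrusion_question(emb_matrix, vocab, dim=0)
print(question, "->", intruder)
# Humans are then asked to spot the intruder; higher accuracy suggests the
# dimension is more interpretable.
```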
A long tradition in work on neural networks is to evaluate and analyze their ability to learn different formal languages (Das et al., 1992; Casey, 1996; Gers and Schmidhuber, 2001; Bodén and Wiles, 2002; Chalup and Blair, 2003). This trend continues today, with research into modern architectures and what formal languages they can learn (Weiss et al., 2018; Bernardy, 2018; Suzgun et al., 2019), or the formal properties they possess (Chen et al., 2018b).
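For illustration, a tiny test-suite generator for the classic context-free language a^n b^n might look as follows; in practice the recognizer would wrap a trained RNN, and the hand-written oracle below only exercises the harness.

```python
import random

def anbn_suite(n_max=6, seed=0):
    """Positive strings of the language a^n b^n plus close negative variants
    (off-by-one counts), shuffled into a small test suite."""
    random.seed(seed)
    suite = []
    for n in range(1, n_max + 1):
        suite.append(("a" * n + "b" * n, True))
        suite.append(("a" * n + "b" * (n + 1), False))
        suite.append(("a" * (n + 1) + "b" * n, False))
    random.shuffle(suite)
    return suite

def accuracy(recognizer, suite):
    """`recognizer` maps a string to True/False, e.g. a thresholded RNN output."""
    return sum(recognizer(s) == label for s, label in suite) / len(suite)

# Hand-written oracle, used here only to check that the harness runs.
def oracle(s):
    return s == "a" * s.count("a") + "b" * s.count("b") and s.count("a") == s.count("b")

print(accuracy(oracle, anbn_suite()))   # 1.0
```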
Conclusion
Analyzing neural networks has become a hot topic in NLP research. This survey attempted to review and summarize as much of the current research as possible, while organizing it along several prominent themes. We have emphasized aspects of analysis that are specific to language, namely, what linguistic information is captured in neural networks, which phenomena they are successful at capturing, and where they fail. Many of the analysis methods are general techniques from the larger machine learning community, such as visualization via saliency measures or evaluation by adversarial examples. But even those sometimes require non-trivial adaptations to work with text input. Some methods are more specific to the field, but may prove useful in other domains. Challenge sets or test suites are such a case.
Throughout this survey, we have identified several limitations or gaps in current analysis work:
• The use of auxiliary classification tasks for identifying which linguistic properties neural networks capture has become standard practice (Section 2), while lacking both a theoretical foundation and a careful empirical examination of the link between the auxiliary tasks and the original task.
• Evaluation of analysis work is often limited or qualitative, especially in visualization techniques (Section 3). Newer forms of evaluation are needed for determining the success of different methods.
• Relatively little work has been done on explaining predictions of neural network models, apart from providing visualizations (Section 6). With the increasing public demand for explaining algorithmic choices in machine learning systems (Doshi-Velez and Kim, 2017; Doshi-Velez et al., 2017), there is a pressing need for progress in this direction.
• Much of the analysis work is focused on the English language, especially in constructing challenge sets for various tasks (Section 4), with the exception of MT due to its inherent multilingual character. Developing resources and evaluating methods on other languages is important as the field grows and matures.
• More challenge sets for evaluating other tasks besides NLI and MT are needed.
Finally, as with any survey in a rapidly evolving field, this paper is likely to omit relevant recent work by the time of publication. While we intend to continue updating the online appendix with newer publications, we hope that our summarization of prominent analysis work and its categorization into several themes will be a useful guide for scholars interested in analyzing and understanding neural networks for NLP.
Figure 1: A heatmap visualizing neuron activations. In this case, the activations capture position in the sentence.
Figure 2: A visualization of attention weights, showing soft alignment between source and target sentences in an NMT model. Reproduced from Bahdanau et al. (2014), with permission.
Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Shiyu Wu, and Xuanjing Huang. 2015. Sentence Modeling with Gated Recursive Neural Network. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 793-798. Association for Computational Linguistics.

Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018b. Recurrent Neural Networks as Weighted Language Recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2261-2271. Association for Computational Linguistics.

Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples. arXiv preprint arXiv:1803.01128v1.

Grzegorz Chrupała, Lieke Gelderloos, and Afra Alishahi. 2017. Representations of language in a model of visually grounded speech signal. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 613-622. Association for Computational Linguistics.

Ondřej Cífka and Ondřej Bojar. 2018. Are BLEU and Meaning Representation in Opposition? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1362-1371. Association for Computational Linguistics.

Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136. Association for Computational Linguistics.

Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, and Stephan Vogel. 2017. Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 142-151. Asian Federation of Natural Language Processing.

Fahim Dalvi, Avery Nortonsmith, D. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, and James Glass. 2019b, January. NeuroX: A Toolkit for Analyzing Individual Neurons in Neural Networks. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI): Demonstrations Track.

Sreerupa Das, C. Lee Giles, and Guo-Zheng Sun. 1992. Learning Context-Free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory. In Proceedings of The Fourteenth Annual Conference of Cognitive Science Society, Indiana University, page 14.

Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, Samuel J. Gershman, and Noah D. Goodman. 2018. Evaluating Compositionality in Sentence Embeddings. arXiv preprint arXiv:1802.04302v2.

Dhanush Dharmaretnam and Alona Fyshe. 2018. The Emergence of Semantics in Neural Network Representations of Visual Information. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 776-780. Association for Computational Linguistics.

Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and Understanding Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150-1159. Association for Computational Linguistics.

Finale Doshi-Velez and Been Kim. 2017. Towards a Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608v2.

Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O'Brien, Stuart Shieber, James Waldo, David Weinberger, and Alexandra Wood. 2017. Accountability of AI Under the Law: The Role of Explanation. Privacy Law Scholars Conference.

Jennifer Drexler and James Glass. 2017. Analysis of Audio-Visual Features for Unsupervised Speech Recognition. In International Workshop on Grounding Language Understanding.

Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018a. On Adversarial Examples for Character-Level Neural Machine Translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 653-663. Association for Computational Linguistics.

Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018b. HotFlip: White-Box Adversarial Examples for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36. Association for Computational Linguistics.

Ali Elkahky, Kellie Webster, Daniel Andor, and Emily Pitler. 2018. A Challenge Set and Methods for Noun-Verb Ambiguity. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2562-2572. Association for Computational Linguistics.

Zied Elloumi, Laurent Besacier, Olivier Galibert, and Benjamin Lecouteux. 2018. Analyzing Learned Representations of a Deep ASR Performance Prediction Model. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 9-15. Association for Computational Linguistics.

Jeffrey L. Elman. 1989. Representation and Structure in Connectionist Models. University of California, San Diego, Center for Research in Language.

Jeffrey L. Elman. 1990. Finding Structure in Time. Cognitive Science, 14(2):179-211.

Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2-3):195-225.

Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134-139. Association for Computational Linguistics.

Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems With Evaluation of Word Embeddings Using Word Similarity Tasks. In Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP.

Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse Overcomplete Word Vector Representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1491-1500. Association for Computational Linguistics.

Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of Neural Models Make Interpretations Difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719-3728. Association for Computational Linguistics.

Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing Search in Context: The Concept Revisited. ACM Transactions on Information Systems, 20(1):116-131.

Catherine L. Harris. 1990. Connectionism and Cognitive Linguistics. Connection Science, 2(1-2):7-33.

David Harwath and James Glass. 2017. Learning Word-Like Units from Joint Audio-Visual Analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 506-517. Association for Computational Linguistics.

Georg Heigold, Günter Neumann, and Josef van Genabith. 2018. How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse? In Proceedings of the 13th Conference of The Association for Machine Translation in the Americas (Volume 1: Research Track), pages 68-79.

Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation. Computational Linguistics, 41(4):665-695.

Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ''diagnostic classifiers'' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.

Pierre Isabelle, Colin Cherry, and George Foster. 2017. A Challenge Set Approach to Evaluating Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2486-2496. Association for Computational Linguistics.

Pierre Isabelle and Roland Kuhn. 2018. A Challenge Set for French->English Machine Translation. arXiv preprint arXiv:1806.02725v2.

Hitoshi Isahara. 1995. JEIDA's test-sets for quality evaluation of MT systems-technical evaluation from the developer's point of view. In Proceedings of MT Summit V.

Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885. Association for Computational Linguistics.

Jean Maillard and Stephen Clark. 2018. Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing and Chart Parsing. In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, pages 13-18. Association for Computational Linguistics.

Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 1-8. Association for Computational Linguistics.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society.

Wasi Uddin Ahmad, Xueying Bai, Zhechao
Huang, Chao Jiang, Nanyun Peng, and Kai-Wei
Chang. 2018. Multi-task Learning for Universal
Sentence Embeddings: A Thorough Evaluation
using Transfer and Auxiliary Tasks. arXiv
preprint arXiv:1804.07911v2.
Afra Alishahi, Marie Barking, and Grzegorz
Chrupała. 2017. Encoding of phonology in a
recurrent neural model of grounded speech.
In Proceedings of the 21st Conference on
Computational Natural Language Learning
(CoNLL 2017), pages 368-378. Association for
Computational Linguistics.
David Alvarez-Melis and Tommi Jaakkola.
2017. A causal framework for explaining the
predictions of black-box sequence-to-sequence
models. In Proceedings of the 2017 Conference
on Empirical Methods in Natural Language
Processing, pages 412-421. Association for
Computational Linguistics.
Moustafa Alzantot, Yash Sharma, Ahmed
Elgohary, Bo-Jhang Ho, Mani Srivastava, and
Kai-Wei Chang. 2018. Generating Natural
Language Adversarial Examples. In Proceed-
ings of the 2018 Conference on Empirical
Methods in Natural Language Processing,
pages 2890-2896. Association for Computa-
tional Linguistics.
Leila Arras, Franziska Horn, Grégoire Montavon,
Klaus-Robert Müller, and Wojciech Samek.
2017a. ''What is relevant in a text document?'':
An interpretable machine learning approach.
PLOS ONE, 12(8):1-23.
Leila Arras, Grégoire Montavon, Klaus-Robert
Müller, and Wojciech Samek. 2017b. Explain-
ing Recurrent Neural Network Predictions in
Sentiment Analysis. In Proceedings of the
8th Workshop on Computational Approaches
to Subjectivity, Sentiment and Social Media
Analysis, pages 159-168. Association for
Computational Linguistics.
Mikel Artetxe, Gorka Labaka, Inigo Lopez-
Gazpio, and Eneko Agirre. 2018. Uncovering
Divergent Linguistic Information in Word
Embeddings with Lessons for Intrinsic and
Extrinsic Evaluation. In Proceedings of the
22nd Conference on Computational Natural
Language Learning, pages 282-291. Associa-
tion for Computational Linguistics.
Malika Aubakirova and Mohit Bansal. 2016. Inter-
preting Neural Networks to Improve Politeness
Comprehension. In Proceedings of the 2016
Conference on Empirical Methods in Natu-
ral Language Processing, pages 2035-2041.
Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
Bengio. 2014. Neural Machine Translation by
Jointly Learning to Align and Translate. arXiv
preprint arXiv:1409.0473v7.
Anthony Bau, Yonatan Belinkov, Hassan Sajjad,
Nadir Durrani, Fahim Dalvi, and James Glass.
2018. Identifying and Controlling Important
Neurons in Neural Machine Translation. arXiv
preprint arXiv:1811.01157v1.
Rachel Bawden, Rico Sennrich, Alexandra Birch,
and Barry Haddow. 2018. Evaluating Discourse
Phenomena in Neural Machine Translation. In
Proceedings of the 2018 Conference of the
North American Chapter of the Association
for Computational Linguistics: Human Lan-
guage Technologies, Volume 1 (Long Papers),
pages 1304-1313. Association for Computa-
tional Linguistics.
Yonatan Belinkov. 2018. On Internal Language
Representations in Deep Learning: An Analy-
sis of Machine Translation and Speech Recog-
nition. Ph.D. thesis, Massachusetts Institute of
Technology.
Yonatan Belinkov and Yonatan Bisk. 2018. Syn-
thetic and Natural Noise Both Break Neural
Machine Translation. In International Confer-
ence on Learning Representations (ICLR).
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi,
Hassan Sajjad, and James Glass. 2017a.
What do Neural Machine Translation Models
Learn about Morphology? In Proceedings of
the 55th Annual Meeting of the Association
for Computational Linguistics (Volume 1:
Long Papers), pages 861-872. Association for
Computational Linguistics.
Yonatan Belinkov and James Glass. 2017, Anal-
yzing Hidden Representations in End-to-End
Automatic Speech Recognition Systems, I. Guyon,
U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus,
S. Vishwanathan, and R. Garnett, editors, Ad-
vances in Neural Information Processing Sys-
tems 30, pages 2441-2451. Curran Associates,
Inc.
Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad,
Nadir Durrani, Fahim Dalvi, and James Glass.
2017b. Evaluating Layers of Representation in
Neural Machine Translation on Part-of-Speech
and Semantic Tagging Tasks. In Proceedings
of the Eighth International Joint Conference
on Natural Language Processing (Volume 1:
Long Papers), pages 1-10. Asian Federation of
Natural Language Processing.
Jean-Philippe Bernardy. 2018. Can Recurrent
Neural Networks Learn Nested Recursion?
LiLT (Linguistic Issues in Language Tech-
nology), 16(1).
Arianna Bisazza and Clara Tump. 2018. The Lazy
Encoder: A Fine-Grained Analysis of the Role
of Morphology in Neural Machine Translation.
In Proceedings of the 2018 Conference on
Empirical Methods in Natural Language
Processing, pages 2871-2876. Association for
Computational Linguistics.
Terra Blevins, Omer Levy, and Luke Zettlemoyer.
2018. Deep RNNs Encode Soft Hierarchi-
cal Syntax. In Proceedings of the 56th Annual
Meeting of the Association for Computa-
tional Linguistics (Volume 2: Short Papers),
pages 14-19. Association for Computational
Linguistics.
Mikael Bodén and Janet Wiles. 2002. On learning
context-free and context-sensitive languages.
IEEE Transactions on Neural Networks, 13(2):
491-493.
Samuel R. Bowman, Gabor Angeli, Christopher
Potts, and Christopher D. Manning. 2015. A
large annotated corpus for learning natural
language inference. In Proceedings of the
2015 Conference on Empirical Methods in
Natural Language Processing, pages 632-642.
Association for Computational Linguistics.
Elia Bruni, Gemma Boleda, Marco Baroni,
and Nam Khanh Tran. 2012. Distributional
Semantics in Technicolor. In Proceedings of
the 50th Annual Meeting of the Association
for Computational Linguistics (Volume 1:
Long Papers), pages 136-145. Association for
Computational Linguistics.
Gino Brunner, Yuyi Wang, Roger Wattenhofer,
and Michael Weigelt. 2017. Natural Language
Multitasking: Analyzing and Improving Syn-
tactic Saliency of Hidden Representations. The
31st Annual Conference on Neural Information
Processing (NIPS)-Workshop on Learning
Disentangled Features: From Perception to
Control.
Aljoscha Burchardt, Vivien Macketanz, Jon
Dehdari, Georg Heigold, Jan-Thorsten Peter,
and Philip Williams. 2017. A Linguistic
Evaluation of Rule-Based, Phrase-Based, and
Neural MT Engines. The Prague Bulletin of
Mathematical Linguistics, 108(1):159-170.
Franck Burlot and François Yvon. 2017.
Evaluating the morphological competence of
Machine Translation Systems. In Proceedings
of the Second Conference on Machine Trans-
lation, pages 43-55. Association for Compu-
tational Linguistics.
Mike Casey. 1996. The Dynamics of Discrete-
Time Computation, with Application to Re-
current Neural Networks and Finite State
Machine Extraction. Neural Computation,
8(6):1135-1178.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo
Lopez-Gazpio, and Lucia Specia. 2017.
SemEval-2017 Task 1: Semantic Textual Sim-
ilarity Multilingual and Crosslingual Focused
Evaluation. In Proceedings of the 11th Inter-
national Workshop on Semantic Evaluation
(SemEval-2017), pages 1-14. Association for
Computational Linguistics.
Rahma Chaabouni, Ewan Dunbar, Neil Zeghidour,
and Emmanuel Dupoux. 2017. Learning weakly
supervised multimodal phoneme embeddings.
In Interspeech 2017.
Stephan K. Chalup and Alan D. Blair. 2003.
Incremental Training of First Order Recurrent
Neural Networks to Predict a Context-Sensitive
Language. Neural Networks, 16(7):955-972.
Jonathan Chang, Sean Gerrish, Chong Wang,
Jordan L. Boyd-graber, and David M. Blei.
2009, Reading Tea Leaves: How Humans Inter-
pret Topic Models, Y. Bengio, D. Schuurmans,
J. D. Lafferty, C. K. I. Williams, and A. Culotta,
editors, Advances in Neural Information Pro-
cessing Systems 22, pages 288-296, Curran
Associates, Inc..
Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng
Yi, and Cho-Jui Hsieh. 2018a. Attacking visual
language grounding with adversarial examples:
A case study on neural image captioning. In
Proceedings of the 56th Annual Meeting of
the Association for Computational Linguistics
(Volume 1: Long Papers), pages 2587-2597.
Association for Computational Linguistics.
For instance, a neural network that learns distributed representations of words was developed already in Miikkulainen and Dyer (1991). See Goodfellow et al. (2016, chapter 12.4) for references to other important milestones.

A similar method has been used to analyze hierarchical structure in neural networks trained on arithmetic expressions (Veldhoen et al., 2016; Hupkes et al., 2018).

Others found that even simple binary trees may work well in MT (Wang et al., 2018b) and sentence classification (Chen et al., 2015).

Generally, many of the visualization methods are adapted from the vision domain, where they have been extremely popular; see Zhang and Zhu (2018) for a survey.

RNNVis (Ming et al., 2017) is a similar tool, but its online demo does not seem to be available at the time of writing.

Their dataset does not seem to be available yet, but more details are promised to appear in a future publication.

also verified that their examples do not contain annotation artifacts, a potential problem noted in recent studies (Gururangan et al., 2018; Poliak et al., 2018b).

The notation here follows Yuan et al. (2017).

These criteria are partly taken from Yuan et al. (2017), where a more elaborate taxonomy is laid out. At present, though, the work on adversarial examples in NLP is more limited than in computer vision, so our criteria will suffice.

Other work considered learning textual-visual explanations from multimodal annotations (Park et al., 2018).

The methodology follows earlier work on evaluating the interpretability of probabilistic topic models with intrusion tasks (Chang et al., 2009).
Robert Frank, Donald Mathis, and William Badecker. 2013. The Acquisition of Anaphora by Simple Recurrent Networks. Language Acquisition, 20(3):181-227.
Acknowledgments

We would like to thank the anonymous reviewers and the action editor for their very helpful comments. This work was supported by the Qatar Computing Research Institute. Y.B. is also supported by the Harvard Mind, Brain, Behavior Initiative.

Alon Jacovi, Oren Sar Shalom, and Yoav
Analysis of sentence embedding models using prediction tasks in natural language processing. Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg, IBM Journal of Research and Development. 614Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017a. Anal- ysis of sentence embedding models using prediction tasks in natural language processing. IBM Journal of Research and Development, 61(4):3-9.
Fine-Grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg, International Conference on Learning Representations. ICLRYossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine- Grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. In Interna- tional Conference on Learning Representations (ICLR).
Morphological Inflection Generation with Hard Monotonic Attention. Roee Aharoni, Yoav Goldberg, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics1Roee Aharoni and Yoav Goldberg. 2017. Morphological Inflection Generation with Hard Monotonic Attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004-2015. Association for Computational Linguistics.
Cynthia Freeman, Jonathan Merriman, Abhinav Aggarwal, Ian Beaver, Abdullah Mueen, arXiv:1808.02113v1Paying Attention to Attention: Highlighting Influential Samples in Sequential Analysis. arXiv preprintCynthia Freeman, Jonathan Merriman, Abhinav Aggarwal, Ian Beaver, and Abdullah Mueen. 2018. Paying Attention to Attention: High- lighting Influential Samples in Sequential Analysis. arXiv preprint arXiv:1808.02113v1.
A Compositional and Interpretable Semantic Space. Alona Fyshe, Leila Wehbe, Partha P Talukdar, Brian Murphy, Tom M Mitchell, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsAlona Fyshe, Leila Wehbe, Partha P. Talukdar, Brian Murphy, and Tom M. Mitchell. 2015. A Compositional and Interpretable Semantic Space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 32-41. Association for Computational Linguistics.
What's Going On in Neural Constituency Parsers? An Analysis. David Gaddy, Mitchell Stern, Dan Klein, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1David Gaddy, Mitchell Stern, and Dan Klein. 2018. What's Going On in Neural Constituency Parsers? An Analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 999-1010. Association for Computational Linguistics.
Interpretation of Semantic Tweet Representations. J Ganesh, Manish Gupta, Vasudeva Varma, Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17. the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17New York, NY, USAACMJ. Ganesh, Manish Gupta, and Vasudeva Varma. 2017. Interpretation of Semantic Tweet Representations. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17, pages 95-102, New York, NY, USA. ACM.
Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. Ji Gao, Jack Lanchantin, Mary Lou Soffa, Yanjun Qi, arXiv:1801.04354v5arXiv preprintJi Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. arXiv preprint arXiv: 1801.04354v5.
From phonemes to images: Levels of representation in a recurrent neural model of visuallygrounded language learning. Lieke Gelderloos, Grzegorz Chrupała, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanThe COLING 2016 Organizing CommitteeLieke Gelderloos and Grzegorz Chrupała. 2016. From phonemes to images: Levels of repre- sentation in a recurrent neural model of visually- grounded language learning. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1309-1319, Osaka, Japan, The COLING 2016 Organizing Committee.
LSTM Recurrent Networks Learn Simple Context-Free and Context-Sensitive Languages. A Felix, Jürgen Gers, Schmidhuber, IEEE Transactions on Neural Networks. 126Felix A. Gers and Jürgen Schmidhuber. 2001. LSTM Recurrent Networks Learn Simple Context-Free and Context-Sensitive Languages. IEEE Transactions on Neural Networks, 12(6): 1333-1340.
SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity. Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, Anna Korhonen, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsDaniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182. Association for Computational Linguistics.
What does Attention in Neural Machine Translation Pay Attention to?. Hamidreza Ghader, Christof Monz, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingLong Papers1Asian Federation of Natural Language ProcessingHamidreza Ghader and Christof Monz. 2017. What does Attention in Neural Machine Translation Pay Attention to? In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 30-39. Asian Federation of Natural Language Processing.
Interpreting Recurrent and Attention-Based Neural Models: A Case Study on Natural Language Inference. Reza Ghaeini, Xiaoli Fern, Prasad Tadepalli, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsReza Ghaeini, Xiaoli Fern, and Prasad Tadepalli. 2018. Interpreting Recurrent and Attention- Based Neural Models: A Case Study on Nat- ural Language Inference. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 4952-4957. Association for Computational Linguistics.
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, Willem Zuidema, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLPAssociation for Computational LinguisticsMario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve How Language Models Track Agreement Information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240-248. Association for Computational Linguistics.
Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. Max Glockner, Vered Shwartz, Yoav Goldberg, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics2Short Papers)Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Infer- ences. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 650-655. Association for Computational Linguistics.
Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?. Fréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, Thomas Demeester, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsFréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, and Thomas Demeester. 2018. Explaining Character-Aware Neural Net- works for Word-Level Prediction: Do They Dis- cover Linguistic Rules? In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 3275-3284. Association for Computational Linguistics.
Neural Network methods for Natural Language Processing. Yoav Goldberg, Synthesis Lectures on Human Language Technologies. 10Morgan & Claypool PublishersYoav Goldberg. 2017. Neural Network methods for Natural Language Processing, volume 10 of Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.
Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning. MIT PressIan Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning, MIT Press. http://www.deepleaningbook.org.
Generative Adversarial Nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Advances in Neural Information Processing Systems. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems, pages 2672-2680.
Explaining and Harnessing Adversarial Examples. Ian J Goodfellow, Jonathon Shlens, Christian Szegedy, International Conference on Learning Representations. ICLRIan J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In International Con- ference on Learning Representations (ICLR).
Colorless Green Recurrent Networks Dream Hierarchically. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless Green Recurrent Networks Dream Hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205. Association for Computational Linguistics.
Distributional vectors encode referential attributes. Abhijeet Gupta, Gemma Boleda, Marco Baroni, Sebastian Padó, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsAbhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Padó. 2015. Distributional vectors encode referential attributes. In Pro- ceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 12-21. Association for Computational Linguistics.
LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation. Pankaj Gupta, Hinrich Schütze, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLPAssociation for Computational LinguisticsPankaj Gupta and Hinrich Schütze. 2018. LISA: Explaining Recurrent Neural Network Judg- ments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 154-164. Association for Computational Linguistics.
Annotation Artifacts in Natural Language Inference Data. Swabha Suchin Gururangan, Omer Swayamdipta, Roy Levy, Samuel Schwartz, Noah A Bowman, Smith, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies2Short PapersSuchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers),
Translation Systems. Papers Presented to the 13th International Conference on Computational Linguistics. 2COLNGTranslation Systems. In COLNG 1990 Volume 2: Papers Presented to the 13th International Conference on Computational Linguistics.
Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations. Eliyahu Kiperwasser, Yoav Goldberg, Transactions of the Association for Computational Linguistics. 4Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Represen- tations. Transactions of the Association for Computational Linguistics, 4:313-327.
A test suite for evaluation of English-to-Korean machine translation systems. Sungryong Koh, Jinee Maeng, Ji-Young Lee, Young-Sook Chae, Key-Sun Choi, MT Summit Conference. Sungryong Koh, Jinee Maeng, Ji-Young Lee, Young-Sook Chae, and Key-Sun Choi. 2001. A test suite for evaluation of English-to-Korean machine translation systems. In MT Summit Conference.
What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation. Arne Köhn, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsArne Köhn. 2015. What's in an Embedding? Analyzing Word Embeddings through Multi- lingual Evaluation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 2067-2073, Lisbon, Portugal. Association for Computa- tional Linguistics.
Adversarial Examples for Natural Language Classification Problems. Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, Stefano Ermon, Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial Examples for Natural Language Classification Problems.
Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. Brenden Lake, Marco Baroni, PMLRProceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningStockholmsmässan, Stockholm, Sweden80Brenden Lake and Marco Baroni. 2018. Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Ma- chine Learning Research, pages 2873-2882, Stockholmsmässan, Stockholm, Sweden. PMLR.
TSNLP-Test Suites for Natural Language Processing. Sabine Lehmann, Stephan Oepen, Sylvie Regnier-Prost, Klaus Netter, Veronika Lux, Judith Klein, Kirsten Falkedal, Frederik Fouvry, Dominique Estival, Eva Dauphin, Herve Compagnion, Judith Baur, Lorna Balkan, Doug Arnold, The 16th International Conference on Computational Linguistics. 2Sabine Lehmann, Stephan Oepen, Sylvie Regnier- Prost, Klaus Netter, Veronika Lux, Judith Klein, Kirsten Falkedal, Frederik Fouvry, Dominique Estival, Eva Dauphin, Herve Compagnion, Judith Baur, Lorna Balkan, and Doug Arnold. 1996. TSNLP-Test Suites for Natural Language Processing. In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics.
Rationalizing Neural Predictions. Tao Lei, Regina Barzilay, Tommi Jaakkola, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsTao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117. Association for Computational Linguistics.
Ira Leviant, Roi Reichart, arXiv:1508.00106v5Separated by an Un-Common Language: Towards Judgment Language Informed Vector Space Modeling. arXiv preprintIra Leviant and Roi Reichart. 2015. Separated by an Un-Common Language: Towards Judgment Language Informed Vector Space Modeling. arXiv preprint arXiv:1508.00106v5.
Visualizing and Understanding Neural Models in NLP. Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsJiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and Under- standing Neural Models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691. Association for Computational Linguistics.
Jiwei Li, Will Monroe, Dan Jurafsky, arXiv:1612.08220v3Understanding Neural Networks through Representation Erasure. arXiv preprintJiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding Neural Networks through Representation Erasure. arXiv preprint arXiv: 1612.08220v3.
Deep Text Classification Can Be Fooled. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18. the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep Text Classification Can Be Fooled. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4208-4215. International Joint Conferences on Artificial Intelligence Organization.
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Tal Linzen, Emmanuel Dupoux, Yoav Goldberg, Transactions of the Association for Computational Linguistics. 4Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Depen- dencies. Transactions of the Association for Computational Linguistics, 4:521-535.
The Mythos of Model Interpretability. Zachary C Lipton, ICML Workshop on Human Interpretability of Machine Learning. Zachary C. Lipton. 2016. The Mythos of Model Interpretability. In ICML Workshop on Human Interpretability of Machine Learning.
LSTMs Exploit Linguistic Attributes of Data. Nelson F Liu, Omer Levy, Roy Schwartz, Chenhao Tan, Noah A Smith, Proceedings of The Third Workshop on Representation Learning for NLP. The Third Workshop on Representation Learning for NLPAssociation for Computational LinguisticsNelson F. Liu, Omer Levy, Roy Schwartz, Chenhao Tan, and Noah A. Smith. 2018. LSTMs Exploit Linguistic Attributes of Data. In Proceedings of The Third Workshop on Rep- resentation Learning for NLP, pages 180-186. Association for Computational Linguistics.
Delving into Transferable Adversarial Examples and Black-Box Attacks. Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song, International Conference on Learning Representations. ICLRYanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into Transferable Adversarial Examples and Black-Box Attacks. In International Conference on Learning Representations (ICLR).
Better Word Representations with Recursive Neural Networks for Morphology. Thang Luong, Richard Socher, Christopher Manning ; Jason Naradowsky, Brian Leonard, Benjamin Van Durme, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics2Proceedings of the Seventeenth Rachel Rudinger. Short PapersThang Luong, Richard Socher, and Christopher Manning. 2013. Better Word Representations with Recursive Neural Networks for Mor- phology. In Proceedings of the Seventeenth Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias in Coreference Resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14. Association for Computational Linguistics.
Parallel Distributed Processing: Explorations in the Microstructure of Cognition. D E Rumelhart, J L Mcclelland, chapter On Leaning the Past Tenses of English Verbs. Cambridge, MA, USAMIT Press2D. E. Rumelhart and J. L. McClelland. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. volume 2, chapter On Leaning the Past Tenses of English Verbs, pages 216-271. MIT Press, Cambridge, MA, USA.
A Neural Attention Model for Abstractive Sentence Summarization. Alexander M Rush, Sumit Chopra, Jason Weston, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsAlexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Pro- cessing, pages 379-389. Association for Computational Linguistics.
Robsut Wrod Reocginiton via Semi-Character Recurrent Neural Network. Keisuke Sakaguchi, Kevin Duh, Matt Post, Benjamin Van Durme, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. the Thirty-First AAAI Conference on Artificial IntelligenceSan Francisco, California, USA.AAAI PressKeisuke Sakaguchi, Kevin Duh, Matt Post, and Benjamin Van Durme. 2017. Robsut Wrod Reocginiton via Semi-Character Recurrent Neural Network. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3281-3287. AAAI Press.
. Suranjana Samanta, Sameep Mehta, arXiv:1707.02812v1Towards Crafting Text Adversarial Samples. arXiv preprintSuranjana Samanta and Sameep Mehta. 2017. Towards Crafting Text Adversarial Samples. arXiv preprint arXiv:1707.02812v1.
| [] |
[
"BERT WEAVER: USING WEIGHT AVERAGING TO ENABLE LIFELONG LEARNING FOR TRANSFORMER-BASED MODELS IN THE BIOMEDICAL DOMAIN",
"BERT WEAVER: USING WEIGHT AVERAGING TO ENABLE LIFELONG LEARNING FOR TRANSFORMER-BASED MODELS IN THE BIOMEDICAL DOMAIN"
] | [
"Lisa Kühnel kuehnel@zbmed.de ",
"Alexander Schulz aschulz@techfak.uni-bielefeld.de ",
"Barbara Hammer bhammer@techfak.uni-bielefeld.de ",
"Juliane Fluck fluck@zbmed.de ",
"\nGraduate School DILS\nBielefeld Institute for Bioinformatics Infrastructure (BIBI)\nFaculty of Technology\nZB MED -Information Centre for Life Sciences Cologne\nGermany\n",
"\nBielefeld University\nBielefeldGermany\n",
"\nCITEC\nBielefeld University\nBielefeldGermany\n",
"\nZB MED -Information Centre for Life Sciences Cologne\nCITEC\nBielefeld University\nBielefeldGermany, Germany\n",
"\nUniversity of Bonn\nBonnGermany\n"
] | [
"Graduate School DILS\nBielefeld Institute for Bioinformatics Infrastructure (BIBI)\nFaculty of Technology\nZB MED -Information Centre for Life Sciences Cologne\nGermany",
"Bielefeld University\nBielefeldGermany",
"CITEC\nBielefeld University\nBielefeldGermany",
"ZB MED -Information Centre for Life Sciences Cologne\nCITEC\nBielefeld University\nBielefeldGermany, Germany",
"University of Bonn\nBonnGermany"
] | [] | Recent developments in transfer learning have boosted the advancements in natural language processing tasks. The performance is, however, dependent on high-quality, manually annotated training data. Especially in the biomedical domain, it has been shown that one training corpus is not enough to learn generic models that are able to efficiently predict on new data. Therefore, state-of-the-art models need the ability of lifelong learning in order to improve performance as soon as new data are available -without the need of re-training the whole model from scratch. We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model, thereby reducing catastrophic forgetting. We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once, while being computationally more efficient. Because there is no need of data sharing, the presented method is also easily applicable to federated learning settings and can for example be beneficial for the mining of electronic health records from different clinics. | null | [
"https://export.arxiv.org/pdf/2202.10101v2.pdf"
] | 247,012,007 | 2202.10101 | b8b88a01d6b7f9d17c4f7cc5bb9d558192e39832 |
BERT WEAVER: USING WEIGHT AVERAGING TO ENABLE LIFELONG LEARNING FOR TRANSFORMER-BASED MODELS IN THE BIOMEDICAL DOMAIN
Lisa Kühnel kuehnel@zbmed.de
Alexander Schulz aschulz@techfak.uni-bielefeld.de
Barbara Hammer bhammer@techfak.uni-bielefeld.de
Juliane Fluck fluck@zbmed.de
Graduate School DILS
Bielefeld Institute for Bioinformatics Infrastructure (BIBI)
Faculty of Technology
ZB MED -Information Centre for Life Sciences Cologne
Germany
Bielefeld University
BielefeldGermany
CITEC
Bielefeld University
BielefeldGermany
ZB MED -Information Centre for Life Sciences Cologne
CITEC
Bielefeld University
BielefeldGermany, Germany
University of Bonn
BonnGermany
Text Mining · BioNLP · BERT · Continual Learning · Federated Learning
Recent developments in transfer learning have boosted the advancements in natural language processing tasks. The performance is, however, dependent on high-quality, manually annotated training data. Especially in the biomedical domain, it has been shown that one training corpus is not enough to learn generic models that are able to efficiently predict on new data. Therefore, state-of-the-art models need the ability of lifelong learning in order to improve performance as soon as new data are available -without the need of re-training the whole model from scratch. We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model, thereby reducing catastrophic forgetting. We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once, while being computationally more efficient. Because there is no need of data sharing, the presented method is also easily applicable to federated learning settings and can for example be beneficial for the mining of electronic health records from different clinics.
Introduction
The amount of literature in the medical domain is increasing enormously, which emphasises the need for text mining-based solutions that automatically extract relevant information. Named entity recognition (NER) is an important natural language processing (NLP) task whose aim is to find entity classes, such as specific diseases, in unstructured text. As the amount of data for a specific setup is usually limited, transfer learning-based models have recently been shown to achieve state-of-the-art results in many NLP tasks, including NER [1]. In particular, transformer-based models such as BERT [2] show promising results on benchmark tasks [3]. In the biomedical domain, BioBERT [4] shows state-of-the-art performance for several NER tasks, such as disease recognition, with promising F1-scores (above 84%) on the available data sets.
Based on the use case of disease NER, we recently showed that models trained on one available data set are not able to predict efficiently on another data set, even when it follows the same annotation guidelines [5]. This holds not only for transformer-based models such as BioBERT but also for other machine learning-based models such as convolutional neural networks or conditional random fields. In our previous study, based on five different manually labelled data sets, we showed that the performance of a model trained on one of these corpora drops by up to 20% in terms of F1-score when predicting on another corpus. This significant drop in performance indicates that the training data is either too small or not representative compared to a random PubMed corpus. One reason is that specific corpora are often comparably small, such that small differences between those data sets are mapped to differences in embeddings and the corresponding NER downstream tasks. Therefore, in order to use these models in real-world applications such as semantic search engines, it is advisable to improve the models as soon as new annotated data are available, to obtain optimum performance. This process is known as lifelong learning or, equivalently, continual learning (CL), meaning that a model is sequentially re-trained in a so-called online fashion [6]. In such settings, however, a phenomenon called catastrophic forgetting easily occurs [7]: the model becomes biased towards the last data set and forgets previously learned structures.
A lot of research has been done in the area of continual learning to prevent a model from forgetting. One of the most prominent approaches is called Elastic Weight Consolidation (EWC) proposed by Kirkpatrick et al. [8]. It is a regularization-based technique that basically quantifies the importance of weights and thereby impedes important weights from being changed drastically. It has been successfully applied for an online personalization of speech recognition systems, as an example [9]. Based on EWC, Liu et al. proposed an extension that makes use of a network reparameterization that basically rotates the parameter space [10]. More recently, Aljundi et al. proposed Memory Aware Synapses (MAS) -a method that, given a new sample, measures the importance of each parameter of the network by taking the sensitivity of the predicted output towards a change in that parameter into account [11].
Next to regularization-based techniques, (pseudo-)rehearsal-based approaches have been proposed, e.g. [12,13]. Rehearsal means that a subset of previously seen data is combined with the new data. Since the old data are not always available, these methods often include a generator network that generates new data based on the previously seen data set; this is also often called silver standard or replay buffer. These data are then mixed with new data to re-train the model. For rehearsal-based methods, research has been done on how best to select the replay buffer for an efficient training, e.g. gradient-based selection has been proposed, known under the abbreviation GEM -Gradient Episodic Memory -where several algorithms and extensions have been proposed recently, e.g. [14,15,16]. Experience replay is another rehearsal-based approach, for example investigated by [17].
In addition, promising methods exist where new parameters are added to the model for each new task that is learned, such as proposed by Fayek et al. [18]. Moreover, dual-memory-based methods are applied where two different networks are used -one for memorising already learned information and one for learning new tasks -such as shown by Hattori [19] or Park [20]. The architecture proposed by Hattori is strongly inspired by biological processes in the brain, making use of so-called hippocampal and neocortical networks. In contrast, Park implemented a dual network architecture based on state-of-the-art transformers. Further transformer-based lifelong learning algorithms include Memory-based parameter adaptation (MbPA++) [21], Language Modeling for Lifelong Language Learning (LAMOL) [22] and its extension Lifelong Learning Knowledge Distillation (L2KD) [23]. They all belong to the category of rehearsal-based techniques. Whereas the latter two simultaneously train a learner and a generator network, MbPA++ uses sparse experience replay and local adaptation. Moreover, Houlsby et al. investigated Adapters, which are transformer-based modules exploiting a different learning procedure than the usual fine-tuning [24]. Apart from being more parameter-efficient, these Adapters can be used for sequential learning settings as they can be trained individually and then be "fused" together [25]. Zhang et al. build upon Adapters, but instead of training a new module for each task, the authors developed an algorithm that decides whether a completely new module needs to be trained or whether an already trained one can be re-used [26]. In addition, the authors apply pseudo experience replay according to [22]. Several overview articles structure and compare online learning methods and their suitability in various domains [27,28].
Still, it has not yet been investigated how suitable such methods are for NER tasks in the biomedical domain, where the annotated data sets are comparably small and the language is very complex, and whether they can support federated learning schemes [29] in which data sharing should be avoided. In addition, all of the mentioned methods learn a series of different tasks. Even though the tasks can be similar, as for example investigated by [22,26], none of the studies aim to sequentially improve on the same task as soon as new data are available -without forgetting what has been learned previously. This, however, can be of great importance for real-world applications where integrated text mining methods need to be improved as soon as new data are available. Our recently developed semantic search engine preVIEW COVID-19 is one example of this [30,31]. Several text mining-based components are integrated into this service, for example for the recognition of disease names or virus proteins. Moreover, we integrated a feedback button where users (mainly information specialists) can give us direct feedback on the annotations [32]. With this feedback, we can generate new data sets and improve the included models. To do this efficiently, a lifelong learning capacity is essential.
In this work, we present a new continual learning method, called WEAVER, that can be applied to transformer-based models and exploits parts of the federated averaging algorithm (FedAvg) known from federated learning approaches [33]. With this method, previously used data are not required when training on a new task, the model structure does not need to be changed, and the process is computationally efficient. For the real-world applications described above, the model can therefore be re-trained efficiently as soon as new data are available. Moreover, as previous data do not need to be available in one place for incremental learning, the method can also be used for federated learning approaches, which is particularly important in the medical domain. In each clinic or institution, a model can be trained using the data available on-site, and the weights of the trained model can then be passed to another site where the model is re-trained and WEAVER is applied. This results in a model that is trained on larger data sets without the need for data sharing. There is also no need for a central server where a model is built -instead, the model is passed sequentially through all the institutions.
Model
For our proposed continual learning procedure, we exploit a mechanism originally used in federated learning settings, where models are trained at different places using different data sets, mostly due to data privacy concerns [29]. After training these models individually, their weights are passed to a central server and averaged in relation to the amount of training data they were trained on -hence, the more data were available, the more influence the corresponding model has on the final model [33]. The objective of the target model is:
f(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w) \quad (1)
Here, K is the number of clients, i.e. the number of models that were trained, n is the total amount of training data, n_k is the amount of data of the k-th client, and F_k(w) is the client's loss function. As shown in [33], this objective results in weight averaging for convex costs. Based on this, we developed the following procedure: for the first model trained on a given task, we initialise a pre-trained BioBERT model and fine-tune it in the usual manner. As soon as new data become available, we fine-tune the already trained model again using the new data set. In a post-processing step, the weights of the old and the new model are then averaged, taking the amount of training data into account. If a second model is trained on top of the first one, the total amount of training data is the sum of the two data sets. For every further data set, either a new pre-trained model can be initialised or the already fine-tuned model is fine-tuned again and afterwards combined in the same way. A simplified overview of the continual learning procedure is shown in Fig. 1.
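The post-processing step itself only manipulates the parameter tensors of the two checkpoints. The following is a minimal PyTorch sketch of how such a size-weighted average could be implemented for two Hugging Face token-classification models of identical architecture; the function name and the usage comments are illustrative assumptions and do not reproduce the released WEAVER code.

```python
def weaver_average(prev_model, new_model, n_prev, n_new):
    """Size-weighted average of two models with identical architecture (Eq. 1):
    the more training data a model has seen, the more it influences the result."""
    total = n_prev + n_new
    prev_state = prev_model.state_dict()
    averaged = {}
    for name, param in new_model.state_dict().items():
        if param.dtype.is_floating_point:
            averaged[name] = (n_prev / total) * prev_state[name] + (n_new / total) * param
        else:
            # integer buffers (e.g. position ids) are copied unchanged
            averaged[name] = param
    new_model.load_state_dict(averaged)
    return new_model

# Hypothetical sequential use: model_a is fine-tuned on corpus A (n_a training examples),
# then fine-tuned again on corpus B (n_b examples) to obtain model_b, and finally
# merged = weaver_average(model_a, model_b, n_prev=n_a, n_new=n_b).
# For a third corpus C, n_prev becomes n_a + n_b and model_a is replaced by `merged`.
```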
Experiments
In this section, we first describe the used data sets. Afterwards, the conducted experiments and their implementation details are given. Finally, we describe our evaluation and visualisation strategies in detail.
Used Data Sets
We perform the experiments on three different NER tasks from the biomedical domain. For disease NER, we use five different data sets -four of them have been described in detail in our previous publication [5]. Additionally, we use the plant-disease corpus [34]. For both proteins/genes and chemicals, we rely on six and five different data sets, respectively, described in detail by Crichton et al. [35] and provided under https://github.com/cambridgeltl/MTL-Bioinformatics-2016. An overview of the used data sets can be seen in Table 1.
We simulate a continual learning setting in which different data sets are learned sequentially. As in real-world settings, the data sets differ in size and may slightly differ in the annotation guidelines used. Following [21], we randomly chose four different orders of the data sets for each setup (see Table A1). For training the first model, a transformer-based model such as BERT is initialised and fine-tuned on the available data. To continue training in a sequential manner, the already fine-tuned model is fine-tuned again on a second corpus. To prevent catastrophic forgetting, knowledge of the previous model is infused into the new model by applying weight averaging; the size of the data set an individual model was trained on determines its averaging coefficient, i.e. the bigger the data set, the higher its influence on the new model. This procedure is repeated for every new data set.
Conducted Experiments
As baseline experiments, we train one model individually on each of the available data sets. Each model is then evaluated on all available test data sets for this entity class. For example, a model trained on the NCBI training corpus is evaluated on the corresponding test set but also on the four other test sets (BC5CDR, BioNLP13, miRNA-disease and plant-disease).
We perform the following CL-based experiments to evaluate and compare our developed algorithm.
• FineTune: a standard BERT model that is fine-tuned sequentially on each new data set
• EWC [8]: our own implementation of EWC for NER with transformer-based models
• AdapterFusion [24]: one adapter is trained individually per training data set and the adapters are sequentially fused together
• WEAVER: our model described in Chapter 2
• Replay: the sparse experience replay mechanism as performed by [21]; while fine-tuning BERT models sequentially, we replay 10% of all previously seen data (a minimal sampling sketch is shown after this list)
• MTL: a multi-task upper-bound model trained on all available training data sets simultaneously

Note that Replay and MTL require the data sets to be in one place and therefore only serve as upper-bound methods for comparative reasons in our study. For the same reason, we omit other state-of-the-art transformer-based methods, such as LAMOL or L2KD [22,23]. In addition to the need for data sharing, prediction on new data may also be less efficient due to local adaptation strategies such as in MbPA++ [21], which can be a hindrance for the integration into running services.
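As a small illustration of the Replay baseline, the sketch below shows one way such a sparse replay set could be assembled; the function and variable names are illustrative, and the approach presumes that the previously seen examples are still accessible, which is exactly why Replay only serves as an upper bound here.

```python
import random

def build_replay_training_set(new_data, previously_seen, replay_fraction=0.1, seed=42):
    """Mix a random sample of 10% of all previously seen examples into the
    training set used for the next sequential fine-tuning step."""
    rng = random.Random(seed)
    k = int(len(previously_seen) * replay_fraction)
    replayed = rng.sample(previously_seen, k) if k > 0 else []
    return list(new_data) + replayed
```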
Implementation Details
For all conducted experiments, we build our code upon the Transformers library [45] and use dmis-lab/biobert-base-cased-v1.1 as the pre-trained model. Due to the lack of data, we did not perform hyperparameter optimisation but train the models for three epochs with a batch size of 16 and a learning rate of 3e-5, except for the Adapter-based experiments, where we use a learning rate of 5e-4. We provide our code under https://github.com/llangnickel/WEAVER.
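A minimal sketch of one fine-tuning step with these settings is given below; it assumes an already tokenised and label-aligned train_dataset, and the function and directory names are illustrative rather than taken from the repository.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer, TrainingArguments)

def fine_tune(train_dataset, num_labels, model_name="dmis-lab/biobert-base-cased-v1.1",
              output_dir="./weaver-step"):
    """Fine-tune a (Bio)BERT token-classification model with the hyperparameters
    reported above: 3 epochs, batch size 16, learning rate 3e-5."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=num_labels)
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=3e-5,
        save_strategy="no",
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForTokenClassification(tokenizer),
    )
    trainer.train()
    return model

# For the sequential WEAVER steps, the path of the previously fine-tuned checkpoint
# would be passed as model_name instead of the original BioBERT weights.
```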
Evaluation
To evaluate our methods, we determine precision, recall and F1-score using the following formulas (FP stands for false positive, FN for false negative and TP for true positive).
\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN} \quad (2)

\text{F1-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \quad (3)
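For completeness, a trivial helper computing these metrics from the raw counts could look as follows (illustrative sketch):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1-score from TP/FP/FN counts (Eqs. 2 and 3)."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f1
```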
We determine the averaged precision, recall and F1-score over all test sets after the complete training procedure has been finished, for all different training data set orders. To prove statistical significance, we use the paired t-test and compare WEAVER against the other CL-based methods; the significance level is set to 0.01. Because the averaged F1-score determined at the end of training is not a sufficient measure to judge the training performance, we additionally examine the extent of forgetting when re-training a model in a continual manner by determining the Backward Transfer (BWT) according to [15]. This metric measures the influence that learning a new task has on the previously learned tasks. Accordingly, we determine the Forward Transfer (FWT), which measures the influence of a learned task on future tasks [15]. The corresponding formulas are depicted in Equations 4 and 5, respectively. Note that, in contrast to [15], we use the F1-score instead of the accuracy score; hence, R ∈ ℝ^{T×T} consists of the F1-scores on task t_j after finishing training on task t_i, and b_i denotes the score of a randomly initialised model on task i. Moreover, we plot the extent of forgetting by evaluating the performance on the test set corresponding to the training data set that has been used for the very first model after each re-training.
BWT = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( R_{T,i} - R_{i,i} \right) \quad (4)

FWT = \frac{1}{T-1} \sum_{i=2}^{T} \left( R_{i-1,i} - b_i \right) \quad (5)
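Assuming the evaluation scores are collected in a matrix R as described above (0-indexed here), the two transfer metrics could be computed as in the following NumPy sketch; the function names are illustrative.

```python
import numpy as np

def backward_transfer(R):
    """BWT (Eq. 4): average change in F1 on earlier test sets after training on the
    last data set has finished. R[i, j] is the F1-score on test set j after training
    on data set i (both 0-indexed)."""
    T = R.shape[0]
    return float(np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)]))

def forward_transfer(R, b):
    """FWT (Eq. 5): average F1 on test set i after training on data set i-1,
    relative to the score b[i] of a randomly initialised model."""
    T = R.shape[0]
    return float(np.mean([R[i - 1, i] - b[i] for i in range(1, T)]))
```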
Visualisation Techniques
To visualise the word (i.e. token) embeddings, we apply the dimensionality reduction technique Uniform Manifold Approximation and Projection (UMAP). More specifically, we make use of the python library umap-learn [46]. This allows us to judge whether different data sets are embedded in different regions of the network or whether they share the representation space. We compare the visualisations obtained in the following settings: first, we train different models on different data sets individually, for example on the NCBI training set and the BC5CDR training set in the case of disease NER. Then, we make predictions on these training data sets using the corresponding models and use the word embeddings, which are vectors of length 768. Because this high dimensionality cannot be visualised, we apply UMAP to scale it down to two dimensions. We then colour the embeddings of the different data sets (predicted by the two different models) differently. In addition, we use the baseline model that has been trained on both data sets simultaneously to also make predictions on both of these data sets. Finally, we visualise the word embeddings predicted by a model that has been trained sequentially on the mentioned data sets according to our developed method. Since UMAP preserves cluster structures, this enables us to judge differences or overlaps between the embeddings of different sets.
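A minimal sketch of this embedding extraction and projection is shown below; the model path and the example sentence are illustrative placeholders.

```python
import numpy as np
import torch
import umap
from transformers import AutoModel, AutoTokenizer

def token_embeddings(model_path, sentences):
    """Collect last-layer token embeddings (768-dimensional vectors) for a list of sentences."""
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModel.from_pretrained(model_path)
    model.eval()
    vectors = []
    with torch.no_grad():
        for sentence in sentences:
            inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
            hidden = model(**inputs).last_hidden_state[0]  # shape: (seq_len, 768)
            vectors.append(hidden.numpy())
    return np.concatenate(vectors, axis=0)

# Project to two dimensions for plotting; points can then be coloured by the
# data set they originate from, as in Figure 4.
# embeddings = token_embeddings("path/to/fine-tuned-model", ["Parkinson disease is ..."])
# embeddings_2d = umap.UMAP(n_components=2).fit_transform(embeddings)
```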
Ablation Study
Several studies show that end-to-end fine-tuning is not necessary because only the final top layers are decisive for the downstream task. Accordingly, we freeze the first eight layers, as suggested by [47] and [48], to investigate whether weight averaging can be reduced to the top four layers.
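A sketch of this partial freezing for a BERT-style token-classification model is given below; it assumes the encoder is exposed as model.bert, as in the Transformers BertForTokenClassification class, and the helper name is illustrative.

```python
def freeze_lower_layers(model, num_frozen=8):
    """Freeze the embedding layer and the first `num_frozen` encoder layers so that
    only the top layers and the classification head are updated during fine-tuning."""
    for param in model.bert.embeddings.parameters():
        param.requires_grad = False
    for layer in model.bert.encoder.layer[:num_frozen]:
        for param in layer.parameters():
            param.requires_grad = False
    return model
```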
Results
In the following section, we first describe the results of the models trained on a single corpus. Afterwards, the results for the simulated continual learning setting are given. Finally, we depict and discuss the UMAP visualisations of the word embeddings as proof-of-concept. The results of the performed ablation study can be found in the Appendix in Table A2.
Single Model Training and Cross-evaluation (Baseline)
For every entity class, several manually annotated data sets are available. As a first step, we train a BioBERT model on each of the data sets individually and evaluate the model on all available test sets for the given entity class. We visualise the results in a heatmap in Figure 2. Each trained model achieves its highest score on the test set that belongs to the training set it was trained on, and shows significant drops in performance when evaluated on the other test sets. For example, in the case of disease entity recognition (see Fig. 2a), the model trained on the NCBI disease corpus achieves an F1-score of 83% on the corresponding test set, but drops to 66% and 65% for the BC5CDR and miRNA-disease corpus, respectively. The same phenomenon can be seen for all data sets across all three tasks (diseases, genes/proteins and chemicals). These results underline the fact that, even for the same task, trained models need to be improved as soon as new data sets become available, because one available corpus may be too small or too specific. Hence, particularly for running services, continual learning methods are of great importance.
Continual Learning Experiments
We simulate a continual learning scenario for three different named entity recognition use cases from the biomedical domain, namely diseases (5 data sets), genes/proteins (6 data sets) and chemicals (5 data sets), that are presented sequentially to the models without storing already seen data. We compare our developed method WEAVER with other continual learning-based methods (for transformers), namely FineTune, EWC and AdapterFusion. Additionally, we apply sparse memory replay (Replay) and multi-task learning (MTL) as upper bound methods because they require the data to be at the same place, which is not always possible in medical applications.
We apply four evaluation methods. First, the averaged F1-score on all available test data is determined after finishing training on the last training data set. The results are summarised in Table 2. For each entity class, four different orders of training data sets have been chosen randomly (see Table A1). For diseases, WEAVER outperforms all other CL methods: it achieves an average F1-score of 77.33% over the different orders, whereas EWC and FineTune achieve 76.15% and 76.59%, respectively. For AdapterFusion, the difference is much larger; it achieves an F1-score of 63.51%. Compared to the upper bound MTL (79.50%), WEAVER performs only around 2% worse. For protein named entity recognition, WEAVER shows excellent results that are on average less than 1% worse than the MTL model, and it outperforms the other CL-based methods. Interestingly, the Replay model does not work well here and achieves an average F1-score of 68%, which could be caused by the fact that the replayed data are only learned for one epoch, which is probably not sufficient for these data sets. For chemical named entity recognition, a similar trend can be seen: WEAVER outperforms all other methods and is only around 1% worse than MTL. For all three experiments, WEAVER shows on average the lowest standard deviation, so the model's performance is less influenced by the training parameters/initialisation.
As the averaged F1-score after finishing training does not indicate how training a new task influences both previous and future tasks, we additionally determine forward and backward transfer that are summarised in Table 3. Note, that as upper bound method only Replay can be used because for the multi-task setting, all training data are combined and no sequential training can be performed. Comparing the CL-based methods, WEAVER performs best for disease NER. Except for the first order of training data sets, we have a positive backward transfer, meaning that when learning a new task, the performance of a preceding task is increased. In two of the four cases, the BWT of WEAVER is also better than for Replay. In case of FWT, WEAVER achieves the best scores (approximately 0.5). The forward transfer is positive for all scenarios, which is expected because we train the models on the same task sequentially, hence, learning on a specific data set will always positively influence a future task (as compared to random initialisation of the model).
In the case of protein/gene NER, WEAVER achieves the best BWT scores among the CL-based methods, even though they are all slightly negative, meaning that learning on a new task results in moderate forgetting of the previously learned data. For the FWT, we also achieve scores around 0.5 that are better than for Replay, but slightly worse than for FineTune. For example, for the first order, the FWT scores for FineTune and WEAVER amount to 0.4699 and 0.4655, respectively, indicating only a very small difference. For chemicals, we see a similar phenomenon: WEAVER achieves on average the highest FWT score; however, it is very similar for all methods, amounting to approximately 0.5. Larger differences can be seen for the BWT, where WEAVER achieves for example a score of −0.04 for the first order, while the value for FineTune is more than twice as bad (−0.11).
Visualisation of Word Embeddings
In order to comprehend what happens to the word embeddings when averaging the weights of two BERT models, we performed a UMAP visualisation for the different scenarios, which can be seen in Figure 4. As an example, we use the disease NER use case and compare the arrangement of the embeddings for three different training sets (NCBI, BC5CDR and miRNA-disease) to simulate continual training. First, two models were trained independently on the NCBI and BC5CDR data sets, respectively, and their predicted embeddings are visualised in Fig. 4a. Embeddings for the different data sets, predicted by the two different models, are clearly separated. In contrast, in Fig. 4b, where a model trained on both data sets simultaneously is used for prediction, the points strongly overlap and separate clusters cannot be recognised. Figure 4c shows word embeddings predicted by a model trained according to our method WEAVER (first on the NCBI training set, then on the BC5CDR training set). Interestingly, it can be seen that the distribution looks very similar to a combined training. Thereby, we can infer that weight averaging after training two models sequentially has a similar effect to a combined training (simultaneously on all training data). To investigate this effect when training on more than two data sets, we use a third one (the miRNA-disease data set). Figures 4d-4f visualise the same settings as described before, but now the first two data sets (NCBI and BC5CDR) are combined into one colour so that the new data set can be clearly distinguished. Here, we see the same phenomenon, i.e. that training sequentially with WEAVER results in very similar distributions of the embeddings as training one model jointly on all three data sets.
Discussion
Transformer-based models have boosted the advancements of natural language processing tasks. Especially in the biomedical domain, BERT-based models are adapted to specific applications, such as electronic health record (EHR) mining or the identification of rare disease patients from administrative claims [49,50,51]. However, these models rely on the underlying assumption that data are independent and identically distributed. In real-world scenarios in the biomedical domain, this is unfortunately not the case, in particular because current models do not yet represent all facets present in such corpora, owing to much smaller training sets compared to general domains. Hence, different corpora easily display novel and different aspects, which corresponds to a shift of the distribution.
In previous studies, we showed that there are significant differences between data sets and that a model trained on one corpus does not perform well on another corpus; i.e., one such annotated corpus is not representative for biomedical literature databases such as PubMed. Therefore, to be used in real-world applications, trained models need the ability of lifelong learning (also on the same task), meaning that they can be improved continuously without suffering from catastrophic forgetting. Whereas a lot of research has been done in this direction, most of the approaches either also need the previous data when training on the new data (i.e. (pseudo-)rehearsal), consist of a more complex structure containing two or more different networks (for example a knowledge base and an active column), or are, in the case of regularisation-based methods, computationally more inefficient.
Therefore, we propose a lifelong learning algorithm that (1) is based on transformers as current state-of-the-art methods, (2) can be used for federated learning if the data sets are not available in one place (e.g. in clinical use cases due to data privacy), (3) does not involve a second or different neural network structure and hence requires limited resources, and (4) is computationally efficient.
We evaluated our method on three different use cases from the biomedical domain, namely diseases, genes/proteins and chemicals, for which five or six different data sets are available, respectively (see Table 1). As a baseline, we first determined the evaluation results of single models, i.e. models that have been trained on a single training data set. These are evaluated on the corresponding test set but also on all other test sets from this domain (called cross-evaluation). Here, we see significant differences (compare Figure 2). For example, a model trained on the NCBI disease corpus performs best on its own test set (F1-score of 86%) but drops to 25% for the BioNLP13-CG data set that focuses on cancer-related diseases. The same is true for genes and chemicals, where the F1-score can differ by about 50%. Hence, continual improvement of the models is needed.
We compared our method WEAVER to several different transformer-based CL methods that, except for Replay, do not require the data to be at the same place. We show that WEAVER outperforms the other methods in terms of average F1-score after finishing training of the last data set (see Table 2), backward and forward transfer (Table 3) as well as on the performance of the test set corresponding to the very first data set (Fig. 3) for all three use cases. However, the evaluation turns out differently for the different use cases. In terms of averaged F1-score, for disease NER, WEAVER is for example about 1% better than FineTune and EWC, and around 2% worse than the upper bound (MTL). In case of protein NER, WEAVER is only less than 1% worse than MTL and around 2% better than FineTune and EWC. In all scenarios, AdapterFusion performs worst.
In terms of disease NER, WEAVER achieves mainly positive backward transfer and outperforms all other CL-based methods. Generally, for backward transfer, we see differences between the different orderings, indicating that the order can influence the success of training. For the forward transfer, this is less noticeable: for all use cases and orderings, the values range from 0.4 to 0.5.
As the extent of forgetting plotted in Fig. 3 shows, WEAVER is more robust to training on new data sets, i.e. its performance on the test set corresponding to the very first training data set varies less when a new data set is learned.
For proof of concept of WEAVER, we visualised token embeddings of the variously trained models. Figure 4 indicates that applying WEAVER to a series of new data sets results in similar word embedding distributions as the combined training -with the advantage of efficiently improving a model as soon as new data sets arise.
Summarising, WEAVER consists of only one small post-processing step in which weights are averaged. In comparison to the other presented methods, there is no need to change the training procedure; in addition, this method can theoretically be applied not only to transformer-based models but to all neural network-based methods where weights are determined by training. However, possible limitations of our proposed method need to be further investigated: since the averaging is weighted based on the size of the training data sets, this can be dangerous if the sizes differ too much. For example, if a model is re-trained on a big data set that only represents a small sub-domain (e.g. cancer-related diseases), the model can still be biased towards this data set/topic. Therefore, further experiments are needed to investigate the influence and importance of weighting based on the corpus size. Recognising such a distribution shift beforehand could also be useful and needs to be incorporated into future experiments [52]. Still, WEAVER shows very good results and outperforms other CL-based methods that do not require the data to be in one place. Thereby, it perfectly combines practicability and quality and can hence also be used for the continuous improvement of running services.
Conclusion
Based on transformer models as state-of-the-art methods for document processing, we propose a new lifelong learning method called WEAVER. This method continuously trains a model on top of a previously trained model and infuses knowledge from the previously trained model into the new one by weight averaging. Thereby, we demonstrate a simple, yet efficient method that can also be used in settings where the data sets are not available in one place. This is especially important in clinical use cases where data sets are subject to data protection laws. In addition, in contrast to conventional federated learning settings, no central server is needed; the weights can simply be passed from one institution to the next. Moreover, our method is a simple post-processing step, which means that the training workflow itself does not need to be changed and it can therefore be easily integrated into running services, such as semantic search engines. In future work, the method will be tested on other NLP tasks from the biomedical domain, such as document classification, and will be integrated into our semantic search engine [31].
Figure 1: Overview of our transformer-based continual learning procedure WEAVER using weight averaging.
Figure 2: Evaluation results of single models. Each model has been trained on one of the available data sets and evaluated on all other data sets for the specific entity class.
Figure 3: F1-scores on the first test data set over time. After each re-training of the model, it is evaluated on the test set corresponding to the first training data set in order to see how much the model forgets. The legend depicted in subfigure (a) equally applies to the two other subfigures.
Figure 4: BERT embeddings visualised using UMAP. (a) NCBI and BC5CDR models trained independently; (b) NCBI and BC5CDR models trained jointly; (c) model trained continually on NCBI and BC5CDR using WEAVER; (d) models trained independently on NCBI+BC5CDR and miRNA-disease; (e) model trained jointly on the NCBI, BC5CDR and miRNA-disease data sets; (f) model trained continually on NCBI, BC5CDR and miRNA-disease using WEAVER. The different sub-figures show the distribution of the word embeddings predicted for three different disease NER data sets (NCBI, BC5CDR and miRNA-disease) using different models. In sub-figure (a), two different models were used that have been trained independently on the two data sets. In contrast, predicted word embeddings from a model trained on the combined training data are depicted in (b). In sub-figure (c), the embeddings resulting from the continually trained model using WEAVER are shown. With the red and yellow squares depicted in (a), we show where the corresponding word embeddings moved to in settings (b) and (c). In sub-figures (d)-(f), the same setting is shown but now for a third data set; therefore, the previously used data sets NCBI and BC5CDR are represented in one colour.
Table 1: Overview of used data sets from the biomedical domain

NER task          Data set             Size*
Disease           NCBI [36]            4725
                  BC5CDR [4]           3230
                  miRNA-disease [37]   3043
                  plant-disease [34]   2944
                  BioNLP13-CG [38]     1885
Proteins/Genes    BioNLP11-ID [39]     3197
                  BioNLP13-CG [38]     4027
                  BioNLP13-GE [40]     3566
                  BioNLP13-PC [41]     5468
                  Ex-PTM [42]          1787
                  JNLPBA [43]          46750
Chemicals         BioNLP13-CG [38]     1097
                  BioNLP13-PC [41]     1178
                  BC4CHEMD [44]        29478
                  BioNLP11-ID [39]     594
                  BC5CDR [4]           5203

*Size refers to the amount of entities in the training set.
Table 2: Averaged F1-scores after finishing training on all data sets. We averaged the F1-score over ten independent runs and determined the standard deviation (shown in brackets). The highest score is shown in bold; statistical significance has been proven with the paired t-test. Upper-bound methods are shown on the right.

Entity Class  Order*  FineTune       EWC            AdapterFusion   WEAVER         Replay         MTL
Diseases      (i)     76.20 (0.44)   74.24 (0.38)   60.69 (2.34)    76.77 (0.29)   78.09 (0.44)   79.68 (0.29)
              (ii)    76.84 (0.23)   77.13 (0.47)   62.32 (2.87)    77.36 (0.28)   77.44 (0.89)   79.49 (0.28)
              (iii)   76.63 (0.33)   76.93 (0.49)   62.09 (2.73)    77.70 (0.18)   77.33 (0.47)   79.34 (0.25)
              (iv)    76.68 (0.48)   76.29 (0.29)   53.42 (11.93)   77.47 (0.29)   77.52 (0.64)   79.48 (0.33)
              Avg.    76.59          76.15          63.51           77.33          77.60          79.50
Proteins      (i)     72.01 (0.32)   72.28 (0.46)   61.95 (1.3)     75.60 (0.11)   68.66 (1.09)   76.05 (0.22)
              (ii)    75.84 (0.17)   76.26 (0.41)   63.66 (2.99)    75.53 (0.17)   68.44 (0.58)   76.03 (0.22)
              (iii)   72.85 (0.38)   71.91 (0.44)   60.4 (2.39)     74.43 (0.18)   67.40 (0.55)   75.98 (0.12)
              (iv)    73.34 (0.35)   72.39 (0.39)   60.29 (2.46)    75.47 (0.16)   67.14 (0.86)   75.85 (0.28)
              Avg.    73.51          73.21          61.58           75.26          67.91          75.98
Chemicals     (i)     74.33 (0.72)   73.70 (0.39)   62.57 (2.05)    76.81 (0.43)   76.51 (0.51)   78.26 (0.71)
              (ii)    74.27 (0.47)   74.56 (0.51)   62.08 (3.15)    76.63 (0.13)   76.22 (0.22)   78.18 (0.12)
              (iii)   77.36 (0.42)   77.54 (0.21)   63.78 (4.69)    77.76 (0.16)   76.39 (0.69)   78.27 (0.27)
              (iv)    75.05 (0.33)   75.39 (0.32)   64.32 (4.52)    75.57 (0.36)   75.90 (0.78)   78.14 (0.43)
              Avg.    75.25          75.30          63.19           76.69          76.26          78.21

*See Table A1 for data set orders.
Table 3: Backward/Forward Transfer for all entity classes. The upper-bound method Replay is shown on the right. The highest score is shown in bold; if the highest score outperforms the upper bound, it is also shown in italics.

Entity Class   Order*   FineTune   EWC   AdapterFusion   WEAVER   Replay
Appendix

Orderings:

Table A1: Overview of randomly chosen orderings of the data sets (columns: Entity Class, Order).

For the ablation study, we freeze the first eight layers and only fine-tune the last four. As can be seen in Table A2, this results in lower F1-scores than fine-tuning the whole model for WEAVER.
[1] Vikas Yadav and Steven Bethard. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2145-2158, Santa Fe, New Mexico, USA, August 2018. Association for Computational Linguistics.
[2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding.
[3] Akshay Kulkarni, Adarsha Shivananda, and Anoosh Kulkarni. Named-Entity Recognition Using CRF and BERT, pages 211-238. Apress, Berkeley, CA, 2022.
[4] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. 36(4):1234-1240.
[5] Lisa Kühnel and Juliane Fluck. We are not ready yet: limitations of state-of-the-art disease named entity recognizers. 13(1):26.
[6] Lambert Schomaker. Lifelong learning for text retrieval and recognition in historical handwritten document collections. CoRR, abs/1912.05156, 2019.
[7] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Gordon H. Bower, editor, Psychology of Learning and Motivation, volume 24, pages 109-165. Academic Press.
[8] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. 114(13):3521-3526. National Academy of Sciences.
[9] Khe Chai Sim, Françoise Beaufays, Arnaud Benard, Dhruv Guliani, Andreas Kabel, Nikhil Khare, Tamar Lucassen, Petr Zadrazil, Harry Zhang, Leif Johnson, Giovanni Motta, and Lillian Zhou. Personalization of end-to-end speech recognition on mobile devices for named entities, 2019.
[10] Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez, and Andrew D. Bagdanov. Rotate your networks: Better weight consolidation and less catastrophic forgetting.
[11] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget.
[12] A. Robins. Catastrophic forgetting, rehearsal and pseudorehearsal.
[13] Matthew Honnibal. Pseudo-rehearsal: A simple solution to catastrophic forgetting for NLP. Explosion.
[14] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning.
[15] David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning.
[16] Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-GEM.
[17] Mohammad Rostami, Soheil Kolouri, and Praveen K. Pilly. Complementary learning for overcoming catastrophic forgetting using experience replay.
[18] Haytham M. Fayek, Lawrence Cavedon, and Hong Ren Wu. Progressive learning: A deep learning framework for continual learning. 128:345-357.
[19] Motonobu Hattori. A biologically inspired dual-network memory model for reduction of catastrophic forgetting. 134:262-268.
[20] Jong Won Park. Continual BERT: Continual learning for adaptive extractive summarization of COVID-19 literature.
[21] Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. Episodic memory in lifelong language learning.
[22] Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. LAMOL: LAnguage MOdeling for lifelong language learning.
[23] Yung-Sung Chuang, Shang-Yu Su, and Yun-Nung Chen. Lifelong language knowledge distillation.
[24] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP.
[25] Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. AdapterFusion: Non-destructive task composition for transfer learning.
[26] Yanzhe Zhang, Xuezhi Wang, and Diyi Yang. Continual sequence generation with adaptive compositional modules. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3653-3667. Association for Computational Linguistics.
[27] Steven C.H. Hoi, Doyen Sahoo, Jing Lu, and Peilin Zhao. Online learning: A comprehensive survey. Neurocomputing, 459:249-289, 2021.
[28] Viktor Losing, Barbara Hammer, and Heiko Wersing. Incremental on-line learning: A review and comparison of state of the art algorithms. Neurocomputing, 275:1261-1274, 2018.
[29] Qinbin Li, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Xu Liu, and Bingsheng He. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. CoRR, abs/1907.09693, 2019.
[30] Lisa Langnickel, Roman Baum, Johannes Darms, Sumit Madan, and Juliane Fluck. COVID-19 preVIEW: Semantic search to explore COVID-19 research preprints. Pages 78-82. IOS Press.
[31] Lisa Langnickel, Johannes Darms, Roman Baum, and Juliane Fluck. preVIEW: from a fast prototype towards a sustainable semantic search system for central access to COVID-19 preprints. Journal of EAHIL, pages 8-14.
[32] Lisa Langnickel, Johannes Darms, Katharina Heldt, Denise Ducks, and Juliane Fluck. Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID. 2022:baac048.
[33] H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data, 2017.
[34] Baeksoo Kim, Wonjun Choi, and Hyunju Lee. A corpus of plant-disease relations in the biomedical domain. 14(8):e0221582. Public Library of Science.
[35] Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. A neural network multi-task learning approach to biomedical named entity recognition. 18(1):368.
[36] Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. NCBI disease corpus: A resource for disease name recognition and concept normalization. 47:1-10.
[37] Shweta Bagewadi, Tamara Bobić, Martin Hofmann-Apitius, Juliane Fluck, and Roman Klinger. Detecting miRNA mentions and relations in biomedical literature. 3:205.
[38] Sampo Pyysalo, Tomoko Ohta, and Sophia Ananiadou. Overview of the cancer genetics (CG) task of BioNLP shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 58-66. Association for Computational Linguistics.
[39] Sampo Pyysalo, Tomoko Ohta, Rafal Rak, Dan Sullivan, Chunhong Mao, Chunxia Wang, Bruno Sobral, Jun'ichi Tsujii, and Sophia Ananiadou. Overview of the ID, EPI and REL tasks of BioNLP shared task 2011. 13(11):S2.
[40] Jin-Dong Kim, Yue Wang, and Yamamoto Yasunori. The genia event extraction shared task, 2013 edition - overview.
[41] Tomoko Ohta, Sampo Pyysalo, Rafal Rak, Andrew Rowley, Hong-Woo Chun, Sung-Jae Jung, Sung-Pil Choi, Sophia Ananiadou, and Jun'ichi Tsujii. Overview of the pathway curation (PC) task of BioNLP shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 67-75. Association for Computational Linguistics.
[42] Sampo Pyysalo, Tomoko Ohta, Makoto Miwa, and Jun'ichi Tsujii. Towards exhaustive protein modification event extraction. In Proceedings of BioNLP 2011 Workshop, BioNLP '11, pages 114-123. Association for Computational Linguistics.
[43] Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, JNLPBA '04, pages 70-75. Association for Computational Linguistics.
[44] Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M. Lowe, Roger A. Sayle, Riza Theresa Batista-Navarro, Rafal Rak, Torsten Huber, Tim Rocktäschel, Sérgio Matos, David Campos, Buzhou Tang, Hua Xu, Tsendsuren Munkhdalai, Keun Ho Ryu, SV Ramanan, Senthil Nathan, Slavko Žitnik, Marko Bajec, Lutz Weber, Matthias Irmer, Saber A. Akhondi, Jan A. Kors, Shuo Xu, Xin An, Utpal Kumar Sikdar, Asif Ekbal, Masaharu Yoshioka, Thaer M. Dieb, Miji Choi, Karin Verspoor, Madian Khabsa, C. Lee Giles, Hongfang Liu, Komandur Elayavilli Ravikumar, Andre Lamurias, Francisco M. Couto, Hong-Jie Dai, Richard Tzong-Han Tsai, Caglar Ata, Tolga Can, Anabel Usié, Rui Alves, Isabel Segura-Bedmar, Paloma Martínez, Julen Oyarzabal, and Alfonso Valencia. The CHEMDNER corpus of chemicals and drugs and its annotation principles. 7(1):S2.
[45] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics.
[46] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. UMAP: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861, 2018.
[47] Jaejun Lee, Raphael Tang, and Jimmy Lin. What would Elsa do? Freezing layers during transformer fine-tuning.
[48] Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. What happens to BERT embeddings during fine-tuning?
[49] Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. 4(1):1-13. Nature Publishing Group.
[50] Yikuan Li, Shishir Rao, José Roberto Ayala Solares, Abdelaali Hassaine, Rema Ramakrishnan, Dexter Canoy, Yajie Zhu, Kazem Rahimi, and Gholamreza Salimi-Khorshidi. BEHRT: Transformer for electronic health records. 10(1):7155. Nature Publishing Group.
[51] P. K. S. Prakash, Srinivas Chilukuri, Nikhil Ranade, and Shankar Viswanathan. RareBERT: Transformer architecture for rare disease patient identification using administrative claims. 35(1):453-460.
[52] Robert Feldhans, Adrian Wilke, Stefan Heindorf, Mohammad Hossein Shaker, Barbara Hammer, Axel-Cyrille Ngonga Ngomo, and Eyke Hüllermeier. Drift detection in text data with document embeddings. In Hujun Yin, David Camacho, Peter Tiño, Richard Allmendinger, Antonio J. Tallón-Ballesteros, Ke Tang, Sung-Bae Cho, Paulo Novais, and Susana Nascimento, editors, Intelligent Data Engineering and Automated Learning - IDEAL 2021 - 22nd International Conference, IDEAL 2021, Manchester, UK, November 25-27, 2021, Proceedings, volume 13113 of Lecture Notes in Computer Science, pages 107-118. Springer, 2021.
| [
"https://github.com/cambridgeltl/"
] |
[
"PERSONALIZED SPEECH RECOGNITION ON MOBILE DEVICES",
"PERSONALIZED SPEECH RECOGNITION ON MOBILE DEVICES"
] | [
"Ian Mcgraw imcgraw@google.com \nGoogle Inc\n\n",
"Rohit Prabhavalkar prabhavalkar@google.com \nGoogle Inc\n\n",
"Raziel Alvarez raziel@google.com \nGoogle Inc\n\n",
"Montse Gonzalez Arenas montse@google.com \nGoogle Inc\n\n",
"Kanishka Rao kanishkarao@google.com \nGoogle Inc\n\n",
"David Rybach rybach@google.com \nGoogle Inc\n\n",
"Ouais Alsharif \nGoogle Inc\n\n",
"Haşim Sak \nGoogle Inc\n\n",
"Alexander Gruenstein \nGoogle Inc\n\n",
"Françoise Beaufays \nGoogle Inc\n\n",
"Carolina Parada carolinap@google.com \nGoogle Inc\n\n"
] | [
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n",
"Google Inc\n"
] | [] | We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an openended dictation task, running with a median speed that is seven times faster than real-time. | 10.1109/icassp.2016.7472820 | [
"https://arxiv.org/pdf/1603.03185v2.pdf"
] | 10,405,700 | 1603.03185 | dae5704cf21112b3f9f74c679a7ee7812cb45ea1 |
PERSONALIZED SPEECH RECOGNITION ON MOBILE DEVICES
Ian Mcgraw imcgraw@google.com
Google Inc
Rohit Prabhavalkar prabhavalkar@google.com
Google Inc
Raziel Alvarez raziel@google.com
Google Inc
Montse Gonzalez Arenas montse@google.com
Google Inc
Kanishka Rao kanishkarao@google.com
Google Inc
David Rybach rybach@google.com
Google Inc
Ouais Alsharif
Google Inc
Haşim Sak
Google Inc
Alexander Gruenstein
Google Inc
Françoise Beaufays
Google Inc
Carolina Parada carolinap@google.com
Google Inc
PERSONALIZED SPEECH RECOGNITION ON MOBILE DEVICES
Index Terms: embedded speech recognition, CTC, LSTM, quantization, model compression
We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.
INTRODUCTION
Speech recognition for dictation, search, and voice commands has become a standard feature on smartphones and wearable devices. The vast majority of the literature devoted to improving accuracy for these tasks assumes that speech recognition will be run in datacenters on powerful servers. However, despite increases in speed and the availability of mobile internet, speech recognition requests frequently have high latency, or even completely fail, due to unreliable or unavailable network connections. An embedded speech recognition system that runs locally on a mobile device is more reliable and can have lower latency; however, it must be accurate and must not consume significant memory or computational resources.
In this paper we extend previous work that used quantized deep neural networks (DNNs) and on-the-fly language model rescoring to achieve real-time performance on modern smartphones [1]. We demonstrate that given similar size and computation constraints, we achieve large improvements in word error rate (WER) performance and latency by employing Long Short-Term Memory (LSTM) recurrent neural networks (RNNs), trained with connectionist temporal classification (CTC) [2] and state-level minimum Bayes risk (sMBR) [3] techniques. LSTMs are made small and fast enough for embedded speech recognition by quantizing parameters to 8 bits, by using context independent (CI) phone outputs instead of more numerous context dependent (CD) phone outputs, and by using Singular Value Decomposition (SVD) compression [4,5].
SVD has elsewhere been shown to be effective for speech processing tasks [4,6,7], as have structured transforms [8] and low-rank matrix factorizations [9]. Vector quantization has also been shown to significantly reduce model size with only small accuracy losses [10]; however, it is unclear whether this algorithm can be implemented in a computationally efficient manner while minimizing runtime memory footprint. Such parameter reduction techniques have generally been applied to DNNs and not RNNs. For embedded speech recognition, some authors have avoided RNNs, citing increased computational costs, and instead evaluated methods for transferring knowledge from RNNs to DNNs [11].
We present results in two very different domains: dictation and voice commands. To keep the disk space requirements of the system small, we experiment with language model interpolation techniques that enable us to effectively share a single model across both domains. In particular, we demonstrate that Bayesian interpolation outperforms simple linear interpolation for these tasks.
Finally, we explore using language model personalization techniques to improve voice command and dictation accuracy. Many voice commands can be completed and executed on a device without a network connection, or can easily be queued up to be completed over an unreliable or slow network connection later in the background. For example, a command such as "Send an email message to Darnica Cumberland: can we reschedule?" can be transcribed by an embedded speech recognition system and executed later without a perceptual difference to the user. Accurate transcription, however, requires integrating personal information such as the contact name "Darnica Cumberland" into the language model. We demonstrate that the vocabulary injection and on-the-fly language model biasing techniques from [12,13] can significantly improve accuracy without significant adverse computational overhead.
The remainder of this paper is organized as follows. We summarize the baseline system in Section 2. Section 3 describes our techniques to build a small but accurate acoustic model, Section 4 describes our LM training procedure and the interpolation techniques used in our system, and Section 5 describes the decoder. Section 6 describes how we handle context- or device-specific information, and finally Section 7 summarizes the footprint of our system. Conclusions are presented in Section 8.
BASELINE SYSTEM
We model our baseline system after the embedded speech recognition system presented in [1]. Instead of using a standard feedforward DNN, however, we use deep LSTM models, which have been shown to achieve state-of-the-art results on large-scale speech recognition tasks [14,15,16]. The LSTM architecture of our baseline consists of three hidden layers with 850 LSTM cells in each. We make use of a recurrent projection layer, as described in [14], of size 450 for each of the hidden layers. This LSTM is trained to predict 2,000 CD states, analogous to the system described in [1]. This system is also trained to optimize the standard cross-entropy (CE) criterion on the training set, with the output labels delayed by 5 frames [14].
The input features are 40-dimensional log mel-filterbank energies calculated on a 25ms window every 10ms. Unlike in [1], where frames are stacked to provide right and left context to the net, we rely on the LSTM's memory capabilities and supply only one frame every 10ms as input. This model was trained to optimize the standard cross-entropy (CE) criterion on the training set described in Section 3.1, with frame-level labels derived from a larger system.
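A front-end along these lines can be approximated with librosa; the snippet below is only a sketch of 40-dimensional log mel-filterbank extraction with a 25 ms window and 10 ms hop, not the production feature pipeline, and the file name is a placeholder.

```python
# Approximate 40-dim log mel-filterbank front-end (25 ms window, 10 ms hop).
import numpy as np
import librosa

def log_mel_features(wav_path: str, sample_rate: int = 16000) -> np.ndarray:
    audio, sr = librosa.load(wav_path, sr=sample_rate)
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr,
        n_fft=int(0.025 * sr),        # 25 ms analysis window
        hop_length=int(0.010 * sr),   # one frame every 10 ms
        n_mels=40)
    return np.log(mel + 1e-6).T       # shape: (num_frames, 40)

# feats = log_mel_features("utterance.wav")   # hypothetical input file
```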
The language model presented in this work also follows along the lines of [1]. The vocabulary size is restricted to 64K so that an index into the lexicon only requires 16 bits of storage. The small decoder graph is constructed from a tiny LM containing 70K n-grams (almost entirely unigrams). During decoding, the partial paths are rescored on-the-fly with a large LM containing roughly 1.5M n-grams. This rescoring LM is made extremely compact using the LOUDS [17] compression mechanism. More details of the LM can be found in Section 4.
ON-DEVICE ACOUSTIC MODELING
In this section we describe an LSTM configuration that can successfully be deployed to a mobile device and contrast this with the baseline system described in Section 2.
In particular, the LSTM architecture that we investigate is a CTC model [15,16]: the system consists of five hidden layers with 500 LSTM cells in each, that predict 41 context independent (CI) phoneme targets plus an additional "blank" target that can be hypothesized if the system is unsure of the identity of the phoneme at the current frame. The system is trained to optimize the connectionist temporal classification (CTC) criterion [2] as described in [15,16].
Similar to the baseline, we use standard 40-dimensional log mel-filterbank energies over the 8 kHz range, computed every 10ms on 25ms of input speech. In order to stabilize CTC training, our CTC models use the strategy proposed in [16]: we stack together 8 consecutive frames (7 frames of right context) and only present every third stacked frame as input to the network. In addition to stabilizing CTC training, this has the additional benefit of speeding up computation since the network is only evaluated every 30ms.
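The stacking and subsampling scheme can be written in a few lines of NumPy. The sketch below stacks 8 consecutive 40-dimensional frames (7 frames of right context) and keeps every third stacked frame; the edge padding at the end of the utterance is an assumption, since the paper does not specify it.

```python
# Frame stacking and skipping: 8-frame stacks (7 frames of right context),
# subsampled by a factor of 3 so the network sees one input every 30 ms.
import numpy as np

def stack_and_skip(frames: np.ndarray, stack: int = 8, skip: int = 3) -> np.ndarray:
    # frames: (T, 40) log mel-filterbank features, one per 10 ms.
    pad = np.repeat(frames[-1:], stack - 1, axis=0)          # assumed edge handling
    padded = np.concatenate([frames, pad], axis=0)
    stacked = np.concatenate([padded[i:i + len(frames)] for i in range(stack)], axis=1)
    return stacked[::skip]                                    # (ceil(T / 3), 320)

x = np.random.randn(100, 40)
print(stack_and_skip(x).shape)   # (34, 320)
```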
AM Experiments
Our AMs are trained on 3M hand-transcribed anonymized utterances extracted from Google voice search traffic (approximately 2,000 hours). All models in our work are trained using distributed asynchronous stochastic gradient descent (ASGD) [18]. In order to improve robustness to noise and reverberation, we generate "multi-style" training data by synthetically distorting each training utterance using a room simulator with a virtual noise source, to generate 20 distorted versions of each utterance. Noise samples are extracted from YouTube videos and environmental recordings of daily events.
Results in this section are reported on a set of 13.3K anonymized utterances in the domain of open-ended dictation extracted from Google traffic. The LM used in these experiments was described in Section 2 and detailed further in Section 4. We benchmark our systems to determine runtime speed by decoding a subset of 100 utterances on a Nexus 5 smartphone, which contains a 2.26 GHz quad-core CPU and 2 GB of RAM. We report median real-time factors (RT50) on our test set. Our results are presented in Table 1.
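The median real-time factor can be computed from per-utterance decoding times; the snippet below is a trivial illustration with made-up timings, not measurements from the paper.

```python
# Real-time factor per utterance = decode time / audio duration; RT50 is the median.
import numpy as np

def median_rt_factor(decode_seconds, audio_seconds) -> float:
    rt = np.asarray(decode_seconds, dtype=float) / np.asarray(audio_seconds, dtype=float)
    return float(np.median(rt))

print(median_rt_factor([0.8, 1.4, 0.3], [5.2, 6.1, 2.4]))   # illustrative values
```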
As can be seen in Table 1, and consistent with previous work [15], the CTC-trained LSTM model that predicts CI phones outperforms the CE-trained LSTM that predicts 2,000 CD states. Furthermore, although both systems are comparable in terms of the number of parameters, the CTC-trained model is about 4× faster than the CE-trained baseline. Sequence discriminative training with the sMBR criterion [3,19] further improves system performance by 20% relative to the CTC-trained system.
In order to reduce memory consumption further, we compress our acoustic models using projection layers that sit between the outputs of an LSTM layer and both the recurrent and non-recurrent inputs to the same and subsequent layers [14]. Of crucial importance, however, is that when a significant rank reduction is applied, it is not sufficient to simply initialize the projection layer's weight matrix randomly for training with the CTC criterion. Instead, we take the larger 'uncompressed' model without the projection layer and jointly factorize its recurrent and (non-recurrent) inter-layer weight matrices at each hidden layer using a form of singular value decomposition to determine a shared projection layer. This process yields an initialization that results in stable convergence, as described in detail in [5]. In our system, we introduce projection matrices of rank 100 for the first four layers, and a projection matrix of rank 200 for the fifth hidden layer. Following SVD compression, we once again train the system to optimize the CTC criterion, followed by discriminative sequence training with the sMBR criterion. As can be seen in Table 1, the proposed compression technique allows us to compress the AM by about 3×.
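The linear-algebra step behind this initialisation can be sketched as follows: the recurrent and inter-layer matrices that consume a layer's output are stacked and jointly factorised, and the truncated right factor becomes the shared projection. This is a simplified illustration of the idea in [4,5] with placeholder shapes, not the exact procedure used for the production model.

```python
# Simplified SVD-based initialisation of a shared projection layer for one LSTM layer.
import numpy as np

def shared_projection_init(W_recurrent, W_interlayer, rank=100):
    W = np.concatenate([W_recurrent, W_interlayer], axis=0)   # both consume the layer output
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_low = U[:, :rank] * S[:rank]                            # low-rank left factors
    projection = Vt[:rank, :]                                 # shared projection matrix
    m = W_recurrent.shape[0]
    return W_low[:m], W_low[m:], projection

Wr = np.random.randn(2000, 500)   # placeholder recurrent weights
Wx = np.random.randn(2000, 500)   # placeholder inter-layer weights
A, B, P = shared_projection_init(Wr, Wx, rank=100)
print(A.shape, B.shape, P.shape)  # (2000, 100) (2000, 100) (100, 500)
```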
Finally, we note that adapting the AM using a set of 1M anonymized hand-transcribed utterances from the domain of open-ended dictation (processed to generate multi-style training data as described in Section 3.1) results in a further 12.8% relative improvement over the SVD-compressed models. The combination of all of these techniques allows us to significantly improve performance over the baseline system. For completeness, we also trained a DNN system with the topology described in [1]. As expected, this 2,000 CD state DNN performed significantly worse than all of the LSTMs in Table 1.
For reference, we also present results obtained using a much larger 'server-sized' CTC model, which predicts 9287 CD phones (plus "blank"), but is evaluated with the same LM and decoder graph as our other systems; this serves as a sort of upper bound on performance for this task.¹
Efficient Representation and Fast Execution
Since the 11.9 MB floating point neural network acoustic model described above consumes a significant chunk of the memory and processing time, we quantize the model parameters (i.e., weights and biases) into a more compact 8-bit integer-based representation. This quantization has an immediate impact on the memory usage, reducing the acoustic model's footprint to a fourth of the original size. The final footprint of our AM is 3 MB, as shown in Table 1. Using 8-bit integers also has the advantage that we can achieve 8-way parallelism in many matrix operations on most mobile platforms.
Although we could have applied a number of compression schemes [20,21], with simplicity and performance in mind, and validated by previous work [22], we adopt a uniform linear quantizer that assumes a uniform distribution of the values within a given range. First, we find the minimum and maximum values of the original parameters. We then use a simple mapping formula which determines a scaling factor that when multiplied by the parameters spreads the values evenly in the smaller precision scale, thus obtaining a quantized version of the original parameters. The inverse operation is used when converting a quantized value back to its 32-bit floating point equivalent.
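A minimal version of such a uniform linear quantiser is shown below; the rounding strategy and the handling of degenerate ranges are assumptions, and the production implementation may differ in detail.

```python
# Uniform linear 8-bit quantisation: map [min, max] of the weights onto 0..255.
import numpy as np

def quantize_uint8(w: np.ndarray):
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0   # assumed guard for constant weights
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q: np.ndarray, scale: float, w_min: float) -> np.ndarray:
    return q.astype(np.float32) * scale + w_min

weights = np.random.randn(512, 256).astype(np.float32)          # placeholder parameters
q, scale, w_min = quantize_uint8(weights)
print("max error:", float(np.abs(dequantize(q, scale, w_min) - weights).max()))
```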
During neural network inference, we operate in 8-bit integers everywhere except in the activation functions and the final output of the network, which remain in floating point precision (by converting between quantized 8-bit values and their 32-bit equivalents as needed). Our quantization scheme and the inference computation approach provide a 2× speed-up in evaluating our acoustic models as compared to the unquantized model, with only a small performance degradation (compare 'adaptation' vs. 'quantization' in Table 1).
ON-DEVICE LANGUAGE MODELLING
In this work, we focus on building a compact language model for the domains of dictation and voice commands. To maintain a small system footprint, we train a single model for both domains. As described in Section 2, we limit the vocabulary size to 64K. Our language models are trained using unsupervised speech logs from the dictation domain (∼100M utterances) and voice commands domain (∼2M utterances). The voice command utterances were extracted by filtering general voice search queries through the grammars usually used to parse voice commands at runtime. Those queries that parsed were added to the training set. A Katz-smoothed 5-gram LM is then trained and entropy-based pruning is employed to shrink the LM to the sizes described in Section 2.
In addition to the dictation test set described in Section 3, in this section we present results on a voice commands test set. This set includes utterances from 3 types of commands: Device (∼2K utterances) -which includes commands for device control (e.g., "Turn up volume"), Planning (∼9K utterances) -consisting of utterances relevant to planning calendar events (e.g., "Set an alarm at 6 p.m."), and Communication (∼8K utterances) with utterances relevant to chat messaging, emails, or making phone calls. The Communication set, also includes some open-ended dictation corresponding to the message (e.g. "Text Jacob, I'm running 10 minutes late, can we reschedule?").
All results in this section are evaluated using the quantized LSTM CI CTC acoustic model described in Section 3, thus allowing us to focus on the impact of the LM.
In order to build a single LM to use across both dictation and command domains, we explore different interpolation techniques. As our baseline, we consider a linearly interpolated LM with interpolation weights estimated over a separate held-out development set sampled from speech logs. We compare performance obtained from the baseline system to a Bayesian interpolated LM [23], where voice commands and dictation are each represented as a unique task and the corresponding task priors are determined by sweeping parameters on a held-out development set to minimize word error rates rather than setting these based on the log counts.
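At the level of individual word probabilities, linear interpolation is simply a weighted mixture of the component LMs, with the weight tuned on held-out data; the toy function below illustrates only that mixing step, not the Bayesian task-prior formulation of [23], and the probability values are arbitrary.

```python
# Toy linear interpolation of two component LM probabilities for one word in context.
def interpolate(p_dictation: float, p_commands: float, lam: float = 0.7) -> float:
    # lam is tuned on a held-out development set (value here is arbitrary).
    return lam * p_dictation + (1.0 - lam) * p_commands

print(interpolate(1e-4, 5e-3, lam=0.7))
```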
Our results are presented in Table 2. The first two rows of the table highlight the utility of Bayesian interpolation over linear interpolation for both domains. The decoder graph used to produce these results was constructed with a single large language model, and therefore rescoring on-the-fly was not used. The third row of Table 2 shows the effects of on-the-fly rescoring on WER. Whereas the fully composed decoder graph is an unacceptable 29 MB, breaking it down into a first-pass graph and an on-the-fly rescoring model yields an 8.3 MB decoder graph and a 6.8 MB rescoring LM (with LOUDS compression [17]).
DECODER
In this section, we describe our decoder setup and a modification thereof that takes advantage of CTC's simple topology. In contrast to a conventional 3-state HMM structure, each phoneme is represented by a single AM output state in combination with a generic blank (or "non-perceiving") state. An FST-based decoder graph for the CTC model is created by the usual construction and composition of lexicon and LM transducers [24]. We do not require a context-dependency transducer, since we use context-independent phone models. Self-loop transitions are added to each state for the blank label. An example is shown in Figure 1.
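The blank self-loops implement the usual CTC label-collapsing rule; the toy function below shows that rule for a greedy best-path decode (merge repeats, then drop blanks) and is not the FST beam search used in the actual decoder.

```python
# CTC label collapsing for a greedy best-path decode: merge repeats, drop blanks.
def collapse_ctc(frame_labels, blank="[b]"):
    output, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            output.append(label)
        prev = label
    return output

print(collapse_ctc(["[b]", "k", "k", "[b]", "ae", "[b]", "t", "t"]))   # ['k', 'ae', 't']
```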
We use an FST-based decoder with optimizations for CTC models in terms of both computation time and memory usage. By applying the blank self-loop transitions in the decoder, we can avoid adding them explicitly as arcs in the decoder graph. Furthermore, the dynamic expansion of HMM state sequences used in our generic FST-based decoder can be removed, which allows for a more compact search space in memory and a simpler search hypothesis expansion procedure.

Table 3. Impact of contact injection and biasing on WER and latency.
PERSONALIZATION
Our final set of experiments highlights the advantages of integrating personal information into the language model. These experiments are aimed at determining the impact of incorporating device-specific information (e.g., the user's list of contact names) on the word error rate for individual users. We experiment with two test sets related to contact name recognition. The first is the 8K utterance Communication test set described in Section 4, containing contact names in the context of messages, e.g., "Text Jacob, . . .". The second set consists of 1.2K utterances containing only contact names. This second set is representative of the utterances that might follow a text-to-speech (TTS) prompt such as "Which Jacob?" or perhaps a more general prompt such as "Who would you like to email?". The number of candidate contacts injected will depend on whether the TTS prompt is requesting disambiguation or just any name from the contact list.
In either context, we can perform the additional step of using on-the-fly rescoring as in [13] to bias the language model towards recognizing only these contact names.
Given the lexical limits of the language model described above, it is unlikely that the recognizer will be able to handle the long tail of contact names as is. This motivates the incorporation of dynamic classes into our language model. In the general spirit of class-based LMs, and following the work of Aleksic et al. [12], we annotate our training data with a special $CONTACTS symbol in place of contact names and train a language model that includes this placeholder token. At run-time we inject a small FST representing the user's personal contacts into the decoder graph at these locations. It should be noted that this is particularly simple in our system as our AM uses context-independent phonemes.
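Conceptually, the class-based LM reserves probability mass for the $CONTACTS token and spreads it over the user's contact names at run time; the toy scorer below illustrates that idea with a uniform within-class distribution and made-up log-probabilities, whereas the real system splices a compiled contacts FST (with G2P pronunciations) into the decoder graph.

```python
# Toy class-based LM scoring: in-vocabulary words use the LM directly, contact names
# back off to the $CONTACTS class token with a uniform within-class probability.
import math

def class_lm_logprob(tokens, lm_logprob, contacts):
    total = 0.0
    for tok in tokens:
        if tok in contacts:
            total += lm_logprob("$CONTACTS") + math.log(1.0 / len(contacts))
        else:
            total += lm_logprob(tok)
    return total

contacts = {"darnica", "jacob"}                                  # device-specific list
toy_lm = lambda tok: {"text": -1.2, "$CONTACTS": -2.0}.get(tok, -6.0)
print(class_lm_logprob(["text", "darnica"], toy_lm, contacts))
```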
In order to generate pronunciations for contacts, we train an LSTM-based grapheme-to-phoneme (G2P) model on human-transcribed word-pronunciation pairs. The G2P problem is treated as a sequence transcription task as described in [25]. The LSTM-G2P system consists of four LSTM layers with 64 cells in each layer, and is trained to optimize the CTC objective function. The LSTM-G2P performs better in terms of word accuracy compared to traditional joint-sequence models represented as finite state transducers (FSTs) (a detailed comparison can be found in [25]). More importantly, the LSTM-G2P is considerably smaller in size compared to the FST implementation, 500 KB vs. 70 MB, making it an ideal solution for on-device pronunciation generation. Table 3 summarizes our results on the two contact test sets. For each utterance recognized, N contacts are injected into the decoder graph. If the transcript does indeed contain a contact name, one of these N is the correct contact. For the set containing only contact names, we additionally evaluate performance obtained using on-the-fly biasing [13] towards contact names.
Unsurprisingly, adding in personal contact names has a significant impact on WER, since many of the terms in these test sets are out-of-vocabulary items. In contexts when a single contact name is the expected user response, these results indicate that biasing recognition towards the unigram $CONTACTS can yield dramatic improvements, especially if the set of candidate names can be whittled down to just two, as is often the case when disambiguating between contacts ("Do you mean John Smith or John Snow?"). While in practice one can often precompute these graphs, we also show here that median RT factors are not significantly affected even when 50 pronunciations are compiled and injected on-the-fly in the system.

Table 4. Size of various components in the overall system architecture.
SYSTEM FOOTPRINT
We present the sizes of the various components in our overall system architecture in Table 4. Using a combination of SVD-based compression and quantization, along with a compact first-pass decoding strategy and on-the-fly rescoring with a larger LM, we can build a system that is about 20.3 MB in size, without compromising accuracy or latency.
CONCLUSION
We presented our design of a compact large vocabulary speech recognition system that can run efficiently on mobile devices, accurately and with low latency. This is achieved by using a CTC-based LSTM acoustic model which predicts context-independent phones and is compressed to a tenth of its original size using a combination of SVD-based compression [4,5] and quantization. In order to support the domains of both open-ended dictation and voice commands in a single language model we use a form of Bayesian interpolation. Language model personalization is achieved through a combination of vocabulary injection and on-the-fly language model biasing [12,13].
For efficient decoding, we use an on-the-fly rescoring strategy following [1], with additional optimizations for CTC models which reduce computation and memory usage. The combination of these techniques allows us to build a system which runs 7× faster than real-time on a Nexus 5, with a total system footprint of 20.3 MB.
Fig. 1. Example of a part of a decoder graph with blank labels [b].
Table 1. Word Error Rates (%) on an open-ended dictation task, evaluating various acoustic models, using the same language model described in Section 4, along with median RT factor.

AM Setup                | WER  | Params | Size    | RT50
LSTM 2,000 CD States    | 23.4 | 9.9M   | 39.4 MB | 2.94
LSTM CTC CI Phones      | 19.4 | 9.7M   | 38.8 MB | 0.64
+ sMBR                  | 15.1 | 9.7M   | 38.8 MB | 0.65
+ SVD Compression       | 14.8 | 3M     | 11.9 MB | 0.22
+ adaptation            | 12.9 | 3M     | 11.9 MB | 0.22
+ quantization          | 13.5 | 3M     | 3 MB    | 0.14
LSTM CTC (Server-size)  | 11.3 | 20.1M  | 80.4 MB | -

¹ This model uses 80-dimensional filterbank features in the frontend, since this resulted in slightly improved performance. Frame stacking and frame skipping are as in the CI LSTM CTC model.
ACKNOWLEDGEMENTS

The authors would like to thank our colleagues Johan Schalkwyk, Chris Thornton, Petar Aleksic, and Peter Brinkmann for helpful research discussions and support for the implementation.
[1] Xin Lei, Andrew Senior, Alexander Gruenstein, and Jeffrey Sorensen. Accurate and compact large vocabulary speech recognition on mobile devices. In INTERSPEECH, pages 662-665. ISCA, 2013.
[2] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In ICML, pages 369-376, 2006.
[3] Brian Kingsbury. Lattice-based optimization of sequence classification criteria for neural-network acoustic modeling. In ICASSP, pages 3761-3764. IEEE, 2009.
[4] Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pages 2365-2369, 2013.
[5] Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. In ICASSP. IEEE, 2016.
[6] Yu-hsin Chen, Ignacio Lopez-Moreno, Tara N. Sainath, Mirkó Visontai, Raziel Alvarez, and Carolina Parada. Locally-connected and convolutional neural networks for small footprint speaker recognition. In INTERSPEECH, pages 1136-1140. ISCA, 2015.
[7] Preetum Nakkiran, Raziel Alvarez, Rohit Prabhavalkar, and Carolina Parada. Compressing deep neural networks using a rank-constrained topology. In INTERSPEECH, pages 1473-1477. ISCA, 2015.
[8] Vikas Sindhwani, Tara N. Sainath, and Sanjiv Kumar. Structured transforms for small-footprint deep learning. In NIPS (to appear), 2015.
[9] Tara N. Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. In ICASSP, pages 6655-6659. IEEE, 2013.
[10] Yongqiang Wang, Jinyu Li, and Yifan Gong. Small-footprint high-performance deep neural network-based speech recognition using split-VQ. In ICASSP, pages 4984-4988. IEEE, 2015.
[11] William Chan, Nan Rosemary Ke, and Ian Lane. Transferring knowledge from a RNN to a DNN. In INTERSPEECH. ISCA, 2015.
[12] Petar Aleksic, Cyril Allauzen, David Elson, Aleksandar Kracun, Diego Melendo Casado, and Pedro J. Moreno. Improved recognition of contact names in voice commands. In ICASSP, pages 5172-5175, 2015.
[13] Keith Hall, Eunjoon Cho, Cyril Allauzen, Françoise Beaufays, Noah Coccaro, Kaisuke Nakajima, Michael Riley, Brian Roark, David Rybach, and Linda Zhang. Composition-based on-the-fly rescoring for salient n-gram biasing. In INTERSPEECH. ISCA, 2015.
[14] Haşim Sak, Andrew Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pages 338-342. ISCA, 2014.
[15] Haşim Sak, Andrew Senior, Kanishka Rao, Ozan İrsoy, Alex Graves, Françoise Beaufays, and Johan Schalkwyk. Learning acoustic frame labeling for speech recognition with recurrent neural networks. In ICASSP, pages 4280-4284, 2015.
[16] Haşim Sak, Andrew Senior, Kanishka Rao, and Françoise Beaufays. Fast and accurate recurrent neural network acoustic models for speech recognition. In INTERSPEECH, pages 1468-1472. ISCA, 2015.
[17] Jeffrey Sorensen and Cyril Allauzen. Unary data structures for language models. In INTERSPEECH. ISCA, 2011.
[18] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, pages 1223-1231, 2012.
[19] Haşim Sak, Oriol Vinyals, Georg Heigold, Andrew Senior, Erik McDermott, Rajat Monga, and Mark Mao. Sequence discriminative distributed training of long short-term memory recurrent neural networks. In INTERSPEECH, pages 1209-1213, 2014.
[20] Alan C. Bovik. Handbook of Image and Video Processing (Communications, Networking and Multimedia). Academic Press, Inc., Orlando, FL, USA, 2005.
[21] Allen Gersho and Robert M. Gray. Vector Quantization and Signal Compression. Springer US, 1992.
[22] Vincent Vanhoucke, Andrew Senior, and Mark Mao. Improving the speed of neural networks on CPUs. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
[23] Cyril Allauzen and Michael Riley. Bayesian language model interpolation for mobile speech input. In INTERSPEECH, pages 1429-1432, 2011.
[24] Mehryar Mohri, Fernando Pereira, and Michael Riley. Speech recognition with weighted finite-state transducers. In Jacob Benesty, M. Sondhi, and Yiteng Huang, editors, Handbook of Speech Processing, chapter 28, pages 559-582. Springer, 2008.
[25] Kanishka Rao, Fuchun Peng, Haşim Sak, and Françoise Beaufays. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks. In ICASSP, 2015.
| [] |
[
"CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems",
"CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems"
] | [
"Kushal Chawla 1chawla@ict.usc.edu ",
"Jaysa Ramirez 2jramirez@rollins.edu3rene.clever@lc.cuny.edu4jonmay@isi.edu \nRollins College\nWinter ParkUSA\n",
"Rene Clever \nCUNY Lehman College\nBronxUSA\n",
"Gale Lucas lucas@ict.usc.edu ",
"Jonathan May ",
"Jonathan Gratch gratch@ict.usc.edu ",
"\n1&4 University of Southern California\nLos AngelesUSA\n"
] | [
"Rollins College\nWinter ParkUSA",
"CUNY Lehman College\nBronxUSA",
"1&4 University of Southern California\nLos AngelesUSA"
] | [
"Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"
] | Automated systems that negotiate with humans have broad applications in pedagogy and conversational AI. To advance the development of practical negotiation systems, we present CaSiNo: a novel corpus of over a thousand negotiation dialogues in English. Participants take the role of campsite neighbors and negotiate for food, water, and firewood packages for their upcoming trip. Our design results in diverse and linguistically rich negotiations while maintaining a tractable, closeddomain environment. Inspired by the literature in human-human negotiations, we annotate persuasion strategies and perform correlation analysis to understand how the dialogue behaviors are associated with the negotiation performance. We further propose and evaluate a multi-task framework to recognize these strategies in a given utterance. We find that multi-task learning substantially improves the performance for all strategy labels, especially for the ones that are the most skewed. We release the dataset, annotations, and the code to propel future work in human-machine negotiations: https:// github.com/kushalchawla/CaSiNo. Yaniv Leviathan and Yossi Matias. 2018. Google duplex:An ai system for accomplishing real-world tasks over the phone. | 10.18653/v1/2021.naacl-main.254 | [
"https://www.aclweb.org/anthology/2021.naacl-main.254.pdf"
] | 232,417,432 | 2103.15721 | 5b6f0ffa5f1e690325499bda485478d80c8c8ff5 |
CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems
June 6-11, 2021
Kushal Chawla chawla@ict.usc.edu
Jaysa Ramirez jramirez@rollins.edu
Rollins College
Winter Park, USA
Rene Clever rene.clever@lc.cuny.edu
CUNY Lehman College
Bronx, USA
Gale Lucas lucas@ict.usc.edu
Jonathan May jonmay@isi.edu
Jonathan Gratch gratch@ict.usc.edu
University of Southern California
Los Angeles, USA
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 6-11, 2021, page 3167
Automated systems that negotiate with humans have broad applications in pedagogy and conversational AI. To advance the development of practical negotiation systems, we present CaSiNo: a novel corpus of over a thousand negotiation dialogues in English. Participants take the role of campsite neighbors and negotiate for food, water, and firewood packages for their upcoming trip. Our design results in diverse and linguistically rich negotiations while maintaining a tractable, closed-domain environment. Inspired by the literature in human-human negotiations, we annotate persuasion strategies and perform correlation analysis to understand how the dialogue behaviors are associated with the negotiation performance. We further propose and evaluate a multi-task framework to recognize these strategies in a given utterance. We find that multi-task learning substantially improves the performance for all strategy labels, especially for the ones that are the most skewed. We release the dataset, annotations, and the code to propel future work in human-machine negotiations: https://github.com/kushalchawla/CaSiNo.
Introduction
Negotiations are highly prevalent in our interactions, from deciding who performs the household chores to high-stake business deals to maintaining international peace. Automatic negotiation systems are helpful in providing cost-effective social skills training (Johnson et al., 2019) and for advanced capabilities of AI assistants such as Google Duplex (Leviathan and Matias, 2018).
A negotiation requires understanding the partner's motives along with effective reasoning and communication, which is challenging for an automated system. Prior work in human-machine negotiations primarily uses strict communication protocols such as a pre-defined menu of options (Mell and Gratch, 2016). Systems involving free-form dialogue are limited due to a lack of interdisciplinary efforts in NLP and Computational Social Science in this direction. Initial efforts in building dialogue systems for negotiations looked at game environments (Asher et al., 2016; Lewis et al., 2017). DealOrNoDeal (Lewis et al., 2017) involves two negotiators who split given quantities of three arbitrary items: books, balls, and hats. This provides a concrete structure to the negotiation, keeps the design tractable, and ensures a reliable evaluation based on the final points scored. Many practical solutions in negotiations follow similar closed-domain designs (Mell and Gratch, 2016). However, most of the dialogues in these game settings reduce to merely an exchange of offers from both sides. For instance, 'i need the book and the balls you can have the hat' or 'i want the ball and 2 books' in DealOrNoDeal. One reason for this lack of richness in language use is that the items are arbitrarily defined, that is, there is no semantic context around the items that the participants are negotiating for. Hence, this setup fails to capture many realistic aspects of negotiations such as small talk, preference elicitation, emotion expression, and convincing strategies based on individual preferences and requirements. Emulating real-world negotiations is desirable for developing practical systems for social skills training and robust AI assistants that are useful in realistic scenarios.
On the other extreme, the CB dataset (He et al., 2018) involves buyer-seller negotiations to finalize the price of a given product. Targeting the collection of more open-ended dialogues, the participants are also encouraged to discuss side offers, such as free delivery or also selling other accessories at the same price. Although this promotes diversity and rich natural conversations, unfortunately, such open-ended domains make the evaluation of negotiation performance non-trivial, which also inhibits the practical applicability of the systems developed on such datasets. For instance, in skills training, it is desirable to judge the performance and provide critical feedback (Monahan et al., 2018).
To address these shortcomings, we design a novel negotiation task. Our design is based on a tractable closed-domain abstraction from the negotiation literature but is infused with a real-world camping scenario, resulting in rich dialogues for natural language research (Section 2). The task involves two participants who take the role of campsite neighbors and negotiate for additional Food, Water, and Firewood, based on individual preferences and requirements.
Based on this design, we collect CaSiNo: a corpus of 1030 Camp Site Negotiation dialogues in English. The dialogues contain various aspects of a realistic negotiation, such as rapport building, discussing preferences, exchanging offers, emotion expression, and persuasion with personal and logical arguments. We also collect the participants' satisfaction from the outcome and how much they like their opponents, both being important metrics in negotiations (Mell et al., 2019). We annotate 9 persuasion strategies that span cooperative to selfish dialog behaviors (Section 3). We perform an extensive correlational analysis to investigate the relationship among the final outcomes and explore how they relate to the use of negotiation strategies (Section 4). Further, we propose a multi-task framework with task-specific self-attention mechanisms to recognize these strategies in a given utterance (Section 5). Our insights form the foundation for the development of practical negotiation systems that engage in free-form natural conversations. We release the dataset along with the annotations to enable future work in this direction.
The CaSiNo Dataset
Our data was crowd-sourced on Amazon Mechanical Turk. We describe our design by following the journey of a specific participant in our study. Pre-Survey: We start by collecting demographics and psychological personality traits of the participants which relate to their negotiation behaviors. For demographics, we gather age, gender, ethnicity, and the highest level of education. We consider two measures of individual personality differences: Social Value Orientation or SVO (Van Lange et al., 1997) and Big-5 personality (Goldberg, 1990) that have been heavily studied in the context of negotiations (Bogaert et al., 2008;Curtis et al., 2015). SVO classifies the participants as Prosocial, who tend to approach negotiations cooperatively, or Proself, who tend to be more individualistic. Big-5 personality test assesses the participants on five dimensions: Extraversion, Agreeableness, Conscientiousness, Emotional Stability, and Openness to Experiences. Our participants exhibit diverse demography and psychological personality. We provide aggregate statistics in Appendix A.
Negotiation Training: Research shows that the average human is bad at negotiating (Wunderle, 2007;Babcock and Laschever, 2009), which can adversely impact the quality of the collected dialogues and consequently, the system trained on them. One way to mitigate this is by using reinforcement learning to optimize on a reward that measures the negotiation performance. RL training has proved to be challenging and often leads to degeneracy (Lewis et al., 2017). Further, this ignores prior work in human-human negotiations that provides guidelines for achieving favorable outcomes in realistic negotiations (Lewicki et al., 2016).
To incorporate these best practices in a principled way, we design a training module. Each participant is asked to watch a video tutorial before their negotiation. The tutorial takes an example of a negotiation between two art collectors to encourage them to follow some of the best practices in negotiations (Lewicki et al., 2016), including 1) Starting with high offers, 2) Discussing preferences, 3) Appropriate emotion expression, and 4) Discussing individual requirements to make convincing arguments. This results in a rich and diverse set of dialogues, as we explore further in later sections. We release the complete video tutorial publicly, with the hope that it promotes reproducibility and helps researchers to design similar data collection experiments in the future: https://youtu.be/7WLy8qjjMTY.

Preparation Phase: Several requirements guide our design choices: 1) Semantically Meaningful: The context must be meaningful and relatable for MTurk participants and for anyone who negotiates with the system trained on this dataset. This allows the participants to indulge in personal and contextual conversations, making the resulting system more useful for downstream applications. 2) Symmetric task: The task should be symmetric for both the participants so that a dialogue system may leverage both sides of the conversations during modelling, and 3) Symmetric items: The items which the participants are negotiating for should be symmetric in the sense that an individual can resonate with any preference order assigned to them. Hence, every category of items can be more desirable over others depending on a real-world context.
Our scenario is an instance of a common and useful abstraction for studying negotiations in scientific literature known as the multi-issue bargaining task (Fershtman, 1990). The task involves campsite neighbors who negotiate for additional Food, Water, and Firewood packages, each with a total quantity of three. Instead of choosing an arbitrary set of items, each item represents quite relatable, basic requirements that one might plausibly have for an actual camping trip. The items were only broadly defined to encourage diversity. One challenge when dealing with a realistic context like camping is the inherent bias that one might have towards one item over others, which violates our symmetry constraint. To mitigate this, we emphasize that the camping authorities have already provided the basic essentials and the participants will be negotiating for extras, based on their individual plans for camping. We present the negotiation scenario, as seen by participants, in Appendix B.
The three item types are assigned a random priority order for every participant using a permutation of {High, Medium, Low}. As in realistic negotiations, the participants are asked to prepare for their negotiation by coming up with justifications for the given preferences before the negotiation begins (precise question format in Appendix G), for instance, needing more water supplies for a hike or firewood for a bonfire with friends. We find that the participants are able to come up with a variety of arguments from their own camping experiences, such as Personal Care, Recreational, Group Needs or Emergency requirements. We illustrate some of these arguments in Appendix B. The participants were encouraged to use their justifications as they feel fit, to negotiate for a more favorable deal.
Negotiation Dialogue: Finally, two participants are randomly paired to engage in an alternating dialogue for a minimum total of 10 utterances. We also provide the option to use emoticons for four basic emotions, namely, happy, sad, anger, and surprise. After coming to an agreement, the participants submit the deal formally using the provided options. They can also walk away from the negotiation if they are unable to come to an agreement. The primary evaluation metric to assess the negotiation performance is the number of points scored by a negotiator. Every High, Medium, and Low priority item is worth 5, 4, and 3 points respectively, such that a participant can earn a maximum of 36 points if she is able to get all the available items.
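The scoring rule above is simple enough to express directly in code. The following Python sketch (not taken from the released repository; the function and variable names are our own) computes a negotiator's points for an agreed deal, using the final deal of participant P1 from Table 1 as a check.

```python
# Minimal sketch of the CaSiNo scoring rule: each claimed High/Medium/Low
# priority item is worth 5/4/3 points, so claiming all 3+3+3 items yields 36.
POINTS = {"High": 5, "Medium": 4, "Low": 3}

def deal_points(priorities, claimed):
    """priorities: item -> 'High'/'Medium'/'Low'; claimed: item -> quantity (0-3)."""
    return sum(POINTS[priorities[item]] * qty for item, qty in claimed.items())

if __name__ == "__main__":
    priorities = {"Water": "High", "Food": "Medium", "Firewood": "Low"}  # P1 in Table 1
    claimed = {"Water": 3, "Food": 1, "Firewood": 0}                     # deal agreed in Table 1
    print(deal_points(priorities, claimed))                              # 5*3 + 4*1 + 3*0 = 19
```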
Post-Survey: We collect two other evaluation metrics relevant to negotiations: 1) 5-point scale for satisfaction (How satisfied are you with the negotiation outcome?) and 2) 5-point scale for opponent likeness (How much do you like your opponent?). Back-to-back negotiation (Aydogan et al., 2020) is an interesting case where the relationship with the partner is crucial. In such a case, a poor relationship in earlier negotiations can adversely impact the performance in later rounds. Further, for some cases in CaSiNo, we observed that the participants were satisfied with their performance, despite performing poorly because they thought that the arguments of their partners for claiming the items were justified. One might argue that this is still a successful negotiation. Hence, we believe that all the metrics defined in the paper are important in the context of real-world negotiations and propose that they should be looked at collectively. We will further analyze these outcome variables in Section 4 where we study the correlations between the participants' negotiation behaviors and these metrics of negotiation performance.
Data Collection: We collected the dataset over a month using the ParlAI framework (Miller et al., 2017). Screenshots from the interface are provided in Appendix G. The participant pool was restricted to the United States, with a minimum 500 assignments approved and at least 95% approval rate. We post-process the data to address poor quality dialogues and inappropriate language use. We describe these post-processing steps in Appendix C.
Finally, we end up with 1030 negotiation dialogues between 846 unique participants. On average, a dialogue consists of 11.6 utterances with 22 tokens per utterance. We present a sample dialogue with the associated participant profile in Table 1. The participants are rewarded a base amount of $2 for their time (around 20 minutes). Further, they were incentivized with a performance-based bonus of 8.33 cents for every point that they are able to negotiate for. If a participant walks away, both parties get the amount corresponding to one high item or the equivalent of 5 points. The bonus is paid out immediately after the task to encourage participation. We discuss ethical considerations around our data collection procedure in Section 8. Overall, the participants had highly positive feedback for our task and could relate well with the camping scenario, engaging in enjoyable, interesting, and rich personal conversations. We discuss their feedback with examples in Appendix D.

Strategy Annotations

Table 2: Utterance-level strategy annotations. α refers to Krippendorff's alpha among 3 annotators on a subset of 10 dialogues (∼120 utterances). An utterance can have multiple labels.
Vouch-Fairness: "That would leave me with no water." (α = 0.62)
Self-Need: "I can't take cold and would badly need to have more firewood." (α = 0.75)
Other-Need: "we got kids on this trip, they need food too." (α = 0.89)
Non-strategic: "Hello, I need supplies for the trip!" (1455 utterances)
After collecting the dataset, we developed an annotation schema to analyze the negotiation strategies used by the participants, and to facilitate future work. We follow the conceptual content analysis procedure (Krippendorff, 2004) to design the scheme. Being a natural conversational dataset, we find several instances where a strategy spans multiple sentences in an utterance, as well as instances where the same sentence contains several strategies. Hence, we define an utterance as the level of analysis. Each utterance is annotated with one or more labels. If no strategy is evident, the utterance is labelled as Non-strategic. Although we label entire utterances, self-attention shows some promise as an automatic way to identify which part of an utterance corresponds to a given strategy, if desirable for a downstream application (Section 5).
Human negotiation behaviors can be broadly categorized as Prosocial, which promote the interests of others or the common good, and Proself, which tend to promote self-interest in the negotiations (Yamagishi et al., 2017; Van Lange et al., 2007). Another important criterion is discussing preferences. Prior work suggests that humans negotiate with a fixed-pie bias, assuming that the partner's preferences align, and hence achieving sub-optimal solutions (Kelley, 1996). Based on these distinctions and manual inspection, we define 9 strategies used in the CaSiNo dataset. The usage of these negotiation strategies correlates with both the objective and subjective metrics of negotiation performance.
Prosocial
Prosocial strategies address the concerns of both self and the negotiation partner. We define three strategies that exhibit generic Prosocial behavior. Small-Talk: Participants engage in small talk while discussing topics apart from the negotiation, in an attempt to build a rapport with the partner. For example, discussing how the partner is doing during the pandemic or sharing excitement for the camping trip. Rapport has been well studied to positively impact negotiation outcomes (Nadler, 2003). Small talk usually appears either at the beginning or at the end of the negotiation.
Empathy: An utterance depicts Empathy when there is evidence of positive acknowledgments or empathetic behavior towards a personal context of the partner, for instance, towards a medical emergency. Empathy promotes Prosocial behaviors in interpersonal interactions (Klimecki, 2019).
Coordination is used when a participant promotes coordination among the two partners. This can be, for instance, through an explicit offer of a trade or mutual concession, or via an implicit remark suggesting to work together towards a deal. Further, we define two strategies that relate to Prosocial behavior about individual preferences:
No-Need is when a participant points out that they do not need an item based on personal context such as suggesting that they have ample water to spare. No-Need can directly benefit the opponent since it implies that the item is up for grabs.
Elicit-Pref is an attempt to discover the preference order of the opponent. CaSiNo covers a range of scenarios based on how aligned the preferences of the two parties are. Generally, we find that discussing preferences upfront leads to smoother negotiations without much back and forth.
Proself
Proself behavior attempts to serve personal performance in a negotiation. We define two strategies exhibiting generic Proself behavior.
Undervalue-Partner or UV-Part, refers to the scenario where a participant undermines the requirements of their opponent, for instance, suggesting that the partner would not need more firewood since they already have the basic supplies or a suggestion that there might be a store near the campsite where the partner can get the supplies instead.
Vouch-Fairness is a callout to fairness for personal benefit, either when acknowledging a fair deal or when the opponent offers a deal that benefits them. For instance, through an explicit callout 'this deal is not fair', or implicitly saying 'this does not leave me with anything'.
Finally, we consider two Proself strategies that relate to individual preferences:
Self-Need refers to arguments for creating a personal need for an item in the negotiation. For instance, a participant pointing out that they sweat a lot to show preference towards water packages.
Other-Need is similar to Self-Need but is used when the participants discuss a need for someone else rather than themselves. For instance, describing the need for firewood to keep the kids warm. Negotiating on behalf of others is densely studied as a competitive strategy, where negotiators engage in contentious, demanding, and inflexible bargaining behaviors (Adams, 1976;Clopton, 1984). Collecting annotations: Three expert annotators 1 independently annotated 396 dialogues containing 4615 utterances. The annotation guidelines were iterated over a subset of 5 dialogues, while the reliability scores were computed on a different subset of 10 dialogues. We use the nominal form of Krippendorff's alpha (Krippendorff, 2018) to measure the inter-annotator agreement. We provide the annotation statistics in Table 2. Although we release all the annotations, we skip Coordination and Empathy for our analysis in this work, due to higher subjectivity resulting in relatively lower reliability scores. For the rest of the paper, we will refer to this annotated subset of CaSiNo as CaSiNo-Ann.
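For readers who want to reproduce agreement numbers like those in Table 2, the third-party krippendorff Python package implements the nominal form of the metric. The snippet below is an illustrative sketch, not the annotation scripts used by the authors, and the toy ratings matrix is made up.

```python
# Hedged sketch of an inter-annotator agreement check with the `krippendorff`
# package (nominal alpha, one binary reliability matrix per strategy label).
import numpy as np
import krippendorff

# rows = annotators, columns = utterances; 1 = strategy present, 0 = absent
ratings_self_need = np.array([
    [1, 0, 0, 1, 0, 1],   # annotator A
    [1, 0, 1, 1, 0, 1],   # annotator B
    [1, 0, 0, 1, 0, 1],   # annotator C
])

alpha = krippendorff.alpha(reliability_data=ratings_self_need,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha (Self-Need, toy data): {alpha:.2f}")
```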
Correlational Analysis
We next perform correlational analysis on CaSiNo-Ann to understand how the points scored by a participant relate to their satisfaction from the outcome and their opponent perception. We further shed light on what kind of strategies are more likely to lead to better outcomes. Such insights motivate our experiments on strategy prediction and would direct future efforts in building negotiation systems. We present complete results in Appendix E and discuss the significant observations below.
Relationship among outcome variables: We consider the points scored, satisfaction from the outcome, and opponent likeness. We find that the points scored by a participant are positively correlated with their own satisfaction (r=0.376, p < 0.01) and with their perception of the opponent (r=0.276, p < 0.01). Similar trends are visible with the corresponding variables of the negotiation partner as well, suggesting that the participants secured more points while still maintaining a positive perception in the eyes of their opponents.
Discovering the integrative potential: Integrative potential in a negotiation is based on how aligned the partner preferences are. Complete alignment leads to a distributive (or zero-sum) negotiation, having a low integrative potential where the benefit of one results in a high loss for the other. A negotiation is integrative if the preferences do not align, allowing for solutions that maximize mutual points. We assign each dialogue either 1, 2, or 3, depending on whether the integrative potential is low, medium, or high. The maximum joint points possible in these cases are 36, 39, and 42 respectively. We find that the participants are able to discover this integrativeness, thereby achieving significantly more joint points as the potential increases (r = 0.425, p < 0.001).
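The 36/39/42 joint-point ceilings follow directly from the 5/4/3 scoring and can be recovered by brute force. The sketch below (our own illustration, not project code) enumerates every split of the three 3-unit items for a pair of preference profiles and reports the maximum achievable joint points.

```python
# Enumerate all 4^3 splits of the Food/Water/Firewood quantities and take the
# allocation that maximizes the sum of both negotiators' points.
from itertools import product

POINTS = {"High": 5, "Medium": 4, "Low": 3}
ITEMS = ["Food", "Water", "Firewood"]

def max_joint_points(prio1, prio2):
    best = 0
    for split in product(range(4), repeat=3):          # quantity P1 takes of each item
        p1 = sum(POINTS[prio1[i]] * q for i, q in zip(ITEMS, split))
        p2 = sum(POINTS[prio2[i]] * (3 - q) for i, q in zip(ITEMS, split))
        best = max(best, p1 + p2)
    return best

same     = {"Food": "High", "Water": "Medium", "Firewood": "Low"}
opposite = {"Food": "Low", "Water": "Medium", "Firewood": "High"}
print(max_joint_points(same, same))      # 36: low integrative potential
print(max_joint_points(same, opposite))  # 42: high integrative potential
```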
Use of negotiation strategies: Overall, we find that greater use of Prosocial strategies shows a general pattern to predict higher ratings for both subjective measures of satisfaction and likeness, for self as well as the partner. Engaging in small talk shows significant positive correlations (ps < 0.01), confirming our hypothesis from prior work that it relates to healthier relationships among the negotiators. Similar effects are visible for No-Need (ps < 0.05), where the participant decides to let go one of their low-priority items. Since this directly benefits the opponent, it is likely to improve the participant's perception. On the other hand, Proself strategies show a general pattern to predict lower satisfaction and likeness ratings for both self and the partner. We observe significant negative correlation for both Other-Need and Vouch-Fair (ps < 0.01). Further, we find that these competitive strategies are also associated with lower points scored by the participant and the opponent, and hence, the joint points (ps < 0.01). These correlations are not influenced by the integrative potential in the scenario, as when the integrated potential is controlled for, the effects generally remain unchanged and demonstrate the same patterns.
We further observe that the dialogue behavior of a negotiator significantly relates to the behavior of their opponent, where both tend to use similar negotiation strategies (ps < 0.01). Our findings show that Prosocial strategies are more likely to be associated with Prosocial behavior in the opponents and achieve more favorable outcomes in our negotiation scenario as compared to Proself. These results suggest that an automated negotiator can benefit by employing different strategies based on Prosocial or Proself behaviors of the opponent, for instance, by matching Prosocial behaviors but not Proself. The first step in this direction is to recognize them in a given utterance, which is our focus in the next section.
Strategy Prediction
For building an automated dialogue system that incorporates the negotiation strategies discussed above, an important first step is to build computational models that recognize their usage in the observed utterances. Hence, we explore the task of strategy prediction, given an utterance and its previous dialogue context.
Methodology
Pre-trained models have proved to be useful on a number of supervised tasks with limited in-domain datasets. Inspired by this success, we use BERT-base (Devlin et al., 2019) as the core encoding module. A natural way to use pre-trained models for our task is to fine-tune the model for every label independently in a binary classification setup, where the positive class represents the presence of a strategy, and the negative represents its absence. However, most of the utterances in the CaSiNo-Ann dataset are Non-strategic, resulting in a high imbalance where most of the data points belong to the negative class. As we later show, directly fine-tuning the BERT model fails to recognize the strategies for which the data is most skewed.
We instead propose a multi-task learning framework to allow parameter sharing between the different prediction tasks. Our architecture involves a common BERT-base encoder shared with all the tasks but uses task-specific self-attention to allow the model to focus on the most relevant parts of the input for each task separately. Consequently, this also enables interpretability by allowing us to visualize which parts of an utterance are attended for any given strategy. Our input consists of a finite size context window, which loses the turn index for a specific utterance. Hence, we also capture the turn position for each utterance using sinusoidal positional embeddings (Vaswani et al., 2017). We present the complete architecture in Figure 1.

Figure 1: Architecture for multi-task strategy prediction. + represents element-wise summation.
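A hedged PyTorch sketch of this design is shown below. It follows Figure 1 and the sizes reported in Appendix F (shared BERT-base encoder, one single-head self-attention block and one small head per strategy, a 32-dimensional sinusoidal turn-position feature, and 128/64-dimensional projections), but the class names, the use of nn.MultiheadAttention for the task-specific attention, and the exact fusion details are illustrative assumptions rather than the released implementation.

```python
# Illustrative multi-task strategy classifier: shared encoder, per-task attention.
import math
import torch
import torch.nn as nn
from transformers import AutoModel

STRATEGIES = ["Small-Talk", "Self-Need", "Other-Need", "No-Need",
              "Elicit-Pref", "UV-Part", "Vouch-Fairness"]

def sinusoidal(position, dim=32):
    # standard Transformer positional encoding, used here for the turn index
    pe = torch.zeros(dim)
    div = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe[0::2] = torch.sin(position * div)
    pe[1::2] = torch.cos(position * div)
    return pe

class MultiTaskStrategyModel(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-uncased")   # shared across tasks
        self.task_attn = nn.ModuleDict({
            s: nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
            for s in STRATEGIES
        })
        self.text_proj = nn.ModuleDict({s: nn.Linear(hidden, 128) for s in STRATEGIES})
        self.feat_proj = nn.ModuleDict({s: nn.Linear(32, 128) for s in STRATEGIES})
        self.heads = nn.ModuleDict({
            s: nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
            for s in STRATEGIES
        })

    def forward(self, input_ids, attention_mask, turn_feats):
        states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        logits = {}
        for s in STRATEGIES:
            attended, _ = self.task_attn[s](states, states, states,
                                            key_padding_mask=~attention_mask.bool())
            cls = attended[:, 0]                               # representation at [CLS]
            fused = self.text_proj[s](cls) + self.feat_proj[s](turn_feats)  # element-wise sum
            logits[s] = self.heads[s](fused).squeeze(-1)       # pair with BCEWithLogitsLoss
        return logits
```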
In-Domain Pre-Training (IDPT): CaSiNo-Ann is nearly 40% of the entire CaSiNo dataset. To incorporate the unannotated dialogues, we employ In-Domain Pre-training of the BERT encoder (Sun et al., 2019). For this purpose, we consider each unannotated dialogue as a separate sequence and fine-tune the BERT-base architecture on the Masked Language Modelling (MLM) objective (Devlin et al., 2019). This allows us to use the complete CaSiNo dataset in a principled way.
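In practice, this kind of in-domain MLM fine-tuning can be run with off-the-shelf Hugging Face components. The sketch below is an assumption-laden illustration: only the 0.3 masking probability and 20 epochs come from the text above; the base checkpoint name, batch size, and the placeholder unannotated_dialogues list are ours.

```python
# Hedged sketch of in-domain MLM pre-training (IDPT) on unannotated dialogues.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
import datasets

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# one string per unannotated dialogue (utterances joined into a single sequence)
unannotated_dialogues = ["Hello! How are you today? I could really use some extra firewood."]
corpus = datasets.Dataset.from_dict({"text": unannotated_dialogues})
corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.3)
args = TrainingArguments(output_dir="idpt-bert", num_train_epochs=20,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=corpus, data_collator=collator).train()
```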
Experiment Design
Evaluation Metrics: We compare our methods for each strategy label on the F1-score for the positive class (presence of the strategy label). To capture the overall performance, we report average F1 across all labels with uniform weights. Inspired by Joint Goal Accuracy from Dialog State Tracking (Kumar et al., 2020), we define another overall metric called Joint-A, which measures the percentage of utterances for which the model predicts all the strategies correctly.
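These three metrics are straightforward to compute from binary label matrices; the following sketch (our own, not the official evaluation script) assumes utterance-level 0/1 matrices of shape (num_utterances, num_labels).

```python
# Per-label positive-class F1, unweighted average F1, and Joint-A (all labels correct).
import numpy as np
from sklearn.metrics import f1_score

def evaluate(y_true, y_pred, labels):
    per_label = {lab: f1_score(y_true[:, j], y_pred[:, j], pos_label=1)
                 for j, lab in enumerate(labels)}
    avg_f1 = float(np.mean(list(per_label.values())))
    joint_a = float(np.mean(np.all(y_true == y_pred, axis=1)))
    return per_label, avg_f1, joint_a
```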
Methods: Fine-tuning the pre-trained models has achieved state-of-the-art results across many supervised tasks. Hence, our primary baseline is BERT-FT, which fine-tunes the BERT-base architecture for binary classification of each strategy label separately. We consider a Majority baseline, where the model directly outputs the majority class in the training data. We also implement a Logistic Regression model for each label separately based on a bag-of-words feature representation of the input utterance. We refer to this model as LR-BoW. We refer to our complete architecture presented in Figure 1 as Full, and consider its ablations by freezing the BERT layer (Freeze), removing task-specific self-attention (No Attn), or removing the turn position embeddings (No Feats). We also implement a simple over-sampling strategy where every utterance with at least one strategy is considered twice while training (referred to as OS). For IDPT, we fine-tune BERT for 20 epochs using a masking probability of 0.3. We also tried a lower masking probability of 0.15, however, in that case, the model is unable to learn anything useful on our relatively small dataset.
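As a concrete reference point for the simpler baseline, an LR-BoW model can be built with scikit-learn as sketched below; the vectorizer settings and helper name are assumptions, and one such classifier would be fit per strategy label.

```python
# Hedged sketch of the LR-BoW baseline: bag-of-words logistic regression per label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_lr_bow(utterances, labels_for_one_strategy):
    clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(utterances, labels_for_one_strategy)   # labels: 0/1 per utterance
    return clf
```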
Training Details: Our context window considers past 3 utterances and concatenates them using an EOS token. The embedding dimension is 768 for the encoder and the task-specific self-attention layers, each having only one attention head. We use the turn position embeddings of 32 dimensions. We train the models with Adam optimizer with a learning rate of 5e-05 and weight decay of 0.01. We use ReLU activation for feed-forward layers, and a dropout of 0.1 to prevent overfitting. The models were trained for a maximum of 720 iterations with a batch size of 64 (∼13 epochs). We checkpoint and evaluate the model after every 72 iterations and the best performing checkpoint on a held-out 5% validation set is used for evaluation. We provide further training details including specifics of the architecture design, computing infrastructure, and hyper-parameter tuning in Appendix F.
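The input construction described above can be sketched as follows; the choice of [SEP] as the concatenation token and the helper names are assumptions made for illustration, and the commented optimizer line shows where the reported learning rate and weight decay would be plugged in.

```python
# Illustrative context-window encoding: target utterance plus up to 3 previous turns.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def encode_with_context(dialogue, turn_index, window=3):
    context = dialogue[max(0, turn_index - window):turn_index + 1]
    text = f" {tokenizer.sep_token} ".join(context)      # separator-joined window
    return tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

# optimizer per the reported hyper-parameters (model defined as in Figure 1):
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, weight_decay=0.01)
```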
Results: Table 3 summarizes the results on 5-fold cross-validation. Majority baseline fails to recognize any of the strategies due to the data being skewed towards the negative class. It still achieves 39.4% Joint-A, indicating that these many utterances have none of the seven strategies present. Incorporating the bag-of-words features, LR-BoW performs much better than Majority. BERT-FT highly improves the performance on all strategies except No-Need and UV-Part, for which the dataset is the most skewed. However, our Full multi-tasking framework is able to tackle the imbalance in these strategies through parameter sharing between all tasks. It achieves 36.4% F1 for No-Need and 44.5% F1 for UV-Part, indicating more than 100% improvements in both the cases. The model also improves F1 scores for all other metrics, but the improvement is not that substantial. Relatively lower scores for Freeze and No Attn suggest that both fine-tuning and task-specific attention layers are essential for the performance. Turn position embeddings, however, only help for a few strategies, indicating the diverse usage of strategies in CaSiNo-Ann. Overall, we find that using over-sampling and in-domain pre-training further helps the performance, especially for No-Need and UV-Part. Although there is no clear winner among OS and IDPT, our final model, Full+IDPT+OS, that combines both these strategies performs the best for us, achieving an overall F1 score of 68.3% and 70.2% Joint Accuracy.

Table 3: Performance on strategy prediction task for 5-fold cross-validation. F1 score corresponds to the positive class.
Attention Visualization:
To understand if the model learns meaningful representations, we visualize the task-specific self-attention layers of the trained Full+IDPT+OS model. We consider two instances in Figure 2. For meaningful comparisons, the instances were picked randomly from the pool of all utterances that contain two strategies. As evident, the model is able to focus on the most relevant parts for each strategy label. For instance, in the case of Other-Need, the scores are higher where the participant talks about their kids needing more food. The token we gets the most attention, which is commonly used by the participants when referring to group needs. We see similar trends in the second case as well. Remarkably, this suggests that although our annotations are at an utterance level, it might be possible to automatically retrieve the most relevant phrases for any given strategy; this requires further investigation which we aim to explore in the future.
Related Work
Historically, negotiations have been widely studied across multiple disciplines, in game theory (Nash Jr, 1950), understanding human behaviour (Adair et al., 2001), and building automatic negotiation agents (Beam and Segev, 1997; Baarslag et al., 2016). Most efforts focused on agent-agent interactions (Williams et al., 2012; Lin et al., 2014; Cao et al., 2018), although there is an increasing interest in human-agent negotiations (Mell and Gratch, 2017) as well. DeVault et al. (2015) used a multi-issue bargaining design similar to ours. However, they focus on face-to-face negotiations, including speech and virtual embodied systems, which can be interesting future extensions to our current focus in chat-based dialogue systems. Other datasets looked at negotiation dialogues such as game settings (Asher et al., 2016; Lewis et al., 2017), and buyer-seller negotiations (He et al., 2018). These datasets have fueled a number of efforts on developing negotiation systems (Cheng et al., 2019; Parvaneh et al., 2019) and building a negotiation coach (Zhou et al., 2019). Our focus is on campsite negotiations, targeting a realistic and a closed-domain environment.

Several other related efforts have explored problems between task-oriented and open-domain scenarios, such as persuasion for a charity (Wang et al., 2019), anti-scam (Li et al., 2020), collecting cards in a maze (Potts, 2012), and searching for a mutual friend (He et al., 2017). Instead, we focus on rich personal negotiations, which differ from these tasks in their ultimate goal and downstream applications.
Conclusions and Future Work
We described the design and development of the CaSiNo dataset and the associated annotations. Our design is based on a relatable campsite scenario that promotes constrained, yet linguistically rich and personal conversations. We next plan to explore two main projects: first, extending the analysis to demographic and personality traits in the data, and second, using our insights towards the development of practical automated negotiation systems that engage in free-form dialogue and portray well-studied strategies from the prior negotiation literature. Our work fuels other tasks to advance the research in human-machine negotiations, such as predicting satisfaction and opponent perception from dialog behaviors, and building a feedback mechanism for skills training by identifying the use of pro-social versus pro-self strategies.
Finally, we note that there are many interesting extensions to our task design that make the scenario more complicated, but useful in specific realistic settings. For instance, incorporating more than two negotiating parties, and considering other modalities like facial expressions or embodied agents. In some realistic settings, the individual preferences may change during the negotiation and our setup assumes a fixed set of preferences throughout. Further, in complex settings, it may be possible to break down an individual item and claim sub-parts, such as negotiating for who gets an orange, but one party ends up taking the husk and the other takes the pulp for their own purposes. This is again not considered in our work and opens up exciting avenues for future work.
Broader Impact and Ethical Considerations
8.1 Data Collection
Our study was approved by our Institutional Review Board (IRB). Each participant signed an Informed Consent document at the beginning of the study which covered the purpose of the study, warned about potential discomfort, and noted the collection of data and its later use. Further, the participants were informed that they can withdraw at any time. They were also instructed to not use any offensive or discriminative language. The compensation was determined in accordance with the fairness rules defined by our IRB approval process. Additionally, we release the anonymized version of the data for future work by the research community. All personally identifiable information such as MTurk Ids or HIT Ids was removed before releasing the data. Lastly, any mention of the demographics or the psychological personality of the participants is based on self-identified information in our pre-survey and standard procedures of collecting personality metrics in the literature.
Automatic Negotiation Systems
Students entering the modern workforce must have a number of interpersonal skills that are crucial across a wide range of jobs. One of the key interpersonal skills needed to address conflicts and work well with others is the ability to negotiate. Unfortunately, research shows that the average human is bad at negotiating. This can adversely impact work opportunities (Babcock and Laschever, 2009), legal settlements (Eisenberg and Lanvers, 2009), and cross-cultural border peace (Wunderle, 2007). The typical way to teach negotiation skills to students is by in-class simulations, which are expensive. Automated systems can dramatically reduce the costs of, and increase access to, negotiation training. Systems developed on CaSiNo would be useful in this context. Further, the techniques developed find use-cases for advancing conversational AI and imparting the negotiation skills to existing AI assistants, making them more aware of our preferences and requirements. One such prototype is Google Duplex (Leviathan and Matias, 2018), where the AI system engages in a simple form of negotiation to book a haircut appointment over the phone. How humans negotiate has been actively studied for decades in Economics, Psychology, and Affective Computing (Carnevale and Pruitt, 1992). With this huge progress in our understanding of humanhuman negotiations, ethics has also been a wellstudied topic in the literature (Lewicki et al., 2016). Primary concerns include the acts of emotion manipulation, deception, bias, and misrepresentation. Naturally, these ethical concerns may creep into the automated systems, trained on a human-human negotiation dataset.
To mitigate these ethical impacts, we recommend that standard guidelines for deploying conversational AI assistants should be followed. It is essential to maintain transparency about the identity of the system. Ethical principles must be in place before the deployment of such systems with a regular update cycle. Our camping scenario is quite relatable to anyone who negotiates with the system, hence, it is important to be upfront about the potential behaviors of the deployed system. We recommend continuous monitoring by keeping humans in the loop, ensuring that the system is neither offensive nor discriminative. Further, it should be made easy for the users negotiating with the system to directly contact the team behind the deployment. Finally, any data which is collected during the deployment phase should be informed to the users and its future purpose should be properly laid out.
A Pre-Survey
After an internal pilot with 9 participants, the entire CaSiNo dataset was collected on Amazon Mechanical Turk over a period of a month. In total, 846 subjects took part in our data collection study. The statistics presented in this section are based on self-identified demographical attributes and standard ways of collecting personality traits from the literature. We had a highly diverse participant pool, representing different age groups, genders, ethnic backgrounds, and education levels. The mean age among our participants is 36.97 with a standard deviation of 10.81. One participant was removed from this computation since the age entered was 3, which we believed to be in error. Among the participants, 472 identified themselves as Female, 372 were Male, and 2 belonged to the Other category. While most of the participants were White American (625 in count), our study also involved a mix of Asian American, Black or African American, Hispanic or Latino, and Multi-Racial groups, among others. The most common highest level of education was found to be a 4-year Bachelor degree (346 participants), although the complete pool represents a mixture of Master and PhD degree holders, 2-year and 4-year college graduates without degrees, and high school graduates, among others. For the personality traits, 364 participants were classified as Proself, 463 as Prosocial, and 19 were unclassified based on their Social Value Orientation 2 . The mean scores for the Big-5 personality traits were found to be as follows: Agreeableness: 5.27, Conscientiousness: 5.6, Emotional Stability: 4.91, Extraversion: 3.69, Openness to Experiences: 5.04. We use the Ten-Item Personality Inventory (TIPI) 3 to compute these attributes, where each of them takes a value between 1 and 7.
B Preparation Phase
We present the scenario description seen by the participants in Table 4. Several arguments that the participants come up with are presented in Table 5.

2 https://static1.squarespace.com/static/523f28fce4b0f99c83f055f2/t/56c794cdf8baf3ae17cf188c/1455920333224/Triple+Dominance+Measure+of+SVO.pdf
3 https://gosling.psy.utexas.edu/scales-weve-developed/ten-item-personality-measure-tipi/ten-item-personality-inventory-tipi/

Table 4: The camping scenario description as seen by the participants in our data collection.
Imagine that you are on a camping trip! Woohoo! Apart from some basic amount of supplies which are provided to everyone, you can collect some additional food packages, water bottles and firewood, to make your camping trip even better. Since these are limited in quantity, you will have to split these additional packages with your campsite neighbor! Each of these items will be of either High, Medium or Low priority for you. Each of them only has an available quantity of 3. You will negotiate with another MTurker by chatting in English, using reasons from your personal experiences to justify why you need additional packages apart from the basic supplies. Try hard to get as many items as you can!
C Data Post-processing steps
We list the data post-processing and filtering steps below:
1. Removal of incomplete dialogues: During the data collection, many negotiation sessions could not be completed due to one of the participants' disconnecting in the middle. Any dialogue for which we had missing data, including pre-survey and post-survey responses for both the participants, was removed from the final dataset.
2. Removal of bad quality dialogues: We also removed dialogues where we observed a lack of effort or an irrelevant dialogue between the participants. We removed dialogues where the participants used very short utterances or failed to answer the dummy questions about their own preferences correctly, suggesting a lack of effort. Further, we removed the instances where the participants talked about the MTurk task itself, rather than the negotiation. These cases were identified based on a list of keywords: {'mturk', 'amt', 'turns', 'messages', 'amazon', '10'} (a minimal filtering sketch is shown after this list). In a few cases, it was possible to retain the complete dialogue structure by just removing a few utterances. Hence, in these cases, we only removed the irrelevant utterances, while retaining the rest of the dialogue and the associated metadata.
3. Tackling inappropriate language use: Rarely, some participants also used inappropriate language in their utterances. These dialogues were identified using the lexicon of English swear words on Wikipedia 4 . All these dialogues were also removed from the final dataset.
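A minimal sketch of the keyword-based filter mentioned in step 2 is given below; the helper names are illustrative and the released post-processing scripts may differ.

```python
# Drop utterances that mention the crowdsourcing task itself rather than the negotiation.
META_KEYWORDS = {"mturk", "amt", "turns", "messages", "amazon", "10"}

def mentions_task_itself(utterance):
    tokens = utterance.lower().split()
    return any(tok.strip(".,!?") in META_KEYWORDS for tok in tokens)

def filter_dialogue(utterances):
    """Keep the dialogue but remove only the offending utterances."""
    return [u for u in utterances if not mentions_task_itself(u)]
```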
D Participant Feedback
Role-playing has been a key technique to teach negotiation skills in classroom settings. One of the key application areas for automated negotiation systems is to augment such exercises by allowing the human participants to negotiate with an AI and practice their social skills. To maximize the utility of the system developed using our dataset, we choose the camping scenario, which we expected to be easily relatable for our participants and also for any individual who negotiates with a system developed on our dataset. This is essential to ensure that the collected dialogues are engaging, interesting, and capture the rich personal context of the individuals, albeit in a closed-domain setting. One way to judge whether the participants are able to relate to the scenario is via their feedback after the study. With this in mind, we used a feedback column in the Post-survey and asked several questions to the participants throughout the data collection process. These questions included: 1) How was your overall experience? 2) Were you able to see yourself in the 'role' and follow best practices?, 3) Could you relate to camping?, and 4) How helpful was the preparation phase? Based on manual inspection, we observed an overall positive feedback for all the above questions. Most of the participants were able to easily relate to camping. They frequently pointed out that the experience was 'fun', 'interesting', and 'nice'. Many saw this as an opportunity to talk to someone during these tough times of the pandemic. Several cherry-picked feedback responses which indicate that the participants enjoyed the task as a whole and were in fact able to connect well and engage in the negotiation, have been provided in Table 6.
E Correlational Analysis
The analysis discussed in the paper is presented in Tables 7, 8, 9, and 10.
F Strategy Prediction
F.1 Architecture
We provide some more details on the strategy prediction multi-task architecture in this section. The self-attention layer is itself represented using the BERT encoder architecture, but with a single transformer layer and just one attention head. After the self-attention layer, we first extract the 768 dimensional representation for the [CLS] token. This is passed through a feed-forward network, which converts it to 128 dimensions. The feature embedding is also converted to a 128 dimensional vector using a feed-forward network. Both the above embeddings are then combined using an element-wise summation, which further passes through two feed-forward layers with hidden dimensions of 64 and 1, and a sigmoid layer to finally output the probability for each annotation strategy.
F.2 Computing Infrastructure
All experiments were performed on a single Nvidia Tesla V100 GPU. The training takes two hours to complete for a single model on all the cross-validation folds.
F.3 Training Details
To search for the best hyperparameters, we use a combination of randomized and manual search for the Full model. For each cross-validation fold, 5% of the training data was kept aside for validation. The metric for choosing the best hyper-parameters is the mean F1 score for the positive class on the validation dataset. The mean is over all the labels and over 5 cross-validation folds. We vary the learning rate in {3e-5, 4e-5, 5e-5}, weight decay in {0.0, 0.01, 0.001}, and dropout in {0.0, 0.1, 0.2, 0.3}. The rest of the hyperparameters were fixed based on the available computational and space resources. We report the best performing hyper-parameters in the main paper, which were used for all the experiments. We report the performance on the validation set corresponding to the chosen hyper-parameters and the number of trainable parameters in Table 11.
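The search described above can be organized as a small random-search loop over the reported grid; in this sketch (ours, not the released tuning code), train_and_validate stands in for whatever routine trains the Full model with a given configuration and returns the mean validation F1.

```python
# Illustrative random search over the learning-rate / weight-decay / dropout grid.
import itertools
import random

GRID = list(itertools.product([3e-5, 4e-5, 5e-5],        # learning rate
                              [0.0, 0.01, 0.001],        # weight decay
                              [0.0, 0.1, 0.2, 0.3]))     # dropout

def search(train_and_validate, num_trials=12, seed=0):
    random.seed(seed)
    best_f1, best_cfg = -1.0, None
    for lr, wd, dropout in random.sample(GRID, num_trials):
        f1 = train_and_validate(lr=lr, weight_decay=wd, dropout=dropout)
        if f1 > best_f1:
            best_f1, best_cfg = f1, (lr, wd, dropout)
    return best_cfg, best_f1
```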
G Screenshots from the data collection interface
To provide more clarity on the data collection procedure, we provide several screenshots from our interface in Figures 3, 4, 5, and 6. We design the pre-survey using the Qualtrics platform 5 . The rest of the data collection is based on the ParlAI framework (Miller et al., 2017).
Table 8: Pearson Correlation Coefficient (r) between integrative potential and the joint negotiation performance. *** denotes significance with p < 0.001.
                        Joint Points
Integrative potential   .425***

Figure 2: Visualizing task-specific self-attention layers for two examples from the test dataset for the first cv fold. The heatmap shows the attention scores for each token in the utterance for corresponding strategy labels.

Figure 3: Screenshots from the data collection interface: Task Preview. This is a brief task description which the MTurkers see before signing up for our data collection task.

Figure 4: Screenshots from the data collection interface: Participant On-boarding. (a) Onboarding Phase 1: The first step takes the participant to Qualtrics, which collects the demographics, introduces the camping scenario, and gives a tutorial on negotiation best practices. (b) Onboarding Phase 2: In this phase, we explicitly ask the participants to come up with arguments from their past experiences, which justify their preferences. The preference order is randomly assigned by us. This provides a personal context around the negotiation for each participant.

Figure 5: Screenshots from the data collection interface: Chat Interface. (a) Chat Interface: The right portion allows two participants to negotiate in English using alternating messages. They also have the option to use emoticons. Once they come to an agreement, one of the participants must enter the exact deal on the left. (b) Response to the Deal: When one of the participants enters the deal, the other gets an option to either accept, reject, or walk away from the deal. In the CaSiNo dataset, a participant walks away in 36 dialogues.

Figure 6: Screenshots from the data collection interface: Post-Survey. Once the deal is accepted (or someone walks away), both the participants are asked to fill in the post-survey having the above questions. The figure contains dummy responses.
Table 1: Sample dialogue from the CaSiNo dataset. P1 and P2 represent two participants in our study.

Preferences & Arguments
P1: High: Water: We like to go on runs and it increases the need of this. Medium: Food: Food overall is a good mood booster. Low: Firewood: We do not care for fire and it is not necessary to us.
P2: High: Food: Food really increases everyones morale. Medium: Firewood: We like to have a large fire. Low: Water: We don't drink water that often.

Conversation (annotations in brackets)
P1: How are you today? Did you have any preferences on the supplies we will be trading? [Small-Talk, Coordination, Elicit-Pref]
P2: I am good. How about yourself? I think I would like some firewood to start off with. We like to have bigger fires. What about you? [Small-Talk, Self-Need, Other-Need, Elicit-Pref]
P1: I am good as well. That is good to hear that you like to have bigger fires as we do not care much for that. We would much rather have some extra water. [Small-Talk, Empathy, No-Need]
P2: Water is a little important to us too though, if possible maybe we can split that or maybe we can get some more food in replacement. [Coordination]
P1: That may be possible.... What did you have in mind for the food replacement? [Non-strategic]
P2: You can have all the water if we can have all the food? [Non-strategic]
P1: I dont think I am okay with that. Food is essential to our groups morale when camping. We would like 1 additional food preferably. [Self-Need, Other-Need]
P2: Well you guys did say you did not care much about large fires. What if you gave all the firewood in replace for the water and you can still keep 1 food? [UV-Part, Coordination]
P1: So I would get 3 water and 1 food and youd get 3 firewood and 2 food? [Non-strategic]
P2: Yea that seems like an alright trade to me [Non-strategic]
P1: Hmm... alright then [Non-strategic]
P2: Submit-Deal
P1: Accept-Deal
Table 5: Example arguments that the participants come up with for their individual requirements during the preparation phase. The categories defined are not exhaustive.

Category: Personal Care
  Food: because I'm normally eat more because of my big size
  Water: I have to take a lot of medicine so hydration is very important
  Firewood: I have arthritis and being sure I am warm is important for my comfort.
Category: Recreational
  Food: Need many snacks throughout the day for energy to hike
  Water: I am a very active camper. I like to hike when I camp and I once ran out of water during a strenuous hike.
  Firewood: I like having campfires so I need all the firewood.
Category: Group Needs
  Food: I have two teenage boys who require a lot of food, especially when expending so much energy with all the activities of camping.
  Water: I need more water because I have more people to keep hydrated and do not have enough.
  Firewood: I need more firewood due to having several people join on the trip and needing a bigger fire overall.
Category: Emergency
  Food: Some could have been damaged during the trip. I would need more.
  Water: our car overheated we had to use the water
  Firewood: It may get cold and firewood can be hard to come by at certain campsites.
Table 6: A few positive feedback responses which we obtained from the participants during the collection of the CaSiNo dataset.
"I could do this all day"
"I am camping right now!"
"My partner had better reasons for needing the firewood"
"I enjoyed talking about camping, I haven't been in a while. It reminded me of all of the things that I used to do."
"The best thing I did was ask him what his preferences were. He had no interest in firewood which was my highest priority."
Table 7: Pearson Correlation Coefficients (r) between the outcome variables. Variables with P. prefix denote the corresponding attributes of the negotiation partner of an individual. These correlations have been computed on the entire CaSiNo dataset. * denotes significance with p < 0.05 (2-tailed). ** denotes significance with p < 0.01 (2-tailed).

                   Points-Scored   Satisfaction   Opp-Likeness
Points-Scored      1               .376**         .276**
Satisfaction       .376**          1              .702**
Opp-Likeness       .276**          .702**         1
P.Points-Scored    −.092**         .105**         .132**
P.Satisfaction     .105**          .180**         .244**
P.Opp-Likeness     .132**          .244**         .344**
Table 9: Pearson Correlation Coefficients (r) for strategy annotation counts with the outcome variables. Variables with P. prefix denote the corresponding attributes of the negotiation partner of an individual. These correlations have been computed on the annotated subset of the CaSiNo dataset. * denotes significance with p < 0.05 (2-tailed). ** denotes significance with p < 0.01 (2-tailed).

                 Joint Points   Points-Scored   Satisfaction   Opp-Likeness   P.Points-Scored   P.Satisfaction   P.Opp-Likeness
Prosocial Generic
Small-Talk       −.022          −.002           .086*          .115**         −.025             .068             .127**
Prosocial About Preferences
No-Need          −.003          −.066           .035           .023           .063              .083*            .089*
Elicit-Pref      .053           .055            .058           .015           .010              .022             .055
Proself Generic
UV-Part          −.037          .008            −.051          −.112**        −.054             −.131**          −.151**
Vouch-Fairness   −.140**        −.084*          −.159**        −.196**        −.090*            −.185**          −.180**
Proself About Preferences
Self-Need        −.003          .022            −.061          −.065          −.026             −.091*           −.086*
Other-Need       −.176**        −.045           −.101**        −.118**        −.174**           −.160**          −.113**
Table 10: Pearson Correlation Coefficients (r) between strategy annotation counts. Variables with P. prefix denote the corresponding attributes of the negotiation partner of an individual. These correlations have been computed on the annotated subset of the CaSiNo dataset. * denotes significance with p < 0.05 (2-tailed). ** denotes significance with p < 0.01 (2-tailed). Columns: P.Small-Talk, P.Self-Need, P.Other-Need, P.No-Need, P.Elicit-Pref, P.UV-Part, P.Vouch-Fair.

Table 11: Training details for the strategy prediction task. The Overall F1 scores are for the positive class. For LR-BoW, the exact number of features varies slightly based on the CV split. Hence, we report Mean (Std) across the five splits.

Model       Overall Validation F1   Trainable Parameters
Majority    0.0                     0
LR-BoW      49.6                    2646.2 (27.2)
BERT-FT     69.9                    109,590,529
Multi-task training
Freeze      62.3                    221,361,031
No Attn     66.6                    110,235,271
No Feats    77.6                    330,840,583
Full        78.1                    330,844,807
+OS         77.9                    330,844,807
+IDPT       79.6                    330,844,807
+IDPT+OS    79.6                    330,844,807
Researchers involved in the project.
https://en.wiktionary.org/wiki/Category:English_swear_words
https://www.qualtrics.com/core-xm/survey-software/
Acknowledgments

We would like to thank Shivam Lakhotia, along with colleagues at the Institute for Creative Technologies and Information Sciences Institute, for their comments and helpful discussions. We further thank Mike Lewis, He He, Weiyan Shi, and Zhou Yu for their guidance. We also thank the anonymous reviewers for their valuable time and feedback. Our research was sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-20-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
| [] |
[
"Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification",
"Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification"
] | [
"Ruixuan Tang \nUniversity of Virginia\nUniversity of Virginia\nUniversity of Virginia\n\n",
"Hanjie Chen \nUniversity of Virginia\nUniversity of Virginia\nUniversity of Virginia\n\n",
"Yangfeng Ji yangfeng@virginia.edu \nUniversity of Virginia\nUniversity of Virginia\nUniversity of Virginia\n\n"
] | [
"University of Virginia\nUniversity of Virginia\nUniversity of Virginia\n",
"University of Virginia\nUniversity of Virginia\nUniversity of Virginia\n",
"University of Virginia\nUniversity of Virginia\nUniversity of Virginia\n"
] | [] | Some recent works observed the instability of post-hoc explanations when input side perturbations are applied to the model. This raises the interest and concern in the stability of posthoc explanations. However, the remaining question is: is the instability caused by the neural network model or the post-hoc explanation method? This work explores the potential source that leads to unstable post-hoc explanations. To separate the influence from the model, we propose a simple output probability perturbation method. Compared to prior input side perturbation methods, the output probability perturbation method can circumvent the neural model's potential effect on the explanations and allow the analysis on the explanation method. We evaluate the proposed method with three widely-used post-hoc explanation methods (LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley(Strumbelj and Kononenko, 2010)). The results demonstrate that the posthoc methods are stable, barely producing discrepant explanations under output probability perturbations. The observation suggests that neural network models may be the primary source of fragile explanations. | 10.48550/arxiv.2212.05327 | [
"https://export.arxiv.org/pdf/2212.05327v1.pdf"
] | 254,564,449 | 2212.05327 | 8664038c68f12b3e6f6742379c05f115a69d31e7 |
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification
Ruixuan Tang
University of Virginia
University of Virginia
University of Virginia
Hanjie Chen
University of Virginia
University of Virginia
University of Virginia
Yangfeng Ji yangfeng@virginia.edu
University of Virginia
University of Virginia
University of Virginia
Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification
Some recent works observed the instability of post-hoc explanations when input side perturbations are applied to the model. This raises the interest and concern in the stability of posthoc explanations. However, the remaining question is: is the instability caused by the neural network model or the post-hoc explanation method? This work explores the potential source that leads to unstable post-hoc explanations. To separate the influence from the model, we propose a simple output probability perturbation method. Compared to prior input side perturbation methods, the output probability perturbation method can circumvent the neural model's potential effect on the explanations and allow the analysis on the explanation method. We evaluate the proposed method with three widely-used post-hoc explanation methods (LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley(Strumbelj and Kononenko, 2010)). The results demonstrate that the posthoc methods are stable, barely producing discrepant explanations under output probability perturbations. The observation suggests that neural network models may be the primary source of fragile explanations.
Introduction
Despite the remarkable performance of neural network models in natural language processing (NLP), their lack of interpretability has raised much concern about their reliability and trustworthiness (Zhang et al., 2021; Doshi-Velez and Kim, 2017; Hooker et al., 2019; Jiang et al., 2018). A common way to improve a model's interpretability is to generate explanations for its predictions in a post-hoc manner; we call these post-hoc explanations (Doshi-Velez and Kim, 2017; Molnar, 2018). Post-hoc explanations characterize the relationship between the input text and the model prediction by assigning feature importance scores (Du et al., 2019). In general, a feature with a higher importance score contributes more to the prediction result, so the top important features can be selected as the model explanation.
However, some recent works (Ghorbani et al., 2019; Subramanya et al., 2019; Zhang et al., 2020; Ivankay et al., 2022; Sinha et al., 2021) have observed explanation discrepancy when input-side perturbations are applied to the model. One question raised by this observation is: what makes the explanations discrepant? Explanations generated by a post-hoc method (Ribeiro et al., 2016; Lundberg and Lee, 2017a; Friedman, 2001) depend on a model's prediction probabilities. If perturbations at the input side cause the model's prediction probabilities to change, post-hoc explanations may change accordingly.
In Figure 1 (a), we show a simple example of the process that generates explanations using a post-hoc method. The explanation is generated based on the probability P. In Figure 1 (b), we show the same process with a perturbation at the input side. The explanation is generated based on the probability P̄. The output probabilities in the two examples are not the same, i.e., P ≠ P̄. Comparing Figure 1 (a) and (b), it is noticeable that the importance score of the same feature has changed. For instance, the feature "love" has different importance scores in the two examples. Since the feature importance scores are inconsistent, the explanations in the two examples are different. We call this explanation discrepancy, which is introduced further in subsection 2.2.
However, the prediction label in Figure 1 (a), ŷ, and the prediction label in Figure 1 (b), ȳ, are equal: ŷ = ȳ = POSITIVE. This indicates that input-side perturbations may not flip the model's prediction label, yet can still change the output probabilities and hence lead to explanation discrepancy. We argue that, under input-side perturbations, it is difficult to identify the source causing the explanation discrepancy. One intuitive justification is that a perturbation at the input side has to pass through both the model and the post-hoc explanation method, and both are possible factors that result in unstable explanations. For example, the model's prediction behavior may change under input-side perturbations, i.e., it may focus on different features to make predictions, hence resulting in the explanation discrepancy (Chen and Ji, 2020, 2022). Or the explanation method itself may be vulnerable to input perturbations, producing discrepant explanations. The instability may not be visible in the prediction results, but is reflected in the explanations, i.e., as explanation discrepancy.
In this paper, we propose a simple strategy to identify the potential source that causes explanation discrepancy. To circumvent the potential influence of the model on the explanations, we design an output probability perturbation method that slightly modifies the prediction probabilities, as shown in Figure 1 (c). In this work, we focus on the model-agnostic post-hoc methods that explain black-box models: LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley (Strumbelj and Kononenko, 2010). If a similar explanation discrepancy can be observed when only the output probability perturbation is applied, it would suggest that post-hoc explanation methods may be unstable, because the potential influence from the black-box model has been blocked. Otherwise, we should not blame post-hoc explanation methods as the source of vulnerability in fragile explanations (Sinha et al., 2021; Subramanya et al., 2019).
Method
Background
For a text classification task, x denotes the input text consisting of N words, x = [x^(1), ..., x^(N)], with each component x^(n) ∈ R^d representing the n-th word embedding. We define a black-box classifier as f(·); its output probability for a given x on label k is P(y = k | x) = f_k(x), where k ∈ {1, ..., C} and C is the total number of label classes.
To explain a black-box model's prediction ŷ = f(x), a class of post-hoc explanation methods approximate the model locally via additive feature attributions (Lundberg and Lee, 2017b; Ribeiro et al., 2016; Shrikumar et al., 2017). Specifically, these algorithms demonstrate the relationship between the input text and the prediction result by evaluating the contribution of each input feature to the model prediction. These methods assign a feature importance score to each input feature to represent its contribution to the prediction. We use LIME (Ribeiro et al., 2016) as an example.
Example: Post-hoc Explanation Method, LIME. It first sub-samples words from the input x to form a list of pseudo examples {z_j}_{j=1}^{L}, and then the contributions of input features are estimated by a linear approximation f_ŷ(r) ≈ g_ŷ(r'), where r ∈ {x, z_1, ..., z_L}, g_ŷ(r') = w_ŷ^T r', and r' is a simplified representation of r, e.g., a bag-of-words representation. The weights {w_ŷ^(n)} represent the importance scores of the input features {x^(n)}. Let I(x, ŷ, P) denote the explanation for the model prediction on x, where ŷ is the predicted label and P represents the output probabilities.
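As a concrete illustration of this local linear approximation, the sketch below fits a weighted linear surrogate to a black-box text classifier. It is a minimal, simplified version of the LIME idea rather than the official LIME library; the function name, the `predict_proba`-style interface, and the kernel width are our own assumptions.

```python
# Minimal sketch of a LIME-style local linear approximation (not the official LIME implementation).
# Assumes `predict_proba(texts)` returns an array of class probabilities for a list of texts.
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_weights(text, predict_proba, target_class, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()
    n = len(words)
    # Binary masks: which words are kept in each pseudo example (the simplified representation r').
    masks = rng.integers(0, 2, size=(n_samples, n))
    masks[0] = 1  # include the original, unperturbed example
    pseudo_texts = [" ".join(w for w, keep in zip(words, m) if keep) or words[0] for m in masks]
    # Target-class probability of every pseudo example, queried from the black-box model.
    probs = np.array([predict_proba([t])[0][target_class] for t in pseudo_texts])
    # Proximity kernel pi: pseudo examples closer to the original get larger weight.
    distances = 1.0 - masks.mean(axis=1)        # fraction of dropped words as a crude distance
    pi = np.exp(-(distances ** 2) / 0.25)
    # Weighted linear fit g(r') = w^T r'; the coefficients are the feature importance scores.
    surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=pi)
    return dict(zip(words, surrogate.coef_))
```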
Explanation Discrepancy
As mentioned in the previous section, explanation discrepancy may occur when input perturbations are applied to the model. Let I(x, ŷ, P) and I(x̄, ȳ, P̄) denote the explanations of the model prediction based on the original input x and the perturbed input x̄, respectively, where x̄ = x + ε and ε is the perturbation at the input. Similarly, we define I(x, ỹ, P̃) as the explanation of the prediction based on the perturbed output probability P̃ = P + ε′, where ε′ is the perturbation on the output probability. Note that when ε and ε′ are small, the model prediction stays the same, i.e., ŷ = ȳ = ỹ. The explanation discrepancy between I(x̄, ȳ, P̄) and I(x, ŷ, P) is denoted δ_input, and the discrepancy between I(x, ỹ, P̃) and I(x, ŷ, P) is denoted δ_output.
We use Figure 1 from section 1 as an example to illustrate explanation discrepancy in detail. The explanation I(x, ŷ, P) in Figure 1 (a) is "Love", "Classical", "I" and "Music", in descending order of importance scores. The explanation I(x̄, ȳ, P̄) in Figure 1 (b) is "Classical", "Music", "Love", and "I", in descending order. The explanation I(x, ỹ, P̃) in Figure 1 (c) is "Love", "Classical", "I" and "Music", in descending order. Generally, after a perturbation, explanation inconsistency is reflected in two aspects. The first aspect is whether the overall ranking of the features by their importance scores remains the same. For example, "Love" ranks first in the explanation in Figure 1 (a), while it drops to third in the explanation in Figure 1 (b); this discrepancy is denoted δ_input. The second aspect is whether the top K important features in the explanation are consistent. For example, if K = 2, the two most important words in Figure 1 (a) are "Love" and "Classical", while those in Figure 1 (b) are "Classical" and "Music"; this difference is also part of δ_input. Similarly, the same aspects of explanation discrepancy between Figure 1 (a) and Figure 1 (c) are denoted δ_output.
Output Probability Perturbation Method
As mentioned in section 1, the limitation of input perturbation methods is the difficulty in identifying the primary source that causes explanation discrepancy. Motivated by this, we propose the output probability perturbation method to circumvent the influence of black-box models.
Specifically, given an example x, we add a small perturbation to the model output probabilities, {P(y = k | x) + ε_{y=k}}_{k=1}^{C}. To guarantee that the modified values are still legitimate probabilities, we further normalize them as

$$\tilde{P}(y = k \mid x) = \frac{P(y = k \mid x) + \varepsilon_{y=k}}{\sum_{i=1}^{C} \left( P(y = i \mid x) + \varepsilon_{y=i} \right)} \tag{1}$$
The explanation in the case of output probability perturbation is computed based on the perturbed output probability P̃(y = ŷ | x). The proposed method is well suited to investigating the source of explanation discrepancy: unlike a perturbation applied at the input side, it avoids the potential effects of the model's vulnerability on the post-hoc explanations. We use LIME (Ribeiro et al., 2016) as an example to demonstrate the proposed method.
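A minimal sketch of this output-side perturbation is given below: it takes an arbitrary classifier's probability vector, adds small Gaussian noise, and renormalizes as in Equation 1. The wrapper name, the noise scale, and the clipping safeguard (which the paper does not mention) are our own illustrative choices, not the authors' released code.

```python
# Sketch of the output probability perturbation of Eq. (1): perturb class probabilities, then renormalize.
import numpy as np

def perturb_output_probs(probs, sigma=0.01, seed=0):
    """probs: array of shape (n_classes,) produced by a black-box classifier f(x)."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(loc=0.0, scale=sigma, size=probs.shape)   # Gaussian noise, N(0, sigma^2)
    shifted = np.clip(probs + eps, a_min=1e-12, a_max=None)    # added safeguard: keep values non-negative
    return shifted / shifted.sum()                             # Eq. (1): renormalize to a legitimate distribution

# Usage: the explanation method is then run on these perturbed probabilities instead of the model's,
# so any resulting explanation discrepancy cannot be attributed to the model itself.
p = np.array([0.7, 0.2, 0.1])
p_tilde = perturb_output_probs(p, sigma=0.05)
```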
Example: Output probability perturbation in the LIME algorithm. As denoted in subsection 2.1, r' is the bag-of-words representation of the original input text x. A simplified version¹ of the LIME algorithm is equivalent to finding a solution of the following linear equation:

$$\mathbf{w}_{\hat{y}}^{T} r' = \tilde{p}_{\hat{y}} \tag{2}$$

where $\tilde{p}_{\hat{y}} = [\tilde{P}(y=\hat{y} \mid x), \tilde{P}(y=\hat{y} \mid z_1), \ldots, \tilde{P}(y=\hat{y} \mid z_L)]^{T}$ are the perturbed probabilities on the label ŷ, and w_ŷ is the weight vector whose elements measure the contribution of each input word to the prediction ŷ. A typical explanation from LIME consists of the top important words according to w_ŷ. Essentially, the proposed output perturbation is similar to perturbation analysis in linear systems (Golub and Van Loan, 2013), which aims to identify the stability of such systems. Despite the simple formulation in Equation 2, a similar linear system can also be used to explain the Shapley-based explanation methods (e.g., Sample Shapley (Strumbelj and Kononenko, 2010)).
Experiment
Experiment Setup
Datasets. We adopt four text classification datasets: IMDB movie reviews dataset (Maas et al., 2011, IMDB), AG's news dataset (Zhang et al., 2015, AG's News), Stanford Sentiment Treebank dataset with binary labels (Socher et al., 2013, SST-2), and 6-class questions classification dataset TREC (Li and Roth, 2002, TREC). The summary statistics of datasets are shown in Table 1.
Models. We apply three neural network models, Convolutional Neural Network (Kim, 2014, CNN), Long Short Term Memory Network (Hochreiter and Schmidhuber, 1997, LSTM), and Bidirectional Encoder Representations from Transformers (Devlin et al., 2018, BERT).
The CNN model is inspired by information processing in the human visual system. Its core characteristics are that it can efficiently reduce the dimensionality of the input while retaining the input's important features (Kim, 2014).

The LSTM model is an advanced RNN model. Unlike a standard feedforward neural network, its architecture contains feedback connections, which help it process sequential data such as language and speech (Hochreiter and Schmidhuber, 1997).

The BERT model is a language model (LM). In NLP research, the main tasks of the BERT model are (1) sentence-pair classification and (2) single-sentence classification (Devlin et al., 2018). In this work, we focus on the second task when applying the BERT model in our experiments.

The prediction performance of the three models on the four datasets is recorded in Table 2.
Post-hoc Explanation Methods. We adopt three post-hoc explanation methods, Local Interpretable Model-Agnostic Explanations (Ribeiro et al., 2016, LIME), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley (Strumbelj and Kononenko, 2010). LIME, Kernel Shapley, and Sample Shapley are additive feature attribution methods. The additive feature method provides a feature importance score on every feature for each text input based on the model prediction. Evaluation Metrics. In the experiment, we apply two evaluation metrics, Kendall's Tau order rank correlation score, and the Top-K important words overlap score (Chen et al., 2019;Kendall, 1938;Ghorbani et al., 2019) to evaluate the discrepancy between explanations (i.e., δ input and δ output ).
As illustrated in subsection 2.2, explanation discrepancy can be evaluated in two aspects. We use Kendall's Tau order rank correlation score to quantify the change in the overall ranking of feature importance scores in explanations. For example, for Figure 1 (a) and (b), we can apply Kendall's Tau to measure how close the overall rankings of features in the two examples are: if the score is close to 1, the two explanations are similar; if the score is close to −1, the two explanations differ significantly. We use the Top-K important words overlap score to evaluate the discrepancy among the top K features in the explanations; this metric computes the overlap ratio among the top K features. In this work, we set K = 5.
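Both metrics can be computed directly from two importance-score rankings. The sketch below is an illustrative implementation with made-up scores, using SciPy's Kendall's Tau; it is not the authors' evaluation code, and the helper name and example values are hypothetical.

```python
# Sketch: Kendall's Tau rank correlation and Top-K overlap between two explanations.
from scipy.stats import kendalltau

def explanation_discrepancy(scores_a, scores_b, k=5):
    """scores_a, scores_b: dicts mapping each feature (word) to its importance score."""
    features = list(scores_a.keys())
    tau, _ = kendalltau([scores_a[f] for f in features],
                        [scores_b[f] for f in features])
    top_a = set(sorted(scores_a, key=scores_a.get, reverse=True)[:k])
    top_b = set(sorted(scores_b, key=scores_b.get, reverse=True)[:k])
    overlap = len(top_a & top_b) / k
    return tau, overlap

# Hypothetical importance scores before and after a perturbation.
before = {"love": 0.9, "classical": 0.6, "i": 0.2, "music": 0.1, "the": 0.05, "very": 0.01}
after = {"love": 0.3, "classical": 0.7, "i": 0.1, "music": 0.5, "the": 0.05, "very": 0.01}
print(explanation_discrepancy(before, after, k=5))
```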
Explanation Discrepancy Comparison Experiment
To explore the primary source causing fragile explanations, we conduct a comparison experiment that evaluates and compares the explanation discrepancies δ_input and δ_output, whose definitions are introduced in subsection 2.2. δ_input denotes the discrepancy between the explanation generated with no perturbation, I(x, ŷ, P), and the explanation generated with a perturbation at the input, I(x̄, ȳ, P̄), while δ_output denotes the discrepancy between I(x, ŷ, P) and the explanation generated with a perturbation at the output probability, I(x, ỹ, P̃).
In this experiment, for output probability perturbation, we directly add random noise to the model output probabilities. For comparison, we add noise to the word embeddings for input perturbations (Liu et al., 2020). Both the input-side perturbation and the output probability perturbation use noise sampled from a Gaussian distribution, N(0, σ²). We apply Gaussian noise because the perturbation level is easy to control by modifying the variance σ². In the experiments, we applied five different perturbation levels from "0" to "4": "0" means the slightest perturbation level (zero perturbation), while "4" represents the strongest perturbation level. The specific value of each perturbation level is shown in Table 3. Note that for each level, the input-side and output-probability perturbations differ, because we select different perturbation strengths for the input side and the output probability so as to reach a similar accuracy at each level; if the model's accuracy were not comparable at each level, it would be difficult to evaluate the results.
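To make the two perturbation sources concrete, the sketch below adds level-controlled Gaussian noise either to word embeddings (input side) or to output probabilities (output side, reusing the normalization of Equation 1). The σ values are placeholders only, since the levels of Table 3 are not reproduced in this text.

```python
# Sketch: level-controlled Gaussian perturbations on the input side (embeddings)
# versus the output side (class probabilities). Sigma values are illustrative placeholders.
import numpy as np

SIGMA_BY_LEVEL = {0: 0.0, 1: 0.01, 2: 0.05, 3: 0.1, 4: 0.5}   # hypothetical, not the paper's Table 3

def perturb_embeddings(embeddings, level, rng):
    """embeddings: (n_words, d) array of word embeddings for one input text."""
    sigma = SIGMA_BY_LEVEL[level]
    return embeddings + rng.normal(0.0, sigma, size=embeddings.shape)

def perturb_probabilities(probs, level, rng):
    """probs: (n_classes,) array of output probabilities; renormalized as in Eq. (1)."""
    sigma = SIGMA_BY_LEVEL[level]
    shifted = np.clip(probs + rng.normal(0.0, sigma, size=probs.shape), 1e-12, None)
    return shifted / shifted.sum()

rng = np.random.default_rng(0)
emb = rng.normal(size=(12, 300))                       # a toy 12-word input with 300-d embeddings
noisy_emb = perturb_embeddings(emb, level=2, rng=rng)
noisy_probs = perturb_probabilities(np.array([0.7, 0.2, 0.1]), level=2, rng=rng)
```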
Results and Discussion

Figure 2 shows the results on the IMDB dataset. Due to the page limit, full results for the other datasets are shown in Figure 4, Figure 5, and Figure 6 in Appendix A; they have similar tendencies. Kendall's Tau order rank correlation score plots are shown in Figure 2 (a), (b), and (c), and Top-K important words overlap score plots are shown in Figure 2 (d), (e), and (f). Figure 2 (a) and (d) show the results of the LIME method, Figure 2 (b) and (e) the Kernel Shapley method, and Figure 2 (c) and (f) the Sample Shapley method.

Kendall's Tau order rank correlation score evaluation results. The Kendall's Tau scores indirectly illustrate the stability of post-hoc explanation methods; furthermore, they suggest that the previously observed explanation discrepancy can be attributed to the potential influence of the black-box model. In Figure 2 (a), (b) and (c), the large gap between δ_input and δ_output is consistent across all three post-hoc explanation methods, LIME, Kernel Shapley and Sample Shapley. For output probability perturbations, the Kendall's Tau scores remain the same as the perturbation level increases from "0" to "4". This indicates that the overall ranking of feature importance scores is stable under output perturbations. Furthermore, the results suggest that, for a given input x, if the prediction result stays unchanged (ŷ = ỹ), a perturbation ε′ at the output probability alone is unlikely to influence the explanations generated by the post-hoc methods. In other words, the explanation discrepancy observed in previous studies (Ivankay et al., 2022; Sinha et al., 2021) is unlikely to be caused by the post-hoc methods. Meanwhile, for the baseline results (perturbation applied at the input), it is noticeable that the Kendall's Tau scores decrease markedly as the input perturbation intensity level increases. This indicates that the black-box model is vulnerable to input perturbations, which causes fragile explanations. Based on these observations, we claim that the black-box model is more likely to be the primary source of fragile explanations.
Top-K word importance score evaluation results. Top-K word importance score evaluation shows the same result: the black-box model is the primary source causing explanation discrepancy. In Figure 2 (d), (e) and (f), δ input against δ output display an obvious discrepancy across the three post-hoc explanation methods. For output probability perturbations, δ output shows no change in the overlap among the top K important words. This indicates that, for the top five important features in the explanation of each corresponding prediction result, output probability perturbations will not cause any difference. The results under this metric also illustrate that the black-box model is more likely to cause fragile explanations than explanation methods themselves.
Further Analysis on LIME Algorithm
According to the previous results, we have a conclusion that post-hoc explanation methods are stable.
We further analyze the stability of the explanation algorithms. We use the LIME algorithm (Ribeiro et al., 2016) as an example.
$$\mathcal{L}(f_{\hat{y}}(r), g_{\hat{y}}(r'), \pi) = \sum_{r, r' \in R} \pi \left( f_{\hat{y}}(r) - g_{\hat{y}}(r') \right)^2 \tag{3}$$

Equation 3 is the definition of the loss function in the LIME algorithm (Ribeiro et al., 2016). In the loss function, π g_ŷ(r') is the kernel calculation function of the algorithm. r' represents the pseudo example based on the original example r, and g_ŷ(r') represents the local linear explainable model on the pseudo example r'. Here, we use a general linear model to represent the explainable model, g_ŷ(r') = w_ŷ^T r', whose weight vector w_ŷ is also the feature importance score function. Equation 4 is the kernel calculation function of the LIME algorithm after expanding:

$$G = \pi\, g_{\hat{y}}(r') = \pi\, \mathbf{w}_{\hat{y}}^{T} r' \tag{4}$$
The form of the kernel calculation function can be interpreted as a general linear function, Ax = b. For a linear function, the condition number κ is used to evaluate how sensitive the output is to a small change at the input (Belsley et al., 2005). If the condition number κ, which is the largest eigenvalue of the matrix A divided by its smallest eigenvalue, is large, the solution x changes rapidly under a slight difference in b, making the solution sensitive to small errors in the input (Goodfellow et al., 2016). In Equation 4, π r' can be considered the matrix A, and the feature importance score function w_ŷ can be considered the solution x. If π r' forms a stable linear system, the feature importance score function w_ŷ is unlikely to be sensitive to a minor change at the input of the linear system, and the corresponding post-hoc explanation method is stable. Since the kernel function is a purely numerical step with no semantics involved, we conduct a simulation experiment to explore the stability of the LIME algorithm (Ribeiro et al., 2016).
Simulation Experiment and Results
In the simulation experiment, the pseudo example matrix r' has size m × l, where m is the sub-sampling size and l is the sentence length. We select m = 200, which is the actual sample size we applied in the comparison experiment. For the sentence length, we first simulate the case of a fixed sentence length, l = 20. Then, to compare the condition number distribution across different sentence lengths, we add two more cases, l = 30 and l = 40. For each length, we run 500 simulation iterations. In the LIME algorithm (Ribeiro et al., 2016), π is the distance between the original input and a sub-sample drawn from it; in the simulation experiment, we use the cosine distance to represent the value of π.

| Total iteration number | κ ∈ [5, 6) | κ ∈ [6, 7) |
| --- | --- | --- |
| 500 | 392 | 108 |

In Table 4, the result of the simulation experiment for a fixed sentence length, l = 20, shows that the majority of the condition numbers κ of the matrix π r' are lower than 7. Goldstein's work suggests that the condition number of a stable, well-conditioned matrix should be lower than 30 (Goldstein, 1993). This means that the feature importance score function w_ŷ is unlikely to be influenced by a small perturbation, which is also reflected in the real-dataset results of the comparison experiment in subsection 3.3. In Figure 3, the results for different sentence lengths show that as the sentence length increases, the condition number κ of the matrix π r' increases only slightly: the majority of condition numbers remain lower than 13 for lengths from 20 to 40. Although κ increases with sentence length, the increase is small and the majority of condition numbers stay below the threshold. The results suggest that the matrix π r' in the LIME algorithm keeps a small condition number, which makes the linear system relatively stable; in other words, the LIME algorithm (Ribeiro et al., 2016) is a relatively stable post-hoc explanation method.
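A condition-number simulation of this kind can be reproduced in a few lines. The sketch below is our own approximation of the described setup (random binary sub-sampling matrix, cosine-distance weights, numpy's condition number for rectangular matrices), so details such as exactly how π enters the matrix are assumptions rather than the authors' exact procedure.

```python
# Sketch: estimate the condition number of the weighted sub-sampling matrix pi * r'
# used in the LIME kernel, following the simulation setup described above (m=200, l in {20, 30, 40}).
import numpy as np

def simulate_condition_numbers(m=200, l=20, n_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    kappas = []
    for _ in range(n_iters):
        r_prime = rng.integers(0, 2, size=(m, l)).astype(float)   # binary bag-of-words sub-samples
        original = np.ones(l)                                      # the full, unperturbed input
        # Cosine distance between each sub-sample and the original input (assumed weighting scheme).
        norms = np.linalg.norm(r_prime, axis=1) * np.linalg.norm(original)
        cos_sim = np.where(norms > 0, r_prime @ original / np.where(norms > 0, norms, 1.0), 0.0)
        pi = np.clip(1.0 - cos_sim, 1e-6, None)                    # cosine distance, kept strictly positive
        weighted = pi[:, None] * r_prime
        kappas.append(np.linalg.cond(weighted))                    # ratio of largest to smallest singular value
    return np.array(kappas)

kappas = simulate_condition_numbers(l=20)
print(np.percentile(kappas, [50, 90, 99]))
```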
Previous Works
Post-hoc Explanation Methods Most works focus on explaining neural network models in a posthoc manner, especially generating a local explanation for each model prediction. The white-box explanation methods, such as gradient-based explanations (Hechtlinger, 2016), and attention-based explanations (Ghaeini et al., 2018), either require additional information (e.g. gradients) from the model or incur much debates regarding their faithfulness to model predictions (Jain and Wallace, 2019).
Another line of work focuses on explaining black-box models in the model-agnostic way. Li et al. (2016) proposed a perturbation-based explanation method, Leave-one-out, that attributes feature importance to model predictions by erasing input features one by one. Ribeiro et al. (2016) proposed to estimate feature contributions locally via linear approximation based on pseudo examples. Some other works proposed the variants of the Shapley value (Shapley, 1953b) to measure feature importance, such as the Sample Shapley method (Strumbelj and Kononenko, 2010), the Kernel Shapley method (Lundberg and Lee, 2017a), and the L/C-Shapley method (Chen et al., 2018).
Model robustness Recent works have shown the vulnerability of model due to adversarial attacks (Szegedy et al., 2013;Goodfellow et al., 2014;Zhao et al., 2017). Some adversarial examples are similar to original examples but can quickly flip model predictions, which causes concern on model robustness (Jia et al., 2019). In the text domain, a common way to generate adversarial examples is by heuristically manipulating the input text, such as replacing words with their synonyms (Alzantot et al., 2018;Ren et al., 2019;Jin et al., 2020), misspelling words (Li et al., 2018;Gao et al., 2018), inserting/removing words (Liang et al., 2017), or concatenating triggers .
Explanation Robustness Previous work explored explanation robustness by either perturbing the inputs (Ghorbani et al., 2019;Subramanya et al., 2019;Zhang et al., 2020;Heo et al., 2019) or manipulating the model Slack et al., 2020;Zafar et al., 2021). For example, Slack's group fooled post-hoc explanation methods by hiding the bias for black-box models based on the proposed novel scaffolding technique (Slack et al., 2020). However, all of these works cannot disentangle the sources that cause fragile explanation. Differently, the proposed method mitigates the influence of model to the explanation by perturbing model outputs.
Conclusion
In this work, our main contribution is to identify the primary source of fragile explanations, for which we propose an output probability perturbation method. With the help of the proposed method, our observations support the conclusion that the primary source of the fragile explanations reported in previous studies is the black-box model itself, which also reveals a limitation of prior input-side perturbation methods. Furthermore, in subsection 3.4, we analyze the kernel calculation inside the LIME algorithm (Ribeiro et al., 2016). Using the condition number of the kernel matrix and simulation experiments, we demonstrate that the kernel calculation matrix inside LIME has a low condition number. This result further suggests the stability of the LIME algorithm.
Figure 1: The pipeline of a simple example in which post-hoc explanation methods generate explanations with (a) no perturbation applied, (b) a perturbation applied at the input side, and (c) a perturbation applied at the output probabilities.
LIME and Kernel Shapley are two post-hoc methods adopting a similar strategy. The first step is to generate a set of pseudo examples and their corresponding labels based on the black-box model's predictions on them(Ribeiro et al., 2016;Lundberg and Lee, 2017a). The second step is to train an explainable machine learning model (eg: linear regression, LASSO) with the pseudo examples(Ribeiro et al., 2016;Lundberg and Lee, 2017a). The difference between the LIME algorithm and the Kernel Shapley algorithm is in the way to calculate the weight of pseudo examples in the explainable model(Molnar, 2018). LIME algorithm relies on the distance between the original example and the pseudo example(Ribeiro et al., 2016). Kernel Shapley algorithm relies on the Shapley value estimation (Lundberg and Lee, 2017a).Sample Shapley is a post-hoc method based on Shapley value (Shapley, 1953a), which stems from coalitional game theory. Shapley value provides an axiomatic solution to attribute the contribution of each word in a fair way. However, the exponential complexity of computing Shapley value is intractable. Sampling Shapley(Strumbelj and Kononenko, 2010) provides a solvable approximation to Shapley value via sampling.
Figure 2: Comparison experiment results on the IMDB dataset; (a) and (d) demonstrate results using the LIME method; (b) and (e) using the Kernel Shapley method; (c) and (f) using the Sample Shapley method.
Figure 3: Simulation experiment result.
Table 1: Summary statistics for the datasets, where C is the number of classes, L is the average sentence length, # counts the number of examples in the train/dev/test sets, vocab is the vocabulary size, threshold is the low-frequency threshold, and length is the mini-batch sentence length.

| Dataset | C | L | #train | #dev | #test | vocab | threshold | length |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IMDB | 2 | 268 | 20K | 5K | 25K | 29571 | 5 | 250 |
| SST-2 | 2 | 19 | 6920 | 872 | 1821 | 16190 | 0 | 50 |
| AG's News | 4 | 32 | 114K | 6K | 7.6K | 21838 | 5 | 50 |
| TREC | 6 | 10 | 5000 | 452 | 500 | 8026 | 0 | 15 |
| Dataset | CNN | LSTM | BERT |
| --- | --- | --- | --- |
| IMDB | 86.30 | 86.60 | 90.80 |
| SST-2 | 82.48 | 80.83 | 91.82 |
| AG's News | 89.90 | 88.90 | 95.10 |
| TREC | 92.41 | 90.80 | 97.00 |

Table 2: Prediction accuracy (%) of the three neural network models (CNN, LSTM and BERT) on the four datasets (IMDB, SST-2, AG's News and TREC).
Table 3: Perturbation levels applied to the input and output respectively.
Table 4: Condition number (κ) distribution when l = 20.
Without the example weight computed from a kernel function and the regularization term of explanation complexity.
A Figures of Comparison Experiment Results on SST-2, AG's News and TREC Datasets

In this section, we include the comparison experiment results for the SST-2 dataset in Figure 4, the AG's News dataset in Figure 5, and the TREC dataset in Figure 6.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang, arXiv:1804.07998Generating natural language adversarial examples. arXiv preprintMoustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. arXiv preprint arXiv:1804.07998.
Regression diagnostics: Identifying influential data and sources of collinearity. A David, Edwin Belsley, Roy E Kuh, Welsch, John Wiley & SonsDavid A Belsley, Edwin Kuh, and Roy E Welsch. 2005. Regression diagnostics: Identifying influential data and sources of collinearity. John Wiley & Sons.
Learning variational word masks to improve the interpretability of neural text classifiers. Hanjie Chen, Yangfeng Ji, 10.18653/v1/2020.emnlp-main.347Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsOnlineHanjie Chen and Yangfeng Ji. 2020. Learning varia- tional word masks to improve the interpretability of neural text classifiers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4236-4251, On- line. Association for Computational Linguistics.
Adversarial training for improving model robustness? look at both prediction and interpretation. Hanjie Chen, Yangfeng Ji, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceHanjie Chen and Yangfeng Ji. 2022. Adversarial train- ing for improving model robustness? look at both prediction and interpretation. In Proceedings of the AAAI Conference on Artificial Intelligence.
L-shapley and c-shapley: Efficient model interpretation for structured data. Jianbo Chen, Le Song, J Martin, Michael I Jordan Wainwright, arXiv:1808.02610arXiv preprintJianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. 2018. L-shapley and c-shapley: Efficient model interpretation for structured data. arXiv preprint arXiv:1808.02610.
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, arXiv:1905.09957Yingyu Liang, and Somesh Jha. 2019. Robust attribution regularization. arXiv preprintJiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, and Somesh Jha. 2019. Robust attribution regular- ization. arXiv preprint arXiv:1905.09957.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Towards a rigorous science of interpretable machine learning. Finale Doshi, - Velez, Been Kim, arXiv:1702.08608arXiv preprintFinale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Techniques for interpretable machine learning. Mengnan Du, Ninghao Liu, Xia Hu, Communications of the ACM. 631Mengnan Du, Ninghao Liu, and Xia Hu. 2019. Tech- niques for interpretable machine learning. Commu- nications of the ACM, 63(1):68-77.
Greedy function approximation: a gradient boosting machine. H Jerome, Friedman, Annals of statistics. Jerome H Friedman. 2001. Greedy function approx- imation: a gradient boosting machine. Annals of statistics, pages 1189-1232.
Black-box generation of adversarial text sequences to evade deep learning classifiers. Ji Gao, Jack Lanchantin, Mary Lou Soffa, Yanjun Qi, IEEE Security and Privacy Workshops. IEEEJi Gao, Jack Lanchantin, Mary Lou Soffa, and Yan- jun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50-56. IEEE.
Interpreting recurrent and attention-based neural models: a case study on natural language inference. Reza Ghaeini, Z Xiaoli, Prasad Fern, Tadepalli, arXiv:1808.03894arXiv preprintReza Ghaeini, Xiaoli Z Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language in- ference. arXiv preprint arXiv:1808.03894.
Interpretation of neural networks is fragile. Amirata Ghorbani, Abubakar Abid, James Zou, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681-3688.
Conditioning diagnostics: Collinearity and weak data in regression. Richard Goldstein, Richard Goldstein. 1993. Conditioning diagnostics: Collinearity and weak data in regression.
H Gene, Charles F Van Golub, Loan, Matrix computations. JHU press3Gene H Golub and Charles F Van Loan. 2013. Matrix computations, volume 3. JHU press.
Deep Learning. Ian Goodfellow, Yoshua Bengio, Aaron Courville, MIT PressIan Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www. deeplearningbook.org.
J Ian, Goodfellow, arXiv:1412.6572Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprintIan J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.
Interpretation of prediction models using the input gradient. Yotam Hechtlinger, arXiv:1611.07634arXiv preprintYotam Hechtlinger. 2016. Interpretation of prediction models using the input gradient. arXiv preprint arXiv:1611.07634.
Fooling neural network interpretations via adversarial model manipulation. Juyeon Heo, Sunghwan Joo, Taesup Moon, arXiv:1902.02041arXiv preprintJuyeon Heo, Sunghwan Joo, and Taesup Moon. 2019. Fooling neural network interpretations via adversarial model manipulation. arXiv preprint arXiv:1902.02041.
Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural computation. 98Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
A benchmark for interpretability methods in deep neural networks. Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim, Advances in neural information processing systems. 32Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. 2019. A benchmark for interpretabil- ity methods in deep neural networks. Advances in neural information processing systems, 32.
Adam Ivankay, Ivan Girardi, arXiv:2206.03178Chiara Marchiori, and Pascal Frossard. 2022. Fooling explanations in text classifiers. arXiv preprintAdam Ivankay, Ivan Girardi, Chiara Marchiori, and Pascal Frossard. 2022. Fooling explanations in text classifiers. arXiv preprint arXiv:2206.03178.
Attention is not explanation. Sarthak Jain, C Byron, Wallace, arXiv:1902.10186arXiv preprintSarthak Jain and Byron C Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186.
Certified robustness to adversarial word substitutions. Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang, arXiv:1909.00986arXiv preprintRobin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. arXiv preprint arXiv:1909.00986.
To trust or not to trust a classifier. Advances in neural information processing systems. Heinrich Jiang, Been Kim, Melody Guan, Maya Gupta, 31Heinrich Jiang, Been Kim, Melody Guan, and Maya Gupta. 2018. To trust or not to trust a classifier. Ad- vances in neural information processing systems, 31.
Is bert really robust? a strong baseline for natural language attack on text classification and entailment. Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits, Proceedings of the AAAI conference on artificial intelligence. the AAAI conference on artificial intelligence34Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong base- line for natural language attack on text classification and entailment. In Proceedings of the AAAI con- ference on artificial intelligence, volume 34, pages 8018-8025.
A new measure of rank correlation. G Maurice, Kendall, Biometrika. 301Maurice G Kendall. 1938. A new measure of rank cor- relation. Biometrika, 30(1/2):81-93.
Convolutional neural networks for sentence classification. Yoon Kim, 10.3115/v1/D14-1181Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsYoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, Ting Wang, arXiv:1812.05271Textbugger: Generating adversarial text against real-world applications. arXiv preprintJinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. Textbugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271.
Jiwei Li, Will Monroe, Dan Jurafsky, arXiv:1612.08220Understanding neural networks through representation erasure. arXiv preprintJiwei Li, Will Monroe, and Dan Jurafsky. 2016. Un- derstanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220.
Learning question classifiers. Xin Li, Dan Roth, COLING 2002: The 19th International Conference on Computational Linguistics. Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In COLING 2002: The 19th International Conference on Computational Linguistics.
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi, arXiv:1704.08006Deep text classification can be fooled. arXiv preprintBin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006.
Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao, arXiv:2004.08994Adversarial training for large neural language models. arXiv preprintXiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994.
A unified approach to interpreting model predictions. M Scott, Su-In Lundberg, Lee, Proceedings of the 31st international conference on neural information processing systems. the 31st international conference on neural information processing systemsScott M Lundberg and Su-In Lee. 2017a. A unified approach to interpreting model predictions. In Pro- ceedings of the 31st international conference on neural information processing systems, pages 4768- 4777.
A unified approach to interpreting model predictions. M Scott, Su-In Lundberg, Lee, Advances in Neural Information Processing Systems. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. GarnettCurran Associates, Inc30Scott M Lundberg and Su-In Lee. 2017b. A uni- fied approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4765-4774. Curran Associates, Inc.
Learning word vectors for sentiment analysis. Andrew Maas, Raymond E Daly, T Peter, Dan Pham, Huang, Y Andrew, Christopher Ng, Potts, Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies. the 49th annual meeting of the association for computational linguistics: Human language technologiesAndrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the as- sociation for computational linguistics: Human lan- guage technologies, pages 142-150.
A guide for making black box models explainable. Christoph Molnar, Christoph Molnar. 2018. A guide for making black box models explainable. URL: https://christophm. github. io/interpretable-ml-book.
Generating natural language adversarial examples through probability weighted word saliency. Yihe Shuhuai Ren, Kun Deng, Wanxiang He, Che, Proceedings of the 57th annual meeting of the association for computational linguistics. the 57th annual meeting of the association for computational linguisticsShuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial ex- amples through probability weighted word saliency. In Proceedings of the 57th annual meeting of the as- sociation for computational linguistics, pages 1085- 1097.
why should I trust you?": Explaining the predictions of any classifier. Sameer Marco Tulio Ribeiro, Carlos Singh, Guestrin, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningSan Francisco, CA, USAMarco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should I trust you?": Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144.
A value for n-person games. S Lloyd, Shapley, Contributions to the Theory of Games. 2Lloyd S Shapley. 1953a. A value for n-person games. Contributions to the Theory of Games, 2(28).
Quota solutions op n-person games1. Ls Shapley, Emil Artin and Marston Morse343LS Shapley. 1953b. Quota solutions op n-person games1. Edited by Emil Artin and Marston Morse, page 343.
Learning important features through propagating activation differences. Avanti Shrikumar, Peyton Greenside, Anshul Kundaje, PMLRInternational Conference on Machine Learning. Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145-3153. PMLR.
Sanchit Sinha, Hanjie Chen, arXiv:2108.04990Arshdeep Sekhon, Yangfeng Ji, and Yanjun Qi. 2021. Perturbing inputs for fragile interpretations in deep natural language processing. arXiv preprintSanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, and Yanjun Qi. 2021. Perturbing inputs for fragile interpretations in deep natural language processing. arXiv preprint arXiv:2108.04990.
Fooling lime and shap: Adversarial attacks on post hoc explanation methods. Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. the AAAI/ACM Conference on AI, Ethics, and SocietyDylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. 2020. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Confer- ence on AI, Ethics, and Society, pages 180-186.
Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, D Christopher, Manning, Y Andrew, Christopher Ng, Potts, Proceedings of the 2013 conference on empirical methods in natural language processing. the 2013 conference on empirical methods in natural language processingRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
An efficient explanation of individual classifications using game theory. Erik Strumbelj, Igor Kononenko, The Journal of Machine Learning Research. 11Erik Strumbelj and Igor Kononenko. 2010. An efficient explanation of individual classifications using game theory. The Journal of Machine Learning Research, 11:1-18.
Fooling network interpretation in image classification. Akshayvarun Subramanya, Vipin Pillai, Hamed Pirsiavash, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionAkshayvarun Subramanya, Vipin Pillai, and Hamed Pirsiavash. 2019. Fooling network interpretation in image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2020-2029.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, arXiv:1312.6199Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprintChristian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh, arXiv:1908.07125Universal adversarial triggers for attacking and analyzing nlp. arXiv preprintEric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial trig- gers for attacking and analyzing nlp. arXiv preprint arXiv:1908.07125.
Gradient-based analysis of nlp models is manipulable. Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh, arXiv:2010.05419arXiv preprintJunlin Wang, Jens Tuyls, Eric Wallace, and Sameer Singh. 2020. Gradient-based analysis of nlp models is manipulable. arXiv preprint arXiv:2010.05419.
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, arXiv:2106.04631Cédric Archambeau, Sanjiv Das, and Krishnaram Kenthapadi. 2021. On the lack of robust interpretability of neural text classifiers. arXiv preprintMuhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, and Krishnaram Kenthapadi. 2021. On the lack of robust inter- pretability of neural text classifiers. arXiv preprint arXiv:2106.04631.
Xiang Zhang, Junbo Zhao, Yann Lecun, arXiv:1509.01626Character-level convolutional networks for text classification. arXiv preprintXiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. arXiv preprint arXiv:1509.01626.
Interpretable deep learning under fire. Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang, 29th {USENIX} Security Symposium. {USENIX} Security 20Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, and Ting Wang. 2020. Interpretable deep learning under fire. In 29th {USENIX} Secu- rity Symposium ({USENIX} Security 20).
Yu Zhang, Peter Tiňo, Aleš Leonardis, and Ke Tang. 2021. A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence. Yu Zhang, Peter Tiňo, Aleš Leonardis, and Ke Tang. 2021. A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computa- tional Intelligence.
Zhengli Zhao, Dheeru Dua, Sameer Singh, arXiv:1710.11342Generating natural adversarial examples. arXiv preprintZhengli Zhao, Dheeru Dua, and Sameer Singh. 2017. Generating natural adversarial examples. arXiv preprint arXiv:1710.11342.
| [] |
[
"Beyond the limitations of any imaginable mechanism: large language models and psycholinguistics",
"Beyond the limitations of any imaginable mechanism: large language models and psycholinguistics"
] | [
"Conor Houghton ",
"Nina Kazanina ",
"¶ ",
"Priyanka Sukumaran ",
"\n†Department of Computer Science\n‡School of Psychological Sciences\nUniversity of Bristol\nUK\n",
"\n¶International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, National Research University Higher School of Economics\nUniversity of Bristol\nUK\n",
"\nHSE University\nMoscowRussia\n"
] | [
"†Department of Computer Science\n‡School of Psychological Sciences\nUniversity of Bristol\nUK",
"¶International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, National Research University Higher School of Economics\nUniversity of Bristol\nUK",
"HSE University\nMoscowRussia"
] | [] | Large language models are not detailed models of human linguistic processing. They are, however, extremely successful at their primary task: providing a model for language. For this reason and because there are no animal models for language, large language models are important in psycholinguistics: they are useful as a practical tool, as an illustrative comparative, and philosophically, as a basis for recasting the relationship between language and thought.This is a commentary on Bowers et al. (2022). | 10.48550/arxiv.2303.00077 | [
"https://export.arxiv.org/pdf/2303.00077v1.pdf"
] | 257,255,569 | 2303.00077 | 920fdefeeacff8685b099f63007ac2827c822a73 |
Beyond the limitations of any imaginable mechanism: large language models and psycholinguistics
28 Feb 2023
Conor Houghton
Nina Kazanina
¶
Priyanka Sukumaran
†Department of Computer Science
‡School of Psychological Sciences
University of Bristol
UK
¶International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, National Research University Higher School of Economics
University of Bristol
UK
HSE University
MoscowRussia
Beyond the limitations of any imaginable mechanism: large language models and psycholinguistics
28 Feb 2023
Large language models are not detailed models of human linguistic processing. They are, however, extremely successful at their primary task: providing a model for language. For this reason and because there are no animal models for language, large language models are important in psycholinguistics: they are useful as a practical tool, as an illustrative comparative, and philosophically, as a basis for recasting the relationship between language and thought.This is a commentary on Bowers et al. (2022).
Neural-network models of language are optimized to solve practical problems such as machine translation. Currently, when these large language models (LLMs) are interpreted as models of human linguistic processing, they have similar shortcomings to those that deep neural networks have as models of human vision. Two examples can illustrate this. First, LLMs do not faithfully replicate human behaviour on language tasks (Marvin and Linzen, 2018; Kuncoro et al., 2018; Linzen and Leonard, 2018; Mitchell et al., 2019). For example, an LLM trained on a word-prediction task shows similar error rates to humans overall on long-range subject-verb number agreement but errs in different circumstances: unlike humans, it makes more mistakes when sentences have relative clauses (Linzen and Leonard, 2018), indicating differences in how grammatical structure is represented. Second, the LLMs with better performance on language tasks do not necessarily have more in common with human linguistic processing or more obvious similarities to the brain. For example, Transformers learn efficiently on vast corpora and avoid human-like memory constraints but are currently more successful as language models than recurrent neural networks such as the Long-Short-Term-Memory LLMs (Devlin et al., 2018; Brown et al., 2020), which employ sequential processing, as humans do, and can be more easily compared to the brain.
Furthermore, the target article suggests that, more broadly, the brain and neural networks are unlikely to resemble each other because evolution differs in trajectory and outcome from the optimization used to train a neural network. Generally, there is an unanswered question about which aspects of learning in LLMs are to be compared to the evolution of our linguistic ability and which to language learning in infants, but in either case the comparison seems weak. LLMs are typically trained using a next-word prediction task; it is unlikely our linguistic ability evolved to optimize this, and next-word prediction can only partly describe language learning: for example, infants generalize word meanings based on shape (Landau et al., 1988) while LLMs lack any broad conceptual encounter with the world language describes.
In fact, it would be peculiar to suggest that LLMs are models of the neural dynamics that support linguistic processing in humans; we simply know too little about those dynamics. The challenge presented by language is different to that presented by vision: language lacks animal models and debate in psycholinguistics is occupied with broad issues of mechanisms and principles, whereas visual neuroscience often has more detailed concerns. We believe that LLMs have a valuable role in psycholinguistics and this does not depend on any precise mapping from machine to human. Here we describe three uses of LLMs: (1) the practical, as a tool in experimentation; (2) the comparative, as an alternate example of linguistic processing and (3) the philosophical, recasting the relationship between language and thought.
(1): An LLM models language, and this is often of practical quantitative utility in experiments. One straightforward example is the evaluation of surprisal: how well a word is predicted by what has preceded it. It has been established that reaction times (Fischler and Bloom, 1979; Kleiman, 1980), gaze duration (Rayner and Well, 1996), and EEG responses (Dambacher et al., 2006; Frank et al., 2015) are modulated by surprisal, giving an insight into prediction in neural processing. In the past, surprisal was evaluated using n-grams, but n-grams become impossible to estimate as n grows, and as such they cannot quantify long-range dependencies. LLMs are typically trained on a task akin to quantifying surprisal and are superior to n-grams in estimating word probabilities. Differences between LLM-derived estimates and the neural perception of surprisal may quantify which linguistic structures, perhaps poorly represented in the statistical evidence, the brain privileges during processing.
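To make the surprisal estimation concrete, here is a minimal sketch (not from the commentary itself) of computing per-word surprisal with an off-the-shelf autoregressive language model from the HuggingFace transformers library; the choice of GPT-2 and the example sentence are illustrative assumptions.

```python
# Minimal sketch: per-token surprisal from a pretrained autoregressive LM.
# GPT-2 is an illustrative choice; any causal LM with the same interface would do.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisals(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"][0]
    with torch.no_grad():
        logits = model(**enc).logits[0]            # (seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Surprisal of token i is -log2 p(token_i | tokens_<i); the first token has no context here.
    out = []
    for i in range(1, len(ids)):
        bits = -log_probs[i - 1, ids[i]].item() / torch.log(torch.tensor(2.0)).item()
        out.append((tokenizer.decode(ids[i]), bits))
    return out

print(surprisals("The cat sat on the mat."))
```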
(2): LLMs are also useful as a point of comparison. LLMs combine different computational strategies, mixing representations of word properties with a computational engine based on memory or attention. Despite the clear differences between LLMs and the brain, it is instructive to compare the performance of different LLMs on language tasks to our own language ability. For example, although LLMs are capable of long-range number and gender agreement (Linzen et al., 2016; Gulordava et al., 2018; Bernardy and Lappin, 2017; Sukumaran et al., 2022), they are not successful in implementing another long-range rule: Principle C (Mitchell et al., 2019), a near-universal property of languages which depends, in its most straightforward description, on hierarchical parsing. Thus, LLMs allow us to recognize those aspects of language which require special consideration while revealing others to be within easy reach of statistical learning.
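The kind of targeted comparison described above is often run as a minimal-pair test: the model is asked whether it assigns higher probability to a grammatical than to an ungrammatical continuation. The sketch below is an illustrative example of such a test, again using GPT-2 as an assumed stand-in model and an invented sentence pair.

```python
# Minimal-pair agreement test: does the LM assign higher probability to the
# grammatical verb form? Model choice and the sentence pair are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids=ids).logits, dim=-1)
    # Sum log p(token_i | tokens_<i) over the sentence, skipping the first token.
    return sum(log_probs[0, i - 1, ids[0, i]].item() for i in range(1, ids.shape[1]))

grammatical   = "The keys that the man lost are on the table."
ungrammatical = "The keys that the man lost is on the table."
print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
```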
(3): In the past, philosophical significance was granted to language as evidence of thought or personhood. Turing (1950), for example, proposes conversation as a proxy for thought, and Chomsky (1966) describes Descartes as attributing the possession of mind to other humans because the human capacity for innovation and for the creative use of language is 'beyond the limitations of any imaginable mechanism'. It is significant that machines are now capable of imitating the use of language. While machine-generated text still has attributes of awkwardness and repetition that make it recognizable on careful reading, it would seem foolhardy to predict these final quirks are unresolvable or are characteristic of the division between human and machine. Nonetheless, most of us appear to feel intuitively that LLMs enact an imitation rather than a recreation of our linguistic ability: LLMs seem empty things whose pantomime of language is not underpinned by thought, understanding or creativity. Indeed, even if an LLM were capable of imitating us perfectly, we would still distinguish between a loved one and their simulation. This is a challenge to our understanding of the relationship between language and thought: either we must claim that, despite recent progress, machine-generated language will remain unlike human language in vital respects, or we must defy our intuition and consider machines to be as capable of thought as we are, or we must codify our intuition to specify why a machine able to produce language should, nonetheless, be considered lacking in thought.
J.-P. Bernardy and S. Lappin. Using deep neural networks to learn syntactic agreement. Linguistic Issues in Language Technology, 2017.
J. S. Bowers, G. Malhotra, M. Dujmović, M. L. Montero, C. Tsvetkov, V. Biscione, G. Puebla, F. Adolfi, J. E. Hummel, R. F. Heaton, et al. Deep problems with neural network models of human vision. Behavioral and Brain Sciences, pages 1-74, 2022. doi.org/10.1017/S0140525X22002813.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.
N. Chomsky. Cartesian linguistics: A chapter in the history of rationalist thought. Cambridge University Press, 1966.
M. Dambacher, R. Kliegl, M. Hofmann, and A. M. Jacobs. Frequency and predictability effects on event-related potentials during reading. Brain Research, 1084(1):89-103, 2006.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805, 2018.
I. Fischler and P. A. Bloom. Automatic and attentional processes in the effects of sentence contexts on word recognition. Journal of Verbal Learning and Verbal Behavior, 18(1):1-20, 1979.
S. L. Frank, L. J. Otten, G. Galli, and G. Vigliocco. The ERP response to the amount of information conveyed by words in sentences. Brain and Language, 140:1-11, 2015.
K. Gulordava, P. Bojanowski, E. Grave, T. Linzen, and M. Baroni. Colorless green recurrent networks dream hierarchically. arXiv:1803.11138, 2018.
G. M. Kleiman. Sentence frame contexts and lexical decisions: Sentence-acceptability and word-relatedness effects. Memory & Cognition, 8(4):336-344, 1980.
A. Kuncoro, C. Dyer, J. Hale, and P. Blunsom. The perils of natural behaviour tests for unnatural models: the case of number agreement. Poster presented at Learning Language in Humans and in Machines, Paris, France, July 2018.
B. Landau, L. B. Smith, and S. S. Jones. The importance of shape in early lexical learning. Cognitive Development, 3(3):299-321, 1988.
T. Linzen and B. Leonard. Distinct patterns of syntactic agreement errors in recurrent networks and humans. arXiv:1807.06882, 2018.
T. Linzen, E. Dupoux, and Y. Goldberg. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535, 2016.
R. Marvin and T. Linzen. Targeted syntactic evaluation of language models. arXiv:1808.09031, 2018.
J. Mitchell, N. Kazanina, C. Houghton, and J. Bowers. Do LSTMs know about Principle C? In 2019 Conference on Cognitive Computational Neuroscience, 2019. doi.org/10.32470/CCN.2019.1241-0.
K. Rayner and A. D. Well. Effects of contextual constraint on eye movements in reading: A further examination. Psychonomic Bulletin & Review, 3(4):504-509, 1996.
P. Sukumaran, C. Houghton, and N. Kazanina. Do LSTMs see gender? Probing the ability of LSTMs to learn abstract syntactic rules. arXiv:2211.00153, 2022.
A. M. Turing. Computing machinery and intelligence. Mind, 49:433-460, 1950.
| [] |
[
"Decoding Demographic un-fairness from Indian Names",
"Decoding Demographic un-fairness from Indian Names"
] | [
"Medidoddi Vahini vahinimirididdi@gmail.com \nIndian Institute of Technology\nKharagpurWest BengalIndia\n",
"Jalend Bantupalli jalend.bantupalli@gmail.com \nIndian Institute of Technology\nKharagpurWest BengalIndia\n",
"Souvic Chakraborty \nIndian Institute of Technology\nKharagpurWest BengalIndia\n",
"Animesh Mukherjee animeshm@gmail.com \nIndian Institute of Technology\nKharagpurWest BengalIndia\n"
] | [
"Indian Institute of Technology\nKharagpurWest BengalIndia",
"Indian Institute of Technology\nKharagpurWest BengalIndia",
"Indian Institute of Technology\nKharagpurWest BengalIndia",
"Indian Institute of Technology\nKharagpurWest BengalIndia"
] | [] | Demographic classification is essential in fairness assessment in recommender systems or in measuring unintended bias in online networks and voting systems. Important fields like education and politics, which often lay a foundation for the future of equality in society, need scrutiny to design policies that can better foster equality in resource distribution constrained by the unbalanced demographic distribution of people in the country. We collect three publicly available datasets to train state-of-the-art classifiers in the domain of gender and caste classification. We train the models in the Indian context, where the same name can have different styling conventions (Jolly Abraham/Kumar Abhishikta in one state may be written as Abraham Jolly/Abishikta Kumar in the other). Finally, we also perform cross-testing (training and testing on different datasets) to understand the efficacy of the above models. We also perform an error analysis of the prediction models. Finally, we attempt to assess the bias in the existing Indian system as case studies and find some intriguing patterns manifesting in the complex demographic layout of the sub-continent across the dimensions of gender and caste. | 10.48550/arxiv.2209.03089 | [
"https://export.arxiv.org/pdf/2209.03089v1.pdf"
] | 252,110,852 | 2209.03089 | 25fde520f9767a88dd03103cd08643419edb8c20 |
Decoding Demographic un-fairness from Indian Names
Medidoddi Vahini vahinimirididdi@gmail.com
Indian Institute of Technology
KharagpurWest BengalIndia
Jalend Bantupalli jalend.bantupalli@gmail.com
Indian Institute of Technology
KharagpurWest BengalIndia
Souvic Chakraborty
Indian Institute of Technology
KharagpurWest BengalIndia
Animesh Mukherjee animeshm@gmail.com
Indian Institute of Technology
KharagpurWest BengalIndia
Decoding Demographic un-fairness from Indian Names
Caste · Gender · Fairness · Demographic bias · India
Demographic classification is essential in fairness assessment in recommender systems or in measuring unintended bias in online networks and voting systems. Important fields like education and politics, which often lay a foundation for the future of equality in society, need scrutiny to design policies that can better foster equality in resource distribution constrained by the unbalanced demographic distribution of people in the country. We collect three publicly available datasets to train state-of-the-art classifiers in the domain of gender and caste classification. We train the models in the Indian context, where the same name can have different styling conventions (Jolly Abraham/Kumar Abhishikta in one state may be written as Abraham Jolly/Abishikta Kumar in the other). Finally, we also perform cross-testing (training and testing on different datasets) to understand the efficacy of the above models. We also perform an error analysis of the prediction models. Finally, we attempt to assess the bias in the existing Indian system as case studies and find some intriguing patterns manifesting in the complex demographic layout of the sub-continent across the dimensions of gender and caste.
Introduction
The name of a person can convey various demographic features of the individual. This demographic information plays a crucial role in multiple studies related to racial inequality, recommendation systems, biomedical studies, hate-speech target identification, group sentiment analysis, etc. [14,1,4,15,5]. Consequently, much work has been done on demographic classification, and a variety of online web APIs and tools capable of predicting demographics from user names [2,3,11] exist. Most of this research work, however, is focused on US demographics [14,12], and many of the works [2,3,11,14] build classifiers that require a proper division of the first name, middle name, and last name to work. So, we introduce caste- and gender-specific datasets on Indian demographics; India hosts one-seventh of the world's population. Indian names vary significantly over states when compared to other countries due to high religious, ethno-demographic and linguistic variance 1 . Also, Indian names do not always fall under the division of first/middle/last name, primarily because the name of a person may contain the name of their ancestors (e.g., Avul Pakir Jainulabdeen Abdul Kalam, former President of India; here the first name of the person is Abdul). Indian names may also contain the father's name within a woman's name. This can make gender or race detection difficult under faulty name segmentation.
So, to achieve a realistic outcome for name-to-demography detection, we train our classifiers as an end-to-end sequence classification task, overcoming the need for a segmentation model. Indian society is divided by gender and caste, unlike US demographics, which are divided by race. Thus we focus on caste and gender prediction in this study. To summarize, the objective of this study is to predict the gender and the caste from any complete Indian name, overcoming the need to build a segmentation model. We list our contributions in this paper below.
1. Toward fulfilling the classification objective, we build large datasets by acquiring already-public data from India-wide examination records and parsing electoral rolls, containing over 7.63 million unique names.
2. We demonstrate the efficacy of our model through several case studies and make interesting observations about caste- and gender-based discrimination in India, both online and on the ground.
3. We show that there has been an upward trend of participation in competitive exams among women and backward classes over the years. We perform a multi-dimensional study to understand the nuance of caste- and gender-based discrimination present in Indian society to help policymakers make data-driven choices.
4. We perform state-wise chronological analysis to understand the efficacy of discrimination-limiting practices/laws implemented in the Indian states.
5. We also analyze the Indian social media platform 'Koo' to understand the degree of representation weaker sections of Indian society have on the web and its improvement over time.
We have opensourced our codebase to encourage further research 2.
Related work
Several publicly available web APIs predict gender and other demographics from names [2,3,11]; we have used these APIs as baselines.
Ethnicity classification and demographic bias: Classifying the ethnic category, one of the most telling demographic properties of a user, provides an essential data add-on for social science research. Other important applications include biomedical research, population demographic studies, and marketing toward a specific group of individuals [1], [17]. Despite numerous applications, ethnic information about users is often not directly available.
To bridge this gap, Sood et al. (2018) [14] made use of registered voters in Florida to infer race and ethnicity from names, obtaining an F1-score of 0.83 using LSTMs. Ambekar et al. [1] presented a model that classifies names into 13 cultural/ethnic groups with data extracted from Wikipedia. Giles et al. [17] proposed a name-ethnicity classifier that identified ethnicity from personal names in Wikipedia with 85% accuracy. The present work: Studies on computational bias in Indian datasets are rare due to the unavailability of good published datasets [6]. In this work, we specifically focus on India and attempt to quantify bias in diverse Indian datasets across the two major dimensions along which Indian society is divided: caste and gender. To this end, we collected multi-year data from (i) electoral records of different Indian states, (ii) data corresponding to the India-wide 10th and 12th standard examinations, (iii) data corresponding to the top Indian engineering and medical entrance examinations, and (iv) data from one of the fastest-growing Indian social networks. We use pre-trained transformer models to obtain better gender and caste classification performance and perform various case studies using the predictions from the best models to gain insights into the underlying demographic biases based on gender and caste in India.
Datasets
We use datasets for two purposes: training models and conducting case studies. We collected three massive datasets 3 to gather training data on diverse Indian names from the Central Board of Secondary Education (CBSE), the All India Engineering Entrance Examination (AIEEE), and the Electoral Rolls (ER).
To conduct case studies on the social media data, we resorted to the Koo social network 4 . For educational data, in addition to the multi-year CBSE (standard X and XII) and AIEEE data mentioned earlier, we also included the All India Pre Medical Test (AIPMT) data.
CBSE dataset: Data for training models - The Central Board of Secondary Education (CBSE) keeps a record of all students' grades 5 . We scraped a sample of about 100K records from their website for the 2014 and 2015 academic years. CBSE data includes information such as the student's name, father's name, mother's name, and grades. It comprises information from students in the 12th grade during the previous years. In this dataset, the gender labels for students are not available. However, the names of the father and the mother of every student are present. This gives us an easy way to get names and the corresponding gender labels. Data for case studies - We collected CBSE grade 10 student data from 2004 to 2010 and CBSE grade 12 data from 2004 to 2012 to conduct the case studies. The proportion of unique names in the CBSE grade 10 and grade 12 datasets is 70.09% and 73.12%, respectively.
AIEEE dataset: Data for training models - The All India Engineering Entrance Examination (AIEEE) is a national examination for admission to engineering colleges. The AIEEE data records 6 used for training correspond to the years 2009, 2010, and 2011. They include the students' names, state, caste categories (general/reserved, i.e., OBC/SC/ST) 7 , and the fathers' and mothers' names. Data for case studies - The marksheets of students for AIEEE exams spanning the years 2004 to 2011 are randomly sampled and gathered for the case studies.
ER dataset: Electoral roll data is gathered from the electoral roll websites of each state government. We collected only English-language data from these rolls. We show the states considered for gender classification in Table 4 (Appendix).
AIPMT dataset: The All India Pre-Medical Test (AIPMT) is a test for admission to medical schools in India. The AIPMT data obtained spans the years 2004 to 2011. This dataset is solely used to conduct case studies and provides information on 435,288 students with 327,665 (75.27%) unique names.
Social media dataset: Apart from educational data for our case studies, we also gathered data from Koo 8 , which is a rapidly growing social network in India. For our study we have used the data of all Koo users that was recently released by [13]. We applied our models to this dataset and analyzed the degree of representation based on caste and gender, as shown in Section 7.
Methodology
We can determine the gender/caste from a user's name using either the first or full name. In India, extracting the first name from the full name is dependent on the state, religion and local culture; for example, the first name appears as the final word in the name in certain states for certain religions. Hence we used a person's full name as input for both gender and caste classification tasks.
Classification models
Baselines and models: We used the top APIs available as baselines: Gender API[2], Genderize API[11], and Forebears API[7]. For non-DL models, we use logistic regression and SVM. We use CharCNN and CharLSTM as neural models trained from scratch. We used BERT, mBERT, IndicBERT, and MuRIL as pretrained neural models. Details are added in Appendix A.2.
Experimental setup
Gender and Caste labels: Only binary categories (male and female) are used for the gender classification task. As for caste, the categories that one finds are General (upper-caste people who did not face historical discrimination and benefited from the caste system), Scheduled Caste (SC: who were historically discriminated against), Scheduled Tribe (ST: who were outcastes and faced the maximum discrimination), and Other Backward Castes (OBC). For the purpose of this study, we divide castes into two broad groups: General and Reserved (SC/ST/OBC, for whom the government guarantees reservation to ensure a level playing field). Repetition of names: Many names in our datasets repeat. Thus it is possible that test points (chosen randomly) can overlap with the training points. In order to avoid this, we run our experiments on unique names only. The label for each unique name is the majority label over all individual instances of that name. For our experiments we use a train-test split of 70:30. More details related to the dataset division are included in Appendix A.
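The deduplication step described above can be sketched in a few lines of pandas; the column names (name, gender) and the toy records are illustrative assumptions about how the data might be stored.

```python
# Collapse repeated names to one instance with a majority-vote label, then
# make a 70:30 train/test split over unique names. Column names are assumed.
import pandas as pd

df = pd.DataFrame({
    "name":   ["Asha Kumari", "Asha Kumari", "Ravi Sharma", "Kiran Patel"],
    "gender": ["F", "F", "M", "M"],
})

# Majority label per unique name (ties broken arbitrarily by mode()[0]).
unique = (df.groupby("name")["gender"]
            .agg(lambda s: s.mode()[0])
            .reset_index())

train = unique.sample(frac=0.7, random_state=0)
test = unique.drop(train.index)
```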
Results
Main results: The main results are noted in Table 1. We observe that simple ML models like LR and SVM do not perform well. Character-based models show a clear improvement over LR and SVM for gender detection, showing the benefit of modeling names as character sequences for this task. Transformer-based models perform best for both gender and caste classification, with no clear winner among them. Overall, MuRIL does well in gender classification and mBERT in caste classification.
Baseline APIs: We have used only 500 unseen instances for each of these baselines due to the per-day API request limit. For this set of randomly chosen 500 data points, we observe that the transformer-based models (MuRIL and mBERT) by far outperform all three baselines (see Table 2). Among the baselines, Onograph performs the best.
Cross-dataset evaluation: From Table 3 we see that models trained on the CBSE and AIEEE datasets (which have similar pan-India demographics) perform well on each other's test sets. Further, the models trained on the ER dataset perform reasonably well when tested on the CBSE and AIEEE datasets, but the reverse setups perform poorly due to the lower representation of north-eastern states in the CBSE/AIEEE data.
Decoding unfairness across gender and caste lines
We use social media datasets, longitudinal educational records, and electoral roll datasets to identify and quantify the inter-sectional bias caused by caste and gender prejudice. We also display the results over an 8-year period to better understand how government-sponsored social programs and globalization are influencing Indian society and reducing unfairness in resource distribution. For all these studies, we have used the MuRIL-based model, which is also shown to be one of the top-performing models. To understand the state and evolution of discrimination in the current Indian system, we draft the following set of research questions (RQ):
- RQ1: Is the representation of females (no reservation) and backward castes (reserved) in public education increasing over time?
- RQ2: Is the representation of females (no reservation) and backward castes (reserved) in competitive engineering/medical entrance exams increasing over time?
- RQ3: Are females and backward caste people less represented in the Indian social networks?
- RQ4: How vocal are the females and backward caste voices (#followers*#posts) in Indian social media?
7.1 Impact of social bias on education
RQ1 & RQ2: Is the representation of females (no reservation) and backward castes (reserved) in public education and competitive exams increasing over time?
The Indian Government has the promise of equality embedded in its emblem, and the reservation system was introduced as a mechanism to ensure the continuous integration of backward caste people into the mainstream social structure. This reservation system ensures equality of representation in government institutes of higher education and workplaces by reserving some seats for backward class people who have faced historic injustice.
However, a similar system does not exist for women. Hence, we try to understand the system's effect on education for both gender and caste divisions.
In Figure 1, we plot the ratio of women and backward castes in the sampled dataset for each year and each exam. We observe that for AIEEE, women's representation has steadily increased from 2005 to 2010. Also, the engineering exam AIEEE saw much lower participation from women when compared to the medical entrance exam AIPMT or the CBSE 10th and 12th grade board exams. In the medical examination AIPMT, women's representation has increased, reaching the population average from 2007 onward. In the CBSE 10th and 12th grade, we see a stable representation of women over time.
However, the representation of backward castes in the medical entrance exam and both board exams has remained extremely low throughout. AIEEE, the engineering entrance exam, stands out here with much higher representation from backward castes, and that participation is also slightly increasing with time.
In Figure 1 we plot the median percentile attained by women and backward castes for each year and each exam. While the average percentile for the whole dataset will be the 50th percentile, this metric, when applied to a specific community, tells us the meritocratic position of that community, i.e., whether that community is doing better or worse relative to other communities, and how that position is changing over time for specific exams. Ranks are available only for the AIEEE data, so we analyze only this data. We see that women have achieved equality in percentile and are doing better than men in later years. However, the same is not valid for backward caste students. The above percentile changes show the need for exam-specific policy modifications. For women, the inequality is mostly in participation, and women need more encouragement to participate in competitive exams, whereas backward class students need better support to score higher in the competitive exams.
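A small sketch of how the median-percentile metric described above could be computed from exam records; the dataframe columns (year, rank) and the group indicator column are illustrative assumptions.

```python
# Median percentile of a group per year: convert ranks to percentiles within each
# exam year, then take the median over the group of interest. Column names are assumed.
import pandas as pd

def median_percentile(df, group_mask_col):
    df = df.copy()
    # Percentile of each candidate within their year (better rank -> higher percentile).
    df["percentile"] = df.groupby("year")["rank"].rank(pct=True, ascending=False) * 100
    return (df[df[group_mask_col]]
            .groupby("year")["percentile"]
            .median())

# Example use (hypothetical dataframe): median_percentile(aieee_df, "is_woman")
```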
7.2 Impact of social bias on online media and web representation
RQ3: Are women and backward caste people less represented in the Indian social networks?
We analyze the Indian social network 'Koo' 9 to investigate the extent of representation of women and lower caste people in this network. We use the massive dataset with 4 million usernames and metadata from Koo provided by [13]. Using our classifier, we predicted the caste and the gender of these names. In Figure 1, we plot the ratio of women and backward castes in the network for each month across Koo's existence. We observe that upper caste people of India mostly occupied Koo during its inception stage; however, the situation quickly changed over time, with a slightly higher representation from lower caste people. On the other hand, women's share continuously decreased in this Indian social network.
RQ4: How vocal are the women and backward caste voices (#followers*#tweets) on Indian social media?
We quantify the overall 'voice' of a group by the sum of the #followers*#tweets for each person in that group. As we do not have the number of followers data for each month, we only compute the aggregate statistics at the end of the last month in the dataset. The ratio of male voice to the female voice in the network is 3.59. Similarly the ratio of the general to the backward class voice is 10.47. These results demonstrate a striking inequality. Further we observe that a random message posted on the platform is 37.58 times more likely to be from a forward caste male than a backward-caste female.
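A minimal sketch of the 'voice' computation described above, assuming a dataframe with followers, posts, and the predicted gender/caste columns (all column and variable names here are assumptions, not the paper's code).

```python
# "Voice" of a group = sum over its members of (#followers * #posts).
# Column names (followers, posts, gender, caste) are illustrative assumptions.
import pandas as pd

def voice_ratio(df, col, group_a, group_b):
    voice = df["followers"] * df["posts"]
    return voice[df[col] == group_a].sum() / voice[df[col] == group_b].sum()

# e.g. voice_ratio(koo_df, "gender", "M", "F") for the male-to-female voice ratio,
# and voice_ratio(koo_df, "caste", "general", "reserved") for the caste ratio.
```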
Conclusion and future work
The paper introduced various large-scale datasets of Indian names and extensively explored the possibility of gender and caste detection from these names. We showed that the state-of-the-art APIs do not perform well on the task of gender detection from Indian names; in contrast, the recent transformer-based models performed extremely well on this task. Further, to the best of our knowledge, this is the first large-scale caste classification task undertaken to understand the existing demographic disparities across India. Through a series of rigorous case studies we have shown the gender- and caste-based biases that exist in basic and higher education as well as in representation on social media. We have also opensourced our codebase for further research and contribution. In the future, we would like to consider more caste varieties and data from all states for a nuanced evaluation.
Genderize API [7]: It is a simple API that predicts a person's gender based on their name. The request will generate a response with the following keys: name, gender, probability, and count. The probability denotes the certainty of the gender assigned. The count indicates the number of data rows reviewed to calculate the response.
A.3 Model description
Logistic regression: We concatenate the different parts of the name and compute character n-grams. Next, we obtain TF-IDF scores from the character n-grams and pass them as features to the logistic regression model.
SVM: The objective of the support vector machine algorithm is to identify a hyperplane in N-dimensional space (N = the number of features) that cleanly separates the data points. We accomplish classification by locating the hyperplane that best distinguishes the two classes. Among the many hyperplanes that could split the two groups of data points, our goal is to find the one with the greatest margin, i.e., the greatest distance between data points of both classes.
Char CNN: Character-level CNN (char-CNN) is a well-known text classification algorithm. Each character is encoded with a fixed-length trainable embedding. A 1-D CNN is applied to the matrix created by concatenating the above vectors. In our model, we utilize 256 convolution filters in a single hidden layer of 1D convolution with a kernel size of 7.
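A minimal scikit-learn sketch of the character n-gram TF-IDF plus logistic regression pipeline described above, using the n-gram range stated in the hyperparameter list; the toy names and labels are placeholders for the real training data.

```python
# Character n-gram TF-IDF features (n = 1..6) fed to logistic regression.
# The toy names/labels below are placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

names  = ["Asha Kumari", "Ravi Sharma", "Kiran Patel", "Sunita Devi"]
labels = ["F", "M", "M", "F"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 6)),
    LogisticRegression(max_iter=1000),
)
clf.fit(names, labels)
print(clf.predict(["Anita Sharma"]))
```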
Char LSTM: A name is a sequence of characters. Like char-CNN, each character of the input name is transformed into trainable embedding vectors and provided as input. Our model employs a single LSTM layer with 64 features and a 20% dropout layer.
Transformer models:
- We choose BERT for demographic categorization, using full names as inputs, because it has proven to be highly efficient in English sequence modeling.
- mBERT is trained using a masked language modeling (MLM) objective on the top 104 languages with the largest Wikipedias.
- IndicBERT is a multilingual ALBERT model that has only been trained on 12 major Indian languages 10 . IndicBERT has far fewer parameters than other multilingual models.
- MuRIL is pre-trained on 17 Indian languages and their transliterated counterparts. It employs a different tokenizer from the BERT model. This model is an appropriate candidate for categorization based on Indian names because it is pre-trained on Indian languages.
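A minimal PyTorch sketch of the character-level LSTM classifier described above (single LSTM layer with 64 units and 20% dropout); the embedding size, vocabulary handling, and dummy batch are illustrative assumptions.

```python
# Character-level LSTM name classifier: trainable character embeddings ->
# single LSTM layer (64 units) -> dropout (0.2) -> linear head.
# Embedding dim, vocab size, and the dummy batch are illustrative assumptions.
import torch
import torch.nn as nn

class CharLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden=64, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(0.2)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, char_ids):               # (batch, max_len) character indices
        x = self.emb(char_ids)
        _, (h, _) = self.lstm(x)               # h: (1, batch, hidden)
        return self.out(self.drop(h[-1]))      # (batch, num_classes)

model = CharLSTMClassifier(vocab_size=100)
dummy = torch.randint(1, 100, (8, 30))         # 8 names, 30 characters each
print(model(dummy).shape)                       # torch.Size([8, 2])
```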
Hyperparameters:
LR: learning rate = 0.003, n-gram range = (1-6)
SVM: kernel = rbf, n-gram range = (1-6), degree = 3, gamma = scale
Char CNN: learning rate = 0.001, hidden layers = 1, filters = 256, kernel size = 7, optimizer = adam
Char LSTM: learning rate = 0.001, dropout = 0.2, hidden layers = 1, features = 64, optimizer = adam
Transformer models: models = [bert-base-uncased, google/muril-base-cased, ai4bharat/indic-bert, bert-base-multilingual-uncased], epochs = 3, learning rate = 0.00005
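A minimal HuggingFace sketch of fine-tuning one of the listed transformer checkpoints (google/muril-base-cased) as a sequence classifier over full names, using the epochs and learning rate listed above; the data handling is heavily simplified and the toy names are placeholders.

```python
# Fine-tuning MuRIL (google/muril-base-cased) as a binary name classifier with
# the listed hyperparameters (3 epochs, learning rate 5e-5). Data is a placeholder.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("google/muril-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/muril-base-cased", num_labels=2)

names  = ["Asha Kumari", "Ravi Sharma", "Kiran Patel", "Sunita Devi"]
labels = torch.tensor([0, 1, 1, 0])

enc = tok(names, padding=True, truncation=True, max_length=32, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2, shuffle=True)

optim = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optim.step()
        optim.zero_grad()
```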
A.4 Results
More detailed results are given in Tables 5 and 6. Handling of corner cases: As a name can be common across both genders or castes, we use majority voting in order to label a name with a binary label for both the gender and caste classification tasks. In case of a tie, an arbitrarily chosen label was assigned.
A.5 Error Analysis - Baseline APIs vs Our Models
Table 7 lists some of the best and worst test cases for the best performing baselines and the best performing transformer-based models. Both these types of models perform best when the first name (first word) is a good representative of the gender (e.g., Karishma Chettri). Baselines usually fail in three cases: the presence of a parental name or surname (e.g., Avunuri Aruna), longer names where gender is represented by multiple words (e.g., Kollipara Kodahda Rama Murthy), and core Indian names (e.g., Laishram Priyabati, Gongkulung Kamei). The main reason for the better performance of transformer models might be that they are trained on complete names and larger datasets. As a result, they handle the complexity of Indian names. However, both these types of models tend to fail in the presence of unusual and highly complicated names (e.g., Raj Blal Rawat, Pullammagari Chinna Maddileti).
ARQ1: Which states in India have the highest representation of females and backward castes in higher education compared to their population? The AIEEE dataset has the state information for each data point. We also collect the state-wise population record from Census 2011 11 . We compute the population-normalized fraction of women and backward caste people writing the AIEEE 2011 exam. From the plotted results in Figure 2, we observe that the top states with a population-normalized higher representation of women writing the AIEEE exam are Jammu & Kashmir, Himachal Pradesh, Punjab, West Bengal, and Maharashtra. Similarly, the states with a population-normalized higher representation of backward castes writing the AIEEE exam are West Bengal, Maharashtra, Punjab, Uttarakhand, and Jammu & Kashmir. We believe that the education policies of these states could act as suitable guidance to improve the condition of the other Indian states.
ARQ2: Which states in India have been successful in achieving a significant decrease in bias toward females and backward castes over time? Which states are lacking in this aspect?
One way to measure the reduction (or increase) in bias would be to check for the increase (or decrease) in the population-normalized percentage of women and backward castes over time. To this end, we obtained the rate of change of population-normalized women and backward class candidates taking the AIEEE exam. For each state, the rate of change is measured as the slope of the best-fit line (linear regression) of the year versus population-normalized percentage scatter plot. The year range considered was 2004 to 2011.
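A small sketch of the rate-of-change computation described above: for each state, fit a least-squares line to (year, population-normalized percentage) points and keep the slope. The dataframe and column names are assumptions.

```python
# Rate of change per state = slope of the least-squares line fit to
# (year, population-normalized percentage) points. Column names are assumed.
import numpy as np
import pandas as pd

def rate_of_change(df):
    slopes = {}
    for state, grp in df.groupby("state"):
        slope, _intercept = np.polyfit(grp["year"], grp["norm_pct"], deg=1)
        slopes[state] = slope
    return pd.Series(slopes).sort_values(ascending=False)

# e.g. rate_of_change(women_aieee_df) over the years 2004-2011 (hypothetical input).
```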
From Figure 3, we observe that the most successful states in reducing gender inequality are Himachal Pradesh, Andhra Pradesh (Seemandhra and Telangana), Haryana, and Maharashtra. With respect to reducing caste inequality, we find that West Bengal, Punjab, Uttarakhand, Maharashtra, and Karnataka are the most successful.
In Table 9 we show the % breakup of the cross-sectional categories in the Koo dataset. We observe that the largest representation is from general-category males while the smallest is from reserved-category females. In the latest time point (see Table 11) we observe higher female representation than in the oldest time point (see Table 10). The % of females (both general and reserved) in the top 1% of users sorted by followers is relatively larger than in the bottom 1% (see Tables 12 and 13). The opposite holds for males (both general and reserved). We believe that a possible reason could be that women have closed coteries of followership.
A.8 Distribution of Caste and Gender in Koo
A.9 Ethical implications
Like any other classification task, this technology can be misused in the hands of malicious actors. Instead of reducing bias, the same technology can be used to enforce discrimination. Hence, we request researchers to exercise caution while using this technology, as some demography classification APIs are already publicly available. Further, to keep personally identifiable data private, we opensource the codebase to collect the data points instead of sharing the datasets, a policy ubiquitous among social science researchers.
Fig. 1. Distribution of women and backward castes is shown in the left-most figure. The middle figure displays the median percentile for the same. The right-most diagram shows the temporal evolution of women and backward caste people on Koo.
A.6 Case studies - Values of median percentile
Fig. 3. Rate of change of the population-normalized percentage of women and backward caste students across Indian states appearing in AIEEE exams.
Table 1. Performance of the models (accuracy) for gender and caste classification.
6 Maintained on the same website as CBSE.
7 https://en.wikipedia.org/wiki/Scheduled_Castes_and_Scheduled_Tribes
8 https://www.kooapp.com/feed
Table 2. Comparison with baseline APIs. All results are on a held-out set of 500 data points.
Model         | ER    | AIEEE | CBSE
APIs
Gender [2]    | 53.2  | 64.0  | 81.0
Onograph [11] | 71.46 | 82.00 | 92.8
Genderize [7] | 49.79 | 63.86 | 82.38
Models
MuRIL-CBSE    | 74.85 | 89.00 | 97.00
MuRIL-ER      | 93.81 | 94.20 | 97.40
MuRIL-AIEEE   | 77.45 | 95.40 | 97.60
BERT-CBSE     | 77.05 | 86.20 | 97.20
BERT-ER       | 93.81 | 94.20 | 97.00
BERT-AIEEE    | 76.25 | 97.00 | 97.60
Table 3. Performance in a cross-dataset setting.
Model | Train | Test  | Accuracy | F1-Score
MuRIL | ER    | CBSE  | 97.31    | 97.28
MuRIL | ER    | AIEEE | 95.40    | 95.31
MuRIL | CBSE  | ER    | 78.03    | 77.93
MuRIL | CBSE  | AIEEE | 90.82    | 90.72
MuRIL | AIEEE | ER    | 79.47    | 79.35
MuRIL | AIEEE | CBSE  | 97.03    | 97.00
BERT  | ER    | CBSE  | 97.31    | 97.28
BERT  | ER    | AIEEE | 94.94    | 94.84
BERT  | CBSE  | ER    | 78.27    | 78.15
BERT  | CBSE  | AIEEE | 89.64    | 89.50
BERT  | AIEEE | ER    | 79.66    | 79.53
BERT  | AIEEE | CBSE  | 96.99    | 96.96
Table 4. Information on the datasets used to train models and conduct case studies.
Gender classification
Data (full)   | Female   | Male
CBSE          | 194423   | 194413
ER            | 10405236 | 11632598
AIEEE         | 358522   | 358522
CBSE-breakup  | Female   | Male
2014          | 25779    | 31573
2015          | 51744    | 63434
ER-breakup    | Female   | Male
Daman         | 53391    | 53605
Manipur       | 580415   | 589948
Meghalaya     | 748820   | 737951
Nagaland      | 253274   | 295039
Arunachal     | 292158   | 292544
Delhi         | 966324   | 1430743
Sikkim        | 76145    | 88209
Goa           | 372029   | 361380
Mizoram       | 134305   | 158144
AIEEE-breakup | Female   | Male
2009          | 66286    | 84615
2010          | 70826    | 91687
2011          | 68965    | 89490
Caste classification
AIEEE         | Reserved | General
2009          | 47681    | 64892
2010          | 54703    | 68163
2011          | 57262    | 65810
Case study - education data
Dataset       | Total    | Unique names
AIEEE         | 665227   | 525631
CBSE 10       | 487080   | 341430
CBSE 12       | 378123   | 276476
AIPMT         | 435288   | 327665
Case study - social media data
Dataset       | Total    | Valid names
Koo           | 4061670  | 1761958
Table 5. Performance of the models for gender classification on each dataset.
Model     | CBSE F1 | CBSE Acc | ER F1 | ER Acc | AIEEE F1 | AIEEE Acc
LR        | 90.93   | 91.03    | 73.23 | 73.55  | 87.24    | 87.38
SVM       | 93.69   | 93.82    | 37.91 | 46.85  | 85.12    | 85.31
Char-CNN  | 96.12   | 96.18    | 89.72 | 89.74  | 94.54    | 94.13
Char-LSTM | 95.75   | 95.81    | 90.23 | 90.41  | 94.62    | 94.72
BERT      | 96.94   | 96.97    | 92.52 | 92.56  | 95.99    | 96.06
MuRIL     | 97.04   | 97.07    | 92.45 | 92.49  | 95.90    | 95.97
IndicBERT | 96.28   | 96.32    | 91.48 | 91.52  | 94.48    | 94.59
mBERT     | 96.76   | 96.80    | 92.46 | 92.50  | 95.76    | 95.84
Table 6. Performance of the models for caste classification on the AIEEE dataset.
Model     | Complete Name F1 | Complete Name Acc | Name & State F1 | Name & State Acc
LR        | 68.64 | 68.71 | 69.58 | 69.62
SVM       | 53.82 | 61.73 | 59.58 | 64.82
Char-CNN  | 71.18 | 71.57 | 72.74 | 73.21
BERT      | 71.80 | 72.62 | 73.99 | 74.70
MuRIL     | 71.57 | 71.91 | 73.04 | 73.79
IndicBERT | 69.72 | 70.66 | 72.03 | 72.86
mBERT     | 71.34 | 73.05 | 73.60 | 74.61
Table 7. Table listing some common errors by the best performing baselines and the best performing transformer models. Here W stands for wrong and C stands for correct, and XX denotes the transformer and API results respectively; e.g., WC lists names where the transformer predicted wrong while the API predicted correct. The letter in brackets denotes the gender (M for male and F for female). The listed names have multiple instances in the datasets, so none of the names uniquely identifies any person. (Columns: Dataset, CC, CW, ...)
Table 8 displays the values that are plotted in the left plot of Figure 1.
A.7 Case Studies - State-wise Results
To understand the state-wise distribution of caste and gender, we answer the following additional research questions (ARQ).
- ARQ1: Which states in India have the highest representation of females and backward castes in higher education compared to their population?
- ARQ2: Which states in India have been successful in achieving a significant decrease in bias toward females and backward castes over time? Which states are lacking in this aspect?
Fig. 2. Population-normalized distribution of women and backward caste students in AIEEE 2011 data across Indian states.
Table 8. Median percentile of Women and Reserved students in AIEEE data.
Dataset    | Women: All Data | Women: Top 1% | Women: Top 10% | Reserved: All Data | Reserved: Top 1% | Reserved: Top 10%
AIEEE 2004 | 50.00 | 48.53 | 55.60 | 54.31 | 57.81 | 52.67
AIEEE 2005 | 47.82 | 60.38 | 56.48 | 54.87 | 43.68 | 52.43
AIEEE 2006 | 48.67 | 49.08 | 57.88 | 54.06 | 47.23 | 52.78
AIEEE 2007 | 47.90 | 47.14 | 56.35 | 54.99 | 57.38 | 53.40
AIEEE 2008 | 46.78 | 55.85 | 55.03 | 55.20 | 45.97 | 53.46
AIEEE 2009 | 48.10 | 57.56 | 54.98 | 54.35 | 49.39 | 53.98
AIEEE 2010 | 50.71 | 55.55 | 55.82 | 54.22 | 52.8  | 54.28
AIEEE 2011 | 51.04 | 48.35 | 53.83 | 53.88 | 57.14 | 54.94
Table 9. Gender and caste breakup (%) in the Koo data.
Table 10. % users in the oldest 1% of data sorted by creation date.
       | General | Reserved
Male   | 67.1    | 12.66
Female | 17.65   | 2.58
Table 11. % users in the most recent 1% of data sorted by creation date.
       | General | Reserved
Male   | 73.44   | 13.84
Female | 11.22   | 1.50
Table 12. % users in the top 1% of data sorted by number of followers.
       | General | Reserved
Male   | 73.26   | 12.66
Female | 12.52   | 1.56
Table 13. % users in the bottom 1% of data sorted by number of followers.
       | General | Reserved
Male   | 74.91   | 12.08
Female | 11.74   | 1.27
1 https://www.britannica.com/place/India/Indo-European-languages
2 https://github.com/vahini01/IndianDemographics
3 Detailed stats are available in Appendix A.
4 https://www.kooapp.com/
5 https://resultsarchives.nic.in
9 https://www.kooapp.com/
10 IndicBERT supports the following 12 languages: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu.
11 https://en.wikipedia.org/wiki/2011_Census_of_India
A Appendix
A.1 Dataset statistics
Table 4 displays the dataset stats.
A.2 Baseline APIs and Models
We used a bunch of APIs available for gender classification as baselines and compared them with the results obtained from our transformer-based methods.
Gender API [2]: Gender-API.com is a simple-to-implement solution that adds gender information to existing records. It receives input via an API and returns the split-up name (first name, last name) and gender to the app or the website. According to the website, it will search for the name in a database belonging to the specific country, and if it is not found, it will perform a global lookup. If it cannot find a name in a global lookup, it performs several normalizations on the name to correct typos and cover all spelling variants.
Onograph API [11]: OnoGraph is a set of services that predicts a person's characteristics based on their name. It can predict nationality, gender, and location (where they live). The services are based on the world's largest private database of living people, which contains over 4.25 billion people (as of July 2020). According to the documentation, "OnoGraph's results are the most accurate of any comparable service; and it recognizes around 40 million more names than the nearest comparable service."
Ambekar, A., Ward, C.B., Mohammed, J., Male, S., Skiena, S.: Name-ethnicity classification from open sources. In: KDD, pp. 49-58. Association for Computing Machinery, New York, NY, USA (2009)
API, G.: https://gender-api.com (2021)
API, N.: https://www.nameapi.org/en/home/ (2021)
Chakraborty, S., Dutta, P., Roychowdhury, S., Mukherjee, A.: CRUSH: Contextually regularized and user anchored self-supervised hate speech detection. In: Findings of the Association for Computational Linguistics: NAACL 2022, pp. 1874-1886. Association for Computational Linguistics, Seattle, United States (2022). https://doi.org/10.18653/v1/2022.findings-naacl.144
Chakraborty, S., Goyal, P., Mukherjee, A.: Aspect-based sentiment analysis of scientific reviews. In: Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020 (JCDL '20), pp. 207-216. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3383583.3398541
Chakraborty, S., Goyal, P., Mukherjee, A.: (Im)balance in the representation of news? An extensive study on a decade long dataset from India. International Conference on Social Informatics, SocInfo (2022). https://arxiv.org/abs/2110.14183
Hu, Y., Hu, C., Tran, T., Kasturi, T., Joseph, E., Gillingham, M.: What's in a name? Gender classification of names with character based machine learning models (2021)
Krüger, S., Hermann, B.: Can an online service predict gender? On the state-of-the-art in gender identification from texts. In: Proceedings of the 2nd International Workshop on Gender Equality in Software Engineering (GE '19), pp. 13-16. IEEE Press (2019). https://doi.org/10.1109/GE.2019.00012
Mueller, J., Stumme, G.: Gender inference using statistical name characteristics in Twitter. In: Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016 (MISNC, SI, DS 2016). Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2955129.2955182
Parasurama, P.: raceBERT: A transformer-based model for predicting race and ethnicity from names (2021). https://arxiv.org/abs/2112.03807
Singh, A.K., Jain, C., Jain, J., Jain, R.R., Sehgal, S., Pandey, T., Kumaraguru, P.: What's kooking? Characterizing India's emerging social network, Koo. In: Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 193-200. Association for Computing Machinery, New York, NY, USA (2021)
Sood, G., Laohaprapanon, S.: Predicting race and ethnicity from the sequence of characters in a name (2018)
Swami, S., Khandelwal, A., Shrivastava, M., Akhtar, S.: LTRC IIITH at IberEval 2017: Stance and gender detection in tweets on Catalan independence. CEUR Workshop Proceedings 1881, 199-203 (2017)
Tang, C., Ross, K., Saxena, N., Chen, R.: What's in a name: A study of names, gender inference, and gender behavior in Facebook. In: Xu, J., Yu, G., Zhou, S., Unland, R. (eds.) Database Systems for Advanced Applications - 16th International Conference, DASFAA 2011, International Workshops, pp. 344-356 (2011)
Treeratpituk, P., Giles, C.L.: Name-ethnicity classification and ethnicity-sensitive name matching. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI'12), pp. 1141-1147. AAAI Press (2012)
Tripathi, A., Faruqui, M.: Gender prediction of Indian names. In: IEEE Technology Students' Symposium, pp. 137-141. IEEE, Kharagpur (2011). https://doi.org/10.1109/TECHSYM.2011.5783842
| [
"https://github.com/vahini01/IndianDemographics"
] |
[
"An EM Approach to Non-autoregressive Conditional Sequence Generation",
"An EM Approach to Non-autoregressive Conditional Sequence Generation"
] | [
"Zhiqing Sun ",
"Yiming Yang "
] | [] | [] | Autoregressive (AR) models have been the dominating approach to conditional sequence generation, but are suffering from the issue of high inference latency. Non-autoregressive (NAR) models have been recently proposed to reduce the latency by generating all output tokens in parallel but could only achieve inferior accuracy compared to their autoregressive counterparts, primarily due to a difficulty in dealing with the multi-modality in sequence generation. This paper proposes a new approach that jointly optimizes both AR and NAR models in a unified Expectation-Maximization (EM) framework. In the E-step, an AR model learns to approximate the regularized posterior of the NAR model. In the M-step, the NAR model is updated on the new posterior and selects the training examples for the next AR model. This iterative process can effectively guide the system to remove the multi-modality in the output sequences. To our knowledge, this is the first EM approach to NAR sequence generation. We evaluate our method on the task of machine translation. Experimental results on benchmark data sets show that the proposed approach achieves competitive, if not better, performance with existing NAR models and significantly reduces the inference latency. | null | [
"https://arxiv.org/pdf/2006.16378v1.pdf"
] | 220,265,867 | 2006.16378 | 3dd6ceabc36725fa8f8debdaa4a87ec4e35e8c22 |
An EM Approach to Non-autoregressive Conditional Sequence Generation
Zhiqing Sun
Yiming Yang
An EM Approach to Non-autoregressive Conditional Sequence Generation
Autoregressive (AR) models have been the dominating approach to conditional sequence generation, but are suffering from the issue of high inference latency. Non-autoregressive (NAR) models have been recently proposed to reduce the latency by generating all output tokens in parallel but could only achieve inferior accuracy compared to their autoregressive counterparts, primarily due to a difficulty in dealing with the multi-modality in sequence generation. This paper proposes a new approach that jointly optimizes both AR and NAR models in a unified Expectation-Maximization (EM) framework. In the E-step, an AR model learns to approximate the regularized posterior of the NAR model. In the M-step, the NAR model is updated on the new posterior and selects the training examples for the next AR model. This iterative process can effectively guide the system to remove the multi-modality in the output sequences. To our knowledge, this is the first EM approach to NAR sequence generation. We evaluate our method on the task of machine translation. Experimental results on benchmark data sets show that the proposed approach achieves competitive, if not better, performance with existing NAR models and significantly reduces the inference latency.
Introduction
State-of-the-art conditional sequence generation models (Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) typically rely on an AutoRegressive (AR) factorization scheme to produce the output sequences. Denoting by $x = (x_1, \ldots, x_T)$ an input sequence of length $T$, and by $y = (y_1, \ldots, y_{T'})$ a target sequence of length $T'$, the conditional probability of $y$ given $x$ is factorized as

$$p_{AR}(y|x) = \prod_{i=1}^{T'} p(y_i \mid x, y_1, y_2, \ldots, y_{i-1}). \tag{1}$$
Because such a sequential factorization cannot take full advantage of parallel computing, it suffers from high inference latency.
Recently, Non-AutoRegressive (NAR) sequence models (Gu et al., 2017; Lee et al., 2018) have been proposed to tackle the problem of inference latency by removing the sequential dependencies among the output tokens:

$$p_{NAR}(y|x) = p(T'|x) \prod_{i=1}^{T'} p(y_i \mid x, T'). \tag{2}$$
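To make the latency contrast between Eq. (1) and Eq. (2) concrete, the sketch below compares greedy AR decoding, which needs one forward pass per output token, with NAR decoding, which produces all tokens in a single forward pass. The `ar_model`, `nar_model`, and `length_model` callables and their signatures are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def greedy_ar_decode(ar_model, src, max_len, bos_id, eos_id):
    """Sequential decoding: one model call per generated token (Eq. 1)."""
    ys = [bos_id]
    for _ in range(max_len):
        # ar_model is assumed to return next-token logits given the current prefix.
        logits = ar_model(src, torch.tensor(ys).unsqueeze(0))   # [1, vocab]
        next_tok = int(logits.argmax(dim=-1))
        ys.append(next_tok)
        if next_tok == eos_id:
            break
    return ys[1:]

def nar_decode(nar_model, length_model, src):
    """Parallel decoding: predict the target length, then all tokens at once (Eq. 2)."""
    tgt_len = int(length_model(src).argmax(dim=-1))             # arg max_T' p(T'|x)
    logits = nar_model(src, tgt_len)                            # [1, tgt_len, vocab]
    return logits.argmax(dim=-1).squeeze(0).tolist()            # independent argmax per position
```

The loop in the AR sketch is exactly what serial decoding cannot parallelize, while the NAR sketch issues a single call regardless of the output length.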
This formulation allows every token to be decoded in parallel and hence brings a significant reduction of the inference latency. However, NAR models also suffer from the conditional independence assumption among the output tokens, and usually do not perform as well as their AR counterparts. Such a performance gap is particularly evident when the output distribution exhibits a multi-modality phenomenon (Gu et al., 2017), which means that the same input sequence can be mapped to multiple correct output sequences. Such a multi-modal output distribution cannot be represented as a product of conditionally independent per-position distributions in NAR models (see Sec. 3.2 for a detailed discussion).
How to overcome the multi-modality issue has been a central focus of recent efforts to improve NAR models. A standard approach is sequence-level knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016), which replaces the target part $y$ of each training instance $(x, y)$ with the prediction $\hat{y}$ of a pre-trained AR model (a.k.a. the "teacher model"). Such a replacement strategy removes the one-to-many mappings from the original dataset. The justification is that, in practice, we do not really need sequence generation models to mimic a diverse output distribution for tasks such as machine translation and text summarization. This knowledge distillation strategy has been shown to be effective for improving the performance of NAR models in multiple studies (Gu et al., 2017; Kaiser et al., 2018; Li et al., 2019; Ma et al., 2019; Sun et al., 2019; Ghazvininejad et al., 2019; Gu et al., 2019).
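As a rough illustration, sequence-level knowledge distillation can be sketched as regenerating the target side of the training set with a pre-trained AR teacher. The `teacher.beam_search` interface and the beam size below are assumptions introduced for illustration, not the exact procedure of any specific prior work.

```python
def distill_dataset(teacher, src_sentences, beam_size=5):
    """Sequence-level knowledge distillation (Kim & Rush, 2016), sketched.

    Each ground-truth target y is replaced by the teacher's highest-scoring
    hypothesis y_hat, removing one-to-many mappings from the training data.
    """
    distilled = []
    for x in src_sentences:
        # teacher.beam_search is assumed to return hypotheses sorted by score.
        y_hat = teacher.beam_search(x, beam_size=beam_size)[0]
        distilled.append((x, y_hat))
    return distilled
```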
We want to point out that in all NAR methods with knowledge distillation, the teacher AR model is pre-trained only once on the ground-truth data and then used to generate the training targets for the NAR model. We argue that such a single-pass knowledge distillation process may not be sufficient for optimizing the NAR model, as the sequences $\hat{y}$ predicted by the AR model cannot be perfect. More importantly, it is not necessarily the best choice for alleviating the multi-modality problem in the NAR model. In other words, without knowing how the choice of $\hat{y}$ by the AR model would affect the training of the NAR model, the current knowledge distillation approach is unavoidably sub-optimal.
To address this fundamental limitation, we propose a novel Expectation-Maximization (EM) approach to training NAR models, in which the teacher (an AR model) and the student (a NAR model) help each other in a closed loop, and the iterative updates of the models are guaranteed to converge to a local optimum. This approach gives extra power to knowledge distillation between AR and NAR models and is, to our knowledge, the first EM approach to NAR models. Fig. 1 illustrates the new framework. In addition, we develop a principled plug-and-play decoding module that effectively removes word duplication from the model's output. Our experiments on three machine translation benchmark datasets show that the proposed approach achieves prediction accuracy competitive with the best NAR models while significantly reducing inference latency.
Related Work
Related work can be divided into two groups: non-autoregressive methods for conditional sequence generation, and approaches to knowledge distillation in non-autoregressive models.
Recent work on non-autoregressive sequence generation has developed several ways to address the multi-modality problem. Some works design better training objectives (Shao et al., 2019; Wei et al., 2019) or regularization terms (Wang et al., 2019; Guo et al., 2018). Other methods focus on directly modeling multi-modal target distributions via hidden variables (Gu et al., 2017; Kaiser et al., 2018; Ran et al., 2019; Ma et al., 2019) or sophisticated output structures (Libovickỳ & Helcl, 2018; Sun et al., 2019). A few recent works (Lee et al., 2018; Stern et al., 2019; Ghazvininejad et al., 2019; Gu et al., 2019) use a multi-pass iterative-refinement process to generate the final outputs, where the first pass produces an initial output sequence and the following passes refine it iteratively at inference time.
As for knowledge distillation in NAR models, Gu et al. (2017) made the first effort to use knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016). Recently, Zhou et al. (2019) analyzed why knowledge distillation reduces the complexity of datasets and is therefore helpful for training NAR models. They also used Born-Again Networks (BANs) (Furlanello et al., 2018) to produce simplified training data for NAR models.
All the above methods take a pre-trained AR model as the teacher model for knowledge distillation; none of them iteratively updates the teacher model based on the feedback from (or the measured performance of) the NAR model. This is the fundamental difference between existing work and our proposed EM approach in this paper.
Problem Definition
Conditional Sequence Generation
Let us describe the problem of conditional sequence generation in the context of machine translation, and use the terms "sequence" and "sentence", "source" and "input", "target" and "output" interchangeably. We use $x$ and $y$ to denote the source and target sentences, $x_i$ to denote the $i$-th token in $x$, and $\mathcal{X} = \{x^1, x^2, \ldots, x^N\}$ and $\mathcal{Y} = \{y^1, y^2, \ldots, y^N\}$ to denote a parallel dataset of $N$ sentence pairs in the source and target languages, respectively.
The training of both AutoRegressive (AR) and Non-AutoRegressive (NAR) sequence generation models is performed via likelihood maximization over the parallel data.

How powerful are NAR models? Are they as expressive as AR models? Our answer is both yes and no. On one hand, it is easy to see that NAR models can only capture distributions that factorize into conditionally independent parts. On the other hand, we show next that if instance-level multi-modality can be removed from the training data (e.g., via sequence-level knowledge distillation), then NAR models are just as powerful as AR models.
Theoretical expressiveness of NAR models
Let us focus on the expressive power of NAR models when instance-level multi-modality is removed, that is, when the training examples contain only one-to-one and many-to-one mappings. More specifically, we consider the ability of vanilla Non-Autoregressive Transformer (NAT) models (Gu et al., 2017; Vaswani et al., 2017) to approximate arbitrary continuous single-valued sequence-to-sequence functions $\mathbb{R}^{d\times n} \to \mathbb{R}^{d\times m}$, where $n$ and $m$ are the input and output sequence lengths and $d$ is the model dimension.
Given two functions $f_1, f_2: \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times m}$ and $p \in [1, \infty)$, define the distance between them as

$$d_p(f_1, f_2) = \left( \int \| f_1(X) - f_2(X) \|_p^p \, dX \right)^{1/p}. \tag{8}$$

We can then make the following statement:

Theorem 4.1. Let $1 \le p < \infty$ and $\epsilon > 0$. Then, for any given continuous sequence-to-sequence function $f: \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times m}$, there exists a non-autoregressive Transformer network $g$ such that $d_p(f, g) \le \epsilon$.
This theorem is a corollary of Theorem 2 in Yun et al. (2020). For completeness, we provide the formal theorem with proof in the appendix.
What limits the success of NAR models in practice?
Theorem 4.1 shows that for any sequence-to-sequence dataset containing no instance-level multi-modality, we can always find a good NAT model to fit this dataset. However, in reality, it is still a big challenge for NAT models to fit the distilled deterministic training data very well.
The gap between theory and practice is due to the fact that in theory we may use as many Transformer layers as needed, but in reality, there are only a few layers (e.g., 6 layers) in the Transformer model. This would greatly restrict the model capacity of real NAR models.
To further understand this limitation, let us examine the following two hypotheses:
• The NAT model intrinsically cannot accurately produce very long output sequences when it has only a few Transformer layers.
• The corpus-level multi-modality in the data is hard for NAT models to deal with (i.e., to memorize the "mode" of the output for each input).
These hypotheses focus on two different reasons that might cause the poor performance of NAR models. In order to verify which one is true, we design two types of synthetic data and experiments.
In Experiment I, a synthetic translation dataset is constructed as follows. The source and target sides share the same vocabulary {1, 2, 3, 4, 5}; the tokens 1, 2, 3, 4, and 5 are translated into 1, 2 2, 3 3 3, 4 4 4 4, and 5 5 5 5 5, respectively. The translation is deterministic, i.e., there is no multi-modality in the resulting parallel dataset. In Experiment II, we randomly insert four 0 tokens at the front or the back of each target sentence from the first dataset. In other words, the source-to-target translation in the second dataset is non-deterministic and hence exhibits corpus-level multi-modality. In addition, we filter the source data in Experiment II to make sure that there is no instance-level multi-modality in this dataset. Toy examples illustrating the two types of datasets can be found in Tab. 1. Following these rules, we randomly generated 2,000,000 sentences for training and 1,000 sentences for testing; both the training and testing source sentences have length 30.
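A minimal sketch of how such synthetic data could be generated is given below; the authors do not specify their sampling procedure beyond the rules above, so details such as how the four 0 tokens are split and the omission of the source-side filtering step are assumptions.

```python
import random

EXPANSION = {1: [1], 2: [2, 2], 3: [3, 3, 3], 4: [4, 4, 4, 4], 5: [5, 5, 5, 5, 5]}

def make_pair(src_len=30, experiment="I"):
    """Generate one (source, target) pair for the synthetic experiments."""
    src = [random.randint(1, 5) for _ in range(src_len)]
    tgt = [t for tok in src for t in EXPANSION[tok]]        # deterministic expansion
    if experiment == "II":
        n_front = random.randint(0, 4)                      # split four 0s between
        tgt = [0] * n_front + tgt + [0] * (4 - n_front)     # the front and the back
    # Note: Experiment II additionally filters sources so that each distinct
    # source keeps only one target (no instance-level multi-modality); that
    # deduplication step is omitted here for brevity.
    return src, tgt
```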
We trained both an AR Transformer and a NAR Transformer on these synthetic datasets; each model consists of a 3-layer encoder and a 3-layer decoder. The detailed model settings can be found in the appendix. In our evaluation we used the ground-truth lengths for decoding in both the AR and NAR Transformers; the whole-sentence matching accuracies of these models are listed in Tab. 2.
The results of Experiment I show that the autoregressive Transformer and the non-autoregressive Transformer achieve high accuracies of 99.9% and 95.7%, respectively, when the training data have no multi-modality. In contrast, the results of Experiment II show that the non-autoregressive Transformer fails completely on the synthetic dataset with corpus-level multi-modality. The sharp contrast in these synthetic experiments indicates that the real problem for NAR models is indeed the corpus-level multi-modality issue.
Proposed Method
Let us formally introduce our EM approach to addressing the multi-modality issue in NAR models, followed by a principled decoding module for effective removal of word duplication in the predicted output.
The EM Framework
With the definition of corpus-level multi-modality ($CM$ in Eq. 5), we consider how to reduce this quantity for better training of NAR models. Formally, given source data $\mathcal{X}$, we want to find target data $\mathcal{Y}^*$ that satisfies the following property:

$$\mathcal{Y}^* = \arg\min_{\mathcal{Y}} CM_{\mathcal{X}}(\mathcal{Y}) = \arg\min_{\mathcal{Y}} \mathbb{E}_{(x,y)\sim(\mathcal{X},\mathcal{Y})}\left[ -\log p_{NAR}(y|x;\theta^*) \right].$$
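Since $CM_{\mathcal{X}}(\mathcal{Y})$ is simply the expected negative log-likelihood of the (converged) NAR model over the parallel corpus, it can be estimated directly from model scores. The sketch below assumes a `nar_model.log_prob(x, y)` interface, which is an illustrative assumption rather than a documented API.

```python
def corpus_level_multimodality(nar_model, pairs):
    """Estimate CM_X(Y): average NAR negative log-likelihood over (x, y) pairs."""
    total_nll = 0.0
    for x, y in pairs:
        total_nll += -nar_model.log_prob(x, y)   # -log p_NAR(y | x; theta*)
    return total_nll / len(pairs)
```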
However, there can be many trivial solutions for $\mathcal{Y}^*$. For example, we can simply construct a dataset with no variation in the output to achieve zero corpus-level multi-modality.
To avoid triviality, we may further apply a constraint to $\mathcal{Y}^*$. This leads us to the posterior regularization framework.
Posterior Regularization (Ganchev et al., 2010) is a probabilistic framework for structured, weakly supervised learning. In this framework, we can re-write our objective as follows:

$$\mathcal{L}_Q(\theta) = \min_{q \in Q} \mathrm{KL}\big(q(\mathcal{Y}) \,\|\, p_{NAR}(\mathcal{Y}|\mathcal{X};\theta)\big),$$
where $q$ is the posterior distribution over $\mathcal{Y}$ and $Q$ is a constrained posterior set that controls the quality of the parallel data, given by

$$Q = \big\{ q(\mathcal{Y}) : \mathbb{E}_{\mathcal{Y}\sim q}[\mathcal{Q}_{\mathcal{X}}(\mathcal{Y})] \ge b \big\},$$

where $\mathcal{Q}_{\mathcal{X}}$ is a quality metric mapping $(\mathcal{X}, \mathcal{Y})$ to $\mathbb{R}^N$ on the training set and $b$ is a bound vector.
For sequence generation tasks, there are many corpus-level quality metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin & Hovy, 2003). However, they are known to be inaccurate for measuring the quality of single sentence pairs. Thus, we use the likelihood score of a pre-trained AR model as a more reliable quality metric:

$$[\mathcal{Q}_{\mathcal{X}}(\mathcal{Y})]_i = \mathcal{Q}_{x^i}(y^i) = \log p_{AR}(y^i|x^i;\phi_1),$$

where $\phi_1$ denotes the AR model trained on the original ground-truth dataset.
Given the posterior-regularized likelihood $\mathcal{L}_Q(\theta)$, we use the EM algorithm (McLachlan & Krishnan, 2007; Ganchev et al., 2010) to optimize it. In the E-step (a.k.a. the inference procedure), the goal is to fix $p_{NAR}$ and update the posterior distribution:

$$q_{t+1} = \arg\min_{q \in Q} \mathrm{KL}\big(q(\mathcal{Y}) \,\|\, p_{NAR}(\mathcal{Y}|\mathcal{X};\theta_t)\big).$$

In the M-step (a.k.a. the learning procedure), we fix $q(\mathcal{Y})$ and update $\theta$ to maximize the expected log-likelihood:

$$\theta_{t+1} = \arg\max_{\theta} \mathbb{E}_{q_{t+1}}\big[\log p_{NAR}(\mathcal{Y}|\mathcal{X};\theta)\big].$$
Next, we introduce the details of the E-step and the M-step in our framework.
Inference Procedure. The E-step aims to compute the posterior distribution $q(\mathcal{Y})$ that minimizes the KL divergence between $q(\mathcal{Y})$ and $p_{NAR}(\mathcal{Y}|\mathcal{X})$. Ganchev et al. (2010) show that for graphical models, $q(\mathcal{Y})$ can be solved efficiently in its dual form. Specifically, the primal solution $q^*$ is given in terms of the dual solution $\lambda^*$ by

$$q^*(\mathcal{Y}) \propto p_{NAR}(\mathcal{Y}|\mathcal{X};\theta_t)\exp\{\lambda^* \cdot \mathcal{Q}_{\mathcal{X}}(\mathcal{Y})\} \propto \prod_{i=1}^{N} p_{NAR}(y^i|x^i;\theta_t)\, p_{AR}(y^i|x^i;\phi_1)^{\lambda^*_i}.$$

However, a problem here, as pointed out by Zhang et al. (2018), is that it is hard to specify the hyper-parameter $b$ so that it effectively bounds the expectation of the features for neural models. Moreover, even when $b$ is given, calculating $\lambda^*$ is still intractable for neural models. Therefore, in this paper we introduce another way to compute $q(\mathcal{Y})$.
We first factorize $q(\mathcal{Y})$ as the product of $\{q(y^i)\}$, and then follow the idea of amortized inference (Gershman & Goodman, 2014) to parameterize $q(y^i)$ with an AR sequence generation model:

$$q(\mathcal{Y}) = \prod_{i=1}^{N} p_{AR}(y^i|x^i;\phi).$$
The E-step can thus be re-written as follows:

$$\phi_{t+1} = \arg\min_{\phi \in Q'} \mathbb{E}_{x\sim\mathcal{X}}\, \mathrm{KL}\big(p_{AR}(y|x;\phi) \,\|\, p_{NAR}(y|x;\theta_t)\big),$$

where the new constrained posterior set $Q'$ is defined as

$$Q' = \big\{ \phi : \mathbb{E}_{p_{AR}(\mathcal{Y}|\mathcal{X};\phi)}[\mathcal{Q}_{\mathcal{X}}(\mathcal{Y})] \ge b \big\}.$$
We further apply the REINFORCE algorithm (Williams, 1992) to estimate the gradient of $\mathcal{L}_Q$ w.r.t. $\phi \in Q'$:

$$\nabla_\phi \mathcal{L}_Q = \mathbb{E}_{x\sim\mathcal{X}}\, \mathbb{E}_{y\sim p_{AR}(y|x;\phi)}\left[ -\log \frac{p_{NAR}(y|x;\theta_t)}{p_{AR}(y|x;\phi)}\, \nabla_\phi \log p_{AR}(y|x;\phi) \right].$$
This can be intuitively viewed as constructing a weighted pseudo training dataset $(\mathcal{X}_{t+1}, \mathcal{Y}_{t+1})$, where the training examples are sampled from $p_{AR}(y|x;\phi)$ and weighted by $\log \frac{p_{NAR}(y|x;\theta_t)}{p_{AR}(y|x;\phi)}$. In practice, we find two problems with implementing this algorithm: one is that sampling from $p_{AR}(y|x;\phi)$ is very inefficient; the other is that the constraint $\phi \in Q'$ cannot be guaranteed. Therefore, we instead use a heuristic when constructing the pseudo training dataset $(\mathcal{X}_{t+1}, \mathcal{Y}_{t+1})$: we first follow Wu et al. (2018) and replace the inefficient sampling process with beam search (Sutskever et al., 2014) on $p_{AR}(y|x;\phi_t)$, and then filter out the candidates that do not satisfy the following condition:

$$\mathcal{Q}_x(y) \ge \bar{b}_i,$$

where $\bar{b}_i$ is a newly introduced pseudo bound that can be set empirically by early stopping. In this way, we control the quality of $p_{AR}(y|x;\phi_{t+1})$ by controlling the quality of its training data. Finally, we choose the candidates with the highest $p_{AR}(y|x;\phi_t)\log \frac{p_{NAR}(y|x;\theta_t)}{p_{AR}(y|x;\phi_t)}$ score as the training examples in $\mathcal{Y}_{t+1}$, while $\mathcal{X}_{t+1}$ is merely a copy of $\mathcal{X}$.
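The following sketch summarizes this heuristic E-step construction. The model interfaces (`beam_search`, `log_prob`) and the per-sentence bound handling are assumptions introduced for illustration; this is not the authors' released code.

```python
import math

def build_pseudo_dataset(ar_model, ar_teacher, nar_model, src_sentences,
                         bounds, beam_size=5):
    """Heuristic E-step: beam-search candidates, quality filter, weighted selection."""
    pseudo_targets = []
    for i, x in enumerate(src_sentences):
        candidates = ar_model.beam_search(x, beam_size=beam_size)
        best_y, best_score = None, -math.inf
        for y in candidates:
            quality = ar_teacher.log_prob(x, y)        # Q_x(y) = log p_AR(y|x; phi_1)
            if quality < bounds[i]:                    # drop candidates below the pseudo bound
                continue
            log_p_ar = ar_model.log_prob(x, y)         # log p_AR(y|x; phi_t)
            log_p_nar = nar_model.log_prob(x, y)       # log p_NAR(y|x; theta_t)
            score = math.exp(log_p_ar) * (log_p_nar - log_p_ar)
            if score > best_score:
                best_score, best_y = score, y
        if best_y is not None:
            pseudo_targets.append((x, best_y))
    return pseudo_targets
```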
In each E-step, in principle, we should let $\phi_{t+1}$ converge under the current NAR model. Although this could be achieved by constructing the pseudo dataset and training the AR model multiple times, it is practically prohibitive due to the expensive training cost. We therefore use only a single update iteration of the AR model in the inner loop of each E-step.
Learning Procedure. In the M-step, we seek to learn the parameters $\theta_{t+1}$ under the parameterized posterior distribution $p_{AR}(\mathcal{Y}|\mathcal{X};\phi_{t+1})$. However, directly sampling training examples from the AR model would re-introduce the instance-level multi-modality problem. Therefore, we apply sequence-level knowledge distillation (Kim & Rush, 2016); that is, we only use the maximum-likelihood targets under the AR model to train the NAR model:

$$\theta_{t+1} = \arg\max_{\theta} \mathbb{E}_{(x,y)\sim(\mathcal{X},\hat{\mathcal{Y}}_{t+1})}\big[\log p_{NAR}(y|x;\theta)\big],$$

where $\hat{\mathcal{Y}}_{t+1}$ denotes the training examples produced by the AR model $p_{AR}(\mathcal{Y}|\mathcal{X};\phi_{t+1})$.
Joint Optimization. We first pre-train an AR teacher model on the ground-truth parallel data as $p_{AR}(y|x;\phi_1)$. Then we alternately optimize $p_{NAR}$ and $p_{AR}$ until convergence. We summarize the optimization algorithm in Alg. 1. In our EM method, the AR and NAR models are jointly optimized to reduce the corpus-level multi-modality.
Algorithm 1: An EM approach to NAR models
Input: parallel training dataset $(\mathcal{X}, \mathcal{Y})$
$t = 0$; pre-train $p_{AR}(y|x;\phi_1)$ on $(\mathcal{X}, \mathcal{Y})$.
while not converged do
    $t = t + 1$
    M-step (learning procedure): construct the distillation dataset $\hat{\mathcal{Y}}_t$ with $p_{AR}(y|x;\phi_t)$; train $p_{NAR}(y|x;\theta_t)$ on $(\mathcal{X}, \hat{\mathcal{Y}}_t)$.
    E-step (inference procedure): construct the pseudo dataset $(\mathcal{X}_{t+1}, \mathcal{Y}_{t+1})$; train $p_{AR}(y|x;\phi_{t+1})$ on $(\mathcal{X}_{t+1}, \mathcal{Y}_{t+1})$.
end while
Output: a NAR model $p_{NAR}(y|x;\theta_t)$.
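A compact Python rendering of Alg. 1 is sketched below; the training routines (`train_ar`, `train_nar`), the fixed iteration budget standing in for the convergence check, and the trivial bounds are placeholders assumed for illustration, and the pseudo-dataset construction refers to the heuristic E-step sketched earlier.

```python
def em_train(src, tgt, train_ar, train_nar, build_pseudo_dataset,
             distill_dataset, max_iters=6):
    """EM loop of Alg. 1: alternate distillation (M-step) and AR re-training (E-step)."""
    ar_teacher = train_ar(list(zip(src, tgt)))        # p_AR(y|x; phi_1) on ground truth
    ar_model, nar_model = ar_teacher, None
    for t in range(1, max_iters + 1):
        # M-step: distill the current AR model, then fit the NAR student.
        distilled = distill_dataset(ar_model, src)
        nar_model = train_nar(distilled)
        # E-step: build the filtered pseudo dataset and re-train the AR model on it.
        pseudo = build_pseudo_dataset(ar_model, ar_teacher, nar_model, src,
                                      bounds=[float("-inf")] * len(src))
        ar_model = train_ar(pseudo)
    return nar_model
```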
The Optimal De-duplicated Decoding Module
Word duplication is a well-known problem of NAR models caused by the multi-modality issue. To improve performance, some previous works (Lee et al., 2018; Li et al., 2019) remove duplication from the model prediction by collapsing multiple consecutive occurrences of a token. Such an empirical approach is not technically sound: after collapsing, the length of the target sequence changes, which causes a discrepancy between the predicted target length and the actual sequence length and thus makes the final output sub-optimal.
We aim to solve the word duplication problem in NAR models while preserving the original sequence length. Similar to Sun et al. (2019), we use a Conditional Random Field (CRF) model (Lafferty et al., 2001) for decoding NAR models. The CRF is constructed manually as follows: it treats the tokens to be decoded as the predicted labels, the unary scores of the labels at each position are set to the NAR model's output distribution, and the transition matrix is set to $-\infty \cdot I$, where $I$ is the identity matrix.
Our model is able to find the optimal decoding while considering only the top-3 candidates w.r.t. the unary scores at each position:

Proposition 5.1. In a CRF with a transition matrix of $-\infty \cdot I$, only the top-3 most likely labels at each position can appear in the optimal (most likely) label sequence.

We can thus crop the transition matrix accordingly, keeping only a $3 \times 3$ transition sub-matrix between each pair of adjacent positions. The forward-backward algorithm (Lafferty et al., 2001) is then applied to the top-3 likely labels and the $3 \times 3$ transition sub-matrices to find the optimal decoding with linear time complexity $O(|y|)$.
The proposed decoding module is a lightweight plug-and-play module that can be used with any NAR model. Since this principled decoding method is guaranteed to find the optimal prediction with no word duplication, we refer to it as the optimal de-duplicated (ODD) decoding method.
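A minimal sketch of ODD decoding is given below. It keeps the top-3 tokens per position and runs a Viterbi-style dynamic program that forbids the same token at adjacent positions, which realizes the $-\infty \cdot I$ transition matrix described above. This is our own illustrative implementation, not the authors' code.

```python
import torch

def odd_decode(log_probs):
    """Optimal de-duplicated decoding over per-position log-probs of shape [T, vocab].

    Dynamic program over the top-3 candidates per position, disallowing the
    same token at adjacent positions (transition matrix -inf * I).
    """
    T = log_probs.size(0)
    top_scores, top_tokens = log_probs.topk(3, dim=-1)        # both [T, 3]
    # best[j]: best total score of a valid sequence ending with candidate j at position t
    best = top_scores[0].clone()
    back = torch.zeros(T, 3, dtype=torch.long)
    for t in range(1, T):
        cand = best.unsqueeze(1).expand(3, 3).clone()         # cand[j, k]: come from prev j
        same = top_tokens[t - 1].unsqueeze(1) == top_tokens[t].unsqueeze(0)
        cand[same] = float("-inf")                            # forbid adjacent duplicates
        best, back[t] = cand.max(dim=0)                       # best previous candidate per k
        best = best + top_scores[t]
    # Backtrack the highest-scoring valid path.
    path = [int(best.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    path.reverse()
    return [int(top_tokens[t, j]) for t, j in enumerate(path)]
```

Because the previous position has three distinct candidate tokens, at most one of them can collide with any current candidate, so a valid transition always exists and the returned sequence keeps the originally predicted length.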
6. Experiments
Experimental Settings
We use several benchmark tasks to evaluate the effectiveness of the proposed method, including IWSLT14 German-to-English translation (IWSLT14 De-En) and WMT14 English-to-German/German-to-English translation (WMT14 En-De/De-En). For the WMT14 dataset, we use Newstest2014 as test data and Newstest2013 as validation data. For the IWSLT14/WMT14 datasets, we split words into BPE tokens (Sennrich et al., 2015), forming a 10k/32k vocabulary shared by the source and target languages.
We use the Transformer (Vaswani et al., 2017) model as the AR teacher, and the vanilla Non-Autoregressive Transformer (NAT) (Gu et al., 2017) model with sequence-level knowledge distillation (Kim & Rush, 2016) as the NAR baseline. For both the AR and NAR models, we use the original base setting for the WMT14 datasets and a small setting for the IWSLT14 dataset. To investigate the influence of model size on our method, we also evaluate large/base NAT models on the WMT14/IWSLT14 datasets as a larger model setting. These larger NAT models are not used in the EM iterations; they are merely trained with the final AR teacher from the EM iterations of the original model (base/small for WMT14/IWSLT14). The detailed settings of the model architectures can be found in the appendix.
We use the Adam optimizer (Kingma & Ba, 2014) and employ label smoothing (Szegedy et al., 2016) of 0.1 in all experiments. The base and large models are trained for 125k steps on 8 TPUv3 nodes in each iteration, while the small models are trained for 20k steps. We use a beam size of 20/5 for the AR model in the M/E-step of our EM training algorithm. The pseudo bounds $\{\bar{b}_i\}$ are set by early stopping with the accuracy on the validation set.
Inference
During decoding, the target length $l = |y|$ is predicted by an additional classifier conditioned on the source sentence: $l = \arg\max_{T'} p(T'|x)$. We can also try different target lengths ranging from $(l-b)$ to $(l+b)$, where $b$ is the half-width, obtain multiple translations of different lengths, and then use the AR model $p_{AR}(y|x;\phi_1)$ as a rescorer to select the best translation. Such a decoding-and-rescoring process can be conducted in parallel and is referred to as parallel length decoding. To make a fair comparison with previous work, we set $b$ to 4 and use 9 candidate translations per sentence. For each dataset, we evaluate the model performance with the BLEU score (Papineni et al., 2002). We evaluate the average per-sentence decoding latency on the WMT14 En-De test set with batch size 1 on a single NVIDIA GeForce RTX 2080 Ti GPU, averaging over 5 runs.
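The parallel length decoding procedure can be sketched as follows; the `decode_fn` and `rescore_fn` callables are assumed interfaces for one parallel NAR pass and for AR rescoring, respectively.

```python
def parallel_length_decode(decode_fn, rescore_fn, src, pred_len, half_width=4):
    """Try target lengths l-b .. l+b, decode each in one NAR pass, rescore with the AR model.

    decode_fn(src, tgt_len) -> token list from a single parallel NAR pass (assumed interface).
    rescore_fn(src, y)      -> AR log-likelihood of y given src (assumed interface).
    """
    lengths = range(max(1, pred_len - half_width), pred_len + half_width + 1)
    candidates = [decode_fn(src, tgt_len) for tgt_len in lengths]
    return max(candidates, key=lambda y: rescore_fn(src, y))
```

With $b = 4$, this yields the 9 candidate translations per sentence used in our comparison; all candidates can be decoded and rescored in parallel.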
Main Results
We compare our model with the Transformer (Vaswani et al., 2017) teacher model and several NAR baselines, including NAT-FT (Gu et al., 2017), LT (Kaiser et al., 2018), ENAT (Guo et al., 2018), NAT-BAN (Zhou et al., 2019), NAT-REG (Wang et al., 2019), NAT-HINT (Li et al., 2018), NAT-CTC (Libovickỳ & Helcl, 2018), FlowSeq (Ma et al., 2019), ReorderNAT (Ran et al., 2019), NAT-CRF (Sun et al., 2019), NAT-IR (Lee et al., 2018), CMLM (Ghazvininejad et al., 2019), and LevT (Gu et al., 2019). Tab. 3 reports the performance of our method with the most likely target length $l$, together with other NAR baselines that generate output sequences in a single pass. From the table, we can see that EM training contributes most of the improvement in performance. The optimal de-duplicated (ODD) decoding also significantly improves the model performance. Compared with the other models, our method significantly outperforms all of them, with almost no additional overhead compared with the vanilla NAT.
Tab. 4 shows the performance of our method equipped with rescoring, and of other baselines equipped with rescoring or iterative refinement. Since our method has almost no additional overhead compared with the vanilla NAT, to make a fair comparison with previous work (Kaiser et al., 2018; Lee et al., 2018; Ma et al., 2019; Sun et al., 2019; Gu et al., 2019), we also report results of our method with a larger model setting. From the table, we can still see that EM training significantly improves the performance of the vanilla NAT model, but the effect of ODD decoding is not as pronounced as in the single-pass setting. This shows that the rescoring process can mitigate the word duplication problem to some extent. Surprisingly, we also find that using the larger model does not bring much gain. A potential explanation is that, since our EM algorithm significantly simplifies the training dataset and the NAT model can be over-parameterized, there is not much to gain from further increasing the model size. Compared with the other baselines, our method significantly outperforms the rescored single-pass NAR methods and achieves performance competitive with iterative-refinement models at a much better speedup. Note that these iterative-refinement models (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019) still rely on sequence-level knowledge distillation during training, which indicates that capturing multi-modality in real data remains a hard problem for these approaches. Our EM algorithm may further improve their performance; we leave combining the two techniques for future work.
Analysis of the Convergence
We analyze the convergence of the EM algorithm. We show the dynamics of the performance of the NAR model (test BLEU), the performance of the AR model (test BLEU), and the Normalized Corpus-level Multi-modality (NCM, defined by Eq. 7) on the WMT14 En-De dataset. The results are shown in Tab. 5.
We describe the detailed optimization process here to clarify how our EM algorithm works with early stopping in this example. In the first 5 iterations, since we have no good way to set $\{\bar{b}_i\}$ precisely, we simply set them to zero. After the 5th iteration, however, we observe an accuracy drop on the validation set, so we re-use the quality metrics from the 4th iteration to set $\{\bar{b}_i\}$ and continue the EM algorithm until convergence.
We can see that our EM algorithm takes only a few iterations to converge, which is very efficient. As the EM algorithm continues, the NCM metric, which can be regarded as the optimization objective, decreases monotonically. The performance of the NAR model and that of the AR model also converge after 5 iterations.
Analysis of the Amortized Inference
In our EM method, we employ amortized inference and parameterize the posterior distribution of the target sequences with an AR model. In this section, we investigate the importance of amortized inference. Specifically, we try to directly train the NAR model on $(\mathcal{X}_{t+1}, \mathcal{Y}_{t+1})$ in the M-step. The results are shown in Tab. 6. We can see that parameterizing the posterior distribution with a unified AR model always improves the performance of the NAR model.

Finally, we analyze the effect of the proposed optimal de-duplicated (ODD) decoding approach. We compare it with another plug-and-play de-duplication approach, namely "removing any repetition by collapsing multiple consecutive occurrences of a token" (Lee et al., 2018), which we refer to as post-de-duplication (a minimal sketch of this collapsing operation follows below). The results are shown in Tab. 7. We can see that the proposed ODD decoding consistently outperforms this empirical method, which shows that our approach overcomes the sub-optimality of post-de-duplication.
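For reference, post-de-duplication is just a collapse of consecutive repeats, as sketched here; note how it shortens the output, which is exactly the length discrepancy that ODD decoding avoids.

```python
def post_deduplicate(tokens):
    """Collapse consecutive duplicate tokens, e.g. [7, 7, 3, 3, 3, 9] -> [7, 3, 9]."""
    out = []
    for tok in tokens:
        if not out or tok != out[-1]:
            out.append(tok)
    return out  # shorter than the predicted target length whenever repeats occur
```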
Conclusion
This paper proposes a novel EM approach to non-autoregressive conditional sequence generation, which effectively addresses the multi-modality issue in NAR training by iteratively optimizing both the teacher AR model and the student NAR model. We also develop a principled plug-and-play decoding method for efficiently removing word duplication from the model's output. Experimental results on three tasks demonstrate the effectiveness of our approach. For future work, we plan to examine the effectiveness of our method in a broader range of applications, such as text summarization.
Shao, C., Zhang, J., Feng, Y., Meng, F., and Zhou, J. Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.09320, 2019.

Zhang, J., Liu, Y., Luan, H., Xu, J., and Sun, M. Prior knowledge integration for neural machine translation using posterior regularization. arXiv preprint arXiv:1811.01100, 2018.

Zhou, C., Neubig, G., and Gu, J. Understanding knowledge distillation in non-autoregressive machine translation. arXiv preprint arXiv:1911.02727, 2019.
A. Non-autoregressive Transformers are Universal Approximators of Sequence-to-Sequence Functions
A.1. Problem Definition
A non-autoregressive Transformer (Vaswani et al., 2017) consists of a Transformer encoder and a non-autoregressive Transformer decoder. More concretely, both the encoder and the decoder consist of two types of layers: the multi-head attention layer Attn and the token-wise feed-forward layer FF, both with a skip connection (He et al., 2016). The encoder block of the non-autoregressive Transformer, $t_{enc}$, maps an input $X \in \mathbb{R}^{d\times n}$ consisting of $d$-dimensional embeddings of $n$ tokens to an output $t_{enc}(X) \in \mathbb{R}^{d\times n}$; it consists of a self-attention layer and a feed-forward layer. The decoder block, $t_{dec}$, maps an input $Y \in \mathbb{R}^{d\times m}$ consisting of $d$-dimensional embeddings of $m$ tokens and a context $X \in \mathbb{R}^{d\times n}$ consisting of $d$-dimensional embeddings of $n$ tokens to an output $t_{dec}(X, Y) \in \mathbb{R}^{d\times m}$; it consists of a self-attention layer, an encoder-decoder attention layer, and a feed-forward layer:
$$\mathrm{Attn}(X, Y) = Y + \sum_{i=1}^{h} W_O^i\, W_V^i X \cdot \sigma\!\left[(W_Q^i X)^T (W_K^i Y)\right], \tag{9}$$
$$\mathrm{FF}(Y) = Y + W_2 \cdot \mathrm{ReLU}(W_1 \cdot Y), \tag{10}$$
$$t_{enc}(X) = \mathrm{FF}(\mathrm{Attn}_{enc\text{-}self}(X, X)), \tag{11}$$
$$t_{dec}(X, Y) = \mathrm{FF}(\mathrm{Attn}_{enc\text{-}dec}(X, \mathrm{Attn}_{dec\text{-}self}(Y, Y))), \tag{12}$$

where $W_O^i \in \mathbb{R}^{d\times k}$, $W_V^i, W_K^i, W_Q^i \in \mathbb{R}^{k\times d}$, $W_2 \in \mathbb{R}^{d\times r}$, and $W_1 \in \mathbb{R}^{r\times d}$ are learnable parameters, and $\sigma$ is the softmax function. Following Yun et al. (2020), we do not use layer normalization (Ba et al., 2016) in the setup of our analysis.
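As a concrete reading of Eqs. (9)-(12), a minimal PyTorch sketch of a single-head block (without layer normalization, as in the analysis setup) is given below; the row-major layout, the softmax normalization axis, and the class structure are illustrative assumptions rather than the exact formulation used in the analysis.

```python
import torch
import torch.nn as nn

class SimpleAttn(nn.Module):
    """Single-head residual attention loosely following Eq. (9)."""
    def __init__(self, d, k):
        super().__init__()
        self.WQ, self.WK, self.WV = (nn.Linear(d, k, bias=False) for _ in range(3))
        self.WO = nn.Linear(k, d, bias=False)

    def forward(self, X, Y):
        # X: [n, d] context, Y: [m, d]; output has the same shape as Y.
        scores = torch.softmax(self.WQ(X) @ self.WK(Y).T, dim=0)   # [n, m], normalized over X positions (assumed axis)
        attn = (self.WV(X).T @ scores).T                           # [m, k]
        return Y + self.WO(attn)                                   # residual update of Y

class SimpleFF(nn.Module):
    """Token-wise feed-forward layer with a skip connection, Eq. (10)."""
    def __init__(self, d, r):
        super().__init__()
        self.W1, self.W2 = nn.Linear(d, r), nn.Linear(r, d)

    def forward(self, Y):
        return Y + self.W2(torch.relu(self.W1(Y)))
```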
The family of Transformer encoders consists of $\mathbb{R}^{d\times n} \to \mathbb{R}^{d\times n}$ functions and can be defined as

$$\mathcal{T}^{h,k,r}_{enc} := \left\{ h : \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times n} \;\middle|\; X^0 = X,\; X^i = t^{h,k,r}_{enc}(X^{i-1}),\; h(X) = X^M \right\}, \tag{13}$$

where $t^{h,k,r}_{enc}$ denotes a Transformer encoder block defined by an attention layer with $h$ heads of size $k$ each and a feed-forward layer with $r$ hidden nodes, and $M$ is the number of stacked blocks.
Similarly, the family of non-autoregressive Transformer decoders consists of $\mathbb{R}^{d\times(n+m)} \to \mathbb{R}^{d\times m}$ functions and can be defined as

$$\mathcal{T}^{h,k,r}_{dec} := \left\{ h : \mathbb{R}^{d\times(n+m)} \to \mathbb{R}^{d\times m} \;\middle|\; Y^0 = Y,\; Y^i = t^{h,k,r}_{dec}(X, Y^{i-1}),\; h(X, Y) = Y^N \right\}, \tag{14}$$

where $t^{h,k,r}_{dec}$ denotes a Transformer decoder block defined by attention layers with $h$ heads of size $k$ each and a feed-forward layer with $r$ hidden nodes, and $N$ is the number of stacked blocks.
Finally, the family of non-autoregressive Transformers consists of $\mathbb{R}^{d\times n} \to \mathbb{R}^{d\times m}$ functions and can be defined as

$$\mathcal{T}^{h,k,r} := \left\{ g(X) = h_2(h_1(X + E_1), E_2) \;\middle|\; h_1 \in \mathcal{T}^{h,k,r}_{enc} \text{ and } h_2 \in \mathcal{T}^{h,k,r}_{dec} \right\}, \tag{15}$$

where $E_1 \in \mathbb{R}^{d\times n}$ and $E_2 \in \mathbb{R}^{d\times m}$ are trainable positional embeddings.
A.2. Transformer Encoders are Universal Approximators of Sequence-to-Sequence Functions (Yun et al., 2020)
Recently, Yun et al. (2020) showed that Transformer encoders equipped with positional embeddings are universal approximators of all continuous functions that map a compact domain in $\mathbb{R}^{d\times n}$ to $\mathbb{R}^{d\times n}$.

We first describe the results in Yun et al. (2020). Let us start by defining the target function class $\mathcal{F}_{enc}$, which consists of all continuous sequence-to-sequence functions with compact support that map a compact domain in $\mathbb{R}^{d\times n}$ to $\mathbb{R}^{d\times n}$. Here continuity is defined with respect to any entry-wise $\ell_p$ norm, $1 \le p < \infty$. Given two functions $f_1, f_2 : \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times n}$ and $1 \le p < \infty$, we define the distance between them as

$$d_p(f_1, f_2) := \left( \int \| f_1(X) - f_2(X) \|_p^p \, dX \right)^{1/p}. \tag{16}$$
The class of Transformer encoders with positional embeddings is defined as

$$\mathcal{T}^{h,k,r}_{P\text{-}enc} := \left\{ h_P(X) = h(X + E) \;\middle|\; h \in \mathcal{T}^{h,k,r}_{enc} \text{ and } E \in \mathbb{R}^{d\times n} \right\}, \tag{17}$$

where $E$ is a learnable positional embedding. The following result shows that a Transformer encoder with positional embeddings, with a constant number of heads $h$, head size $k$, and hidden layer size $r$, can approximate any function in $\mathcal{F}_{enc}$:
Theorem A.1. Let $1 \le p < \infty$ and $\epsilon > 0$. Then, for any given $f \in \mathcal{F}_{enc}$, there exists a Transformer encoder $h \in \mathcal{T}^{2,1,4}_{P\text{-}enc}$ such that $d_p(f, h) \le \epsilon$.
We provide a sketch of the proof from Yun et al. (2020) here. Without loss of generality, we can assume that the compact support of $f$ is contained in $[0, 1]^{d\times n}$. The proof of Theorem A.1 proceeds in the following three steps:

Step 1: Approximate $\mathcal{F}_{enc}$ with piece-wise constant functions. We first use (a variant of) the classical result that any continuous function can be approximated up to arbitrary accuracy by piece-wise constant functions. For $\delta > 0$, we define the following class of piece-wise constant functions:

$$\bar{\mathcal{F}}_{enc}(\delta) := \left\{ \bar{f} : X \mapsto \sum_{L \in \mathbb{G}_\delta} A_L\, \mathbf{1}\{X \in S_L\} \;\middle|\; A_L \in \mathbb{R}^{d\times n} \right\}, \tag{18}$$

where $\mathbb{G}_\delta := \{0, \delta, \ldots, 1-\delta\}^{d\times n}$ and, for a grid point $L \in \mathbb{G}_\delta$, $S_L := \prod_{j=1}^{d} \prod_{k=1}^{n} [L_{j,k}, L_{j,k} + \delta) \subset [0, 1]^{d\times n}$ denotes the associated cube of width $\delta$. Let $\bar{f} \in \bar{\mathcal{F}}_{enc}(\delta)$ be such that $d_p(f, \bar{f}) \le \epsilon/3$.
Step 2: Approximate $\bar{\mathcal{F}}_{enc}(\delta)$ with modified Transformer encoders. We then consider a slightly modified Transformer architecture, where the softmax operator $\sigma[\cdot]$ and $\mathrm{ReLU}(\cdot)$ are replaced by the hardmax operator $\sigma_H[\cdot]$ and an activation function $\phi \in \Phi$, respectively. Here, the set of allowed activations $\Phi$ consists of all piece-wise linear functions with at most three pieces, where at least one piece is constant. Let $\bar{\mathcal{T}}^{h,k,r}_{enc}$ denote the function class corresponding to the sequence-to-sequence functions defined by the modified Transformer encoders. The following result establishes that the modified Transformer encoders in $\bar{\mathcal{T}}^{2,1,1}_{enc}$ can closely approximate functions in $\bar{\mathcal{F}}_{enc}(\delta)$.

Proposition A.1. For each $\bar{f} \in \bar{\mathcal{F}}_{enc}(\delta)$ and $1 \le p < \infty$, there exists $\bar{g} \in \bar{\mathcal{T}}^{2,1,1}_{enc}$ such that $d_p(\bar{f}, \bar{g}) = O(\delta^{d/p})$.
Step 3: Approximate modified Transformer encoders with (original) Transformer encoders. Finally, we show that $\bar{g} \in \bar{\mathcal{T}}^{2,1,1}_{enc}$ can be approximated by $\mathcal{T}^{2,1,4}_{enc}$. Let $g \in \mathcal{T}^{2,1,4}_{enc}$ be such that $d_p(\bar{g}, g) \le \epsilon/3$.

Theorem A.1 now follows from these three steps, because we have

$$d_p(f, g) \le d_p(f, \bar{f}) + d_p(\bar{f}, \bar{g}) + d_p(\bar{g}, g) \le 2\epsilon/3 + O(\delta^{d/p}). \tag{19}$$

Choosing $\delta$ small enough ensures that $d_p(f, g) \le \epsilon$.
A.3. Proof Sketch of Proposition A.1 (Yun et al., 2020)

The proof of Proposition A.1 is decomposed into three steps:

Sub-step 1: Quantization by feed-forward layers. Given an input $X \in \mathbb{R}^{d\times n}$, a series of feed-forward layers in the modified Transformer encoder can quantize $X$ to an element $L$ on the extended grid $\mathbb{G}^+_\delta := \{-\delta^{-nd}, 0, \delta, \ldots, 1-\delta\}^{d\times n}$.

Sub-step 2: Contextual mapping by self-attention layers. Next, a series of self-attention layers in the modified Transformer encoder can take the input $L$ and implement a contextual mapping $q : \mathbb{G}_\delta \to \mathbb{R}^n$ such that, for $L$ and $L'$ that are not permutations of each other, all the elements of $q(L)$ and $q(L')$ are distinct.

Sub-step 3: Function value mapping by feed-forward layers. Finally, a series of feed-forward layers in the modified Transformer encoder can map the elements of the contextual embedding $q(L)$ to the desired output value of $\bar{f} \in \bar{\mathcal{F}}_{enc}(\delta)$ at the input $X$.
A.4. Non-autoregressive Transformers are Universal Approximators of Sequence-to-Sequence Functions
In this paper, we take a further step and show that non-autoregressive Transformers are universal approximators of all continuous functions that map a compact domain in $\mathbb{R}^{d\times n}$ to $\mathbb{R}^{d\times m}$, where $n$ and $m$ can differ.

We start by stating the formal form of Theorem 4.1 from the main text. In the non-autoregressive conditional sequence generation problem, the target function class $\mathcal{F}_{s2s}$ is the set of all continuous sequence-to-sequence functions with compact support that map a compact domain in $\mathbb{R}^{d\times n}$ to $\mathbb{R}^{d\times m}$, where $n$ and $m$ can differ. Given two functions $f_1, f_2 : \mathbb{R}^{d\times n} \to \mathbb{R}^{d\times m}$ and $1 \le p < \infty$, similarly to Eq. 16, we define the distance between them as

$$d_p(f_1, f_2) := \left( \int \| f_1(X) - f_2(X) \|_p^p \, dX \right)^{1/p}. \tag{20}$$

With the definition of non-autoregressive Transformers in Eq. 15, we have the following result:

Theorem A.2. Let $1 \le p < \infty$ and $\epsilon > 0$. Then, for any given $f \in \mathcal{F}_{s2s}$, there exists a non-autoregressive Transformer $g \in \mathcal{T}^{2,1,4}$ such that $d_p(f, g) \le \epsilon$.
The proof of Theorem A.2 can be done in a similar way to that of Theorem A.1. In particular, Steps 1 and 3 of the proof of Theorem A.1 can be used here seamlessly; we refer the reader to Yun et al. (2020) for their detailed proofs.

Step 2 of the proof of Theorem A.2 is slightly different. Basically, we need to prove the following result:

Proposition A.2. For each $\bar{f} \in \bar{\mathcal{F}}_{s2s}(\delta)$ and $1 \le p < \infty$, there exists $\bar{g} \in \bar{\mathcal{T}}^{2,1,1}$ such that $d_p(\bar{f}, \bar{g}) = O(\delta^{d/p})$,
where $\bar{\mathcal{F}}_{s2s}(\delta)$ and $\bar{\mathcal{T}}^{2,1,1}$ are defined analogously to $\bar{\mathcal{F}}_{enc}(\delta)$ and $\bar{\mathcal{T}}^{2,1,1}_{enc}$, respectively. As in the proof of Proposition A.1, the proof of Proposition A.2 can be decomposed into three steps:

Sub-step 1*: Quantization by feed-forward layers in the encoder. Given an input $X \in \mathbb{R}^{d\times n}$, a series of feed-forward layers in the encoder of the modified non-autoregressive Transformer can quantize $X$ to an element $L$ on the extended grid $\mathbb{G}^+_\delta := \{-\delta^{-nd}, 0, \delta, \ldots, 1-\delta\}^{d\times n}$.

Sub-step 2*: Contextual mapping by attention layers in the encoder and the decoder. Next, a series of attention layers in the encoder and decoder of the modified non-autoregressive Transformer can take the input $L$ and implement a contextual mapping $q : \mathbb{G}_\delta \to \mathbb{R}^m$ such that, for $L$ and $L'$ that are not permutations of each other, all the elements of $q(L)$ and $q(L')$ are distinct.

Sub-step 3*: Function value mapping by feed-forward layers in the decoder. Finally, a series of feed-forward layers in the decoder of the modified non-autoregressive Transformer can map the elements of the contextual embedding $q(L)$ to the desired output value of $\bar{f} \in \bar{\mathcal{F}}_{s2s}(\delta)$ at the input $X$.

Since Sub-steps 1* and 3* are exactly the same as Sub-steps 1 and 3 in the proof of Proposition A.1, we refer the reader to Yun et al. (2020) for their detailed proofs and only provide the proof of Sub-step 2*.
A.5. Proof of Sub-step 2 * in Proposition A.2
Without loss of generality, we can assume that the compact support of $f$ is contained in $[0, 1]^{d\times n}$. Following Yun et al. (2020), we choose

$$E_1 = \begin{bmatrix} 0 & 1 & 2 & \cdots & n-1 \\ 0 & 1 & 2 & \cdots & n-1 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 1 & 2 & \cdots & n-1 \end{bmatrix} \quad\text{and}\quad E_2 = \begin{bmatrix} 0 & 1 & 2 & \cdots & m-1 \\ 0 & 1 & 2 & \cdots & m-1 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 1 & 2 & \cdots & m-1 \end{bmatrix}.$$
By Sub-step 1*, we quantize any input $X + E_1$ to its quantized version using the feed-forward layers in the Transformer encoder. We call this quantized version $L$:

$$L \in [0 : \delta : 1-\delta]^d \times [1 : \delta : 2-\delta]^d \times \cdots \times [n-1 : \delta : n-\delta]^d.$$
We do not need to quantize $E_2$ in Sub-step 1* with the feed-forward layers in the Transformer decoder, because $E_2$ is already quantized.
As done in Lemma 6 of Yun et al. (2020), we define $u := (1, \delta^{-1}, \ldots, \delta^{-d+1})$ and $l_j := u^T L_{:,j}$ for all $j \in [n]$. Next, following the construction in Appendix C.2 of Yun et al. (2020), with $n(1/\delta)^d$ self-attention layers in the Transformer encoder we can obtain $l_1 < l_2 < \cdots < l_n$ such that the map from $L$ to $l_n$ is one-to-one. In addition, $l_n$ is bounded by $n\delta^{-(n+1)d}$.
Finally, in a similar way to Appendix B.5.1 of Yun et al. (2020), we add an extra encoder-decoder attention layer with attention part $n\delta^{-(n+1)d-1}\Psi(\cdot\,; 0)$. This layer shifts all the layers in the Transformer decoder by $n\delta^{-(n+1)d-1} l_n$. We define the output of this layer as $g_c(L)$. In this way, we ensure that different contexts $L$ are mapped to distinct numbers in $u^T g_c(L)$, thus implementing a contextual mapping.
B. Proof of Proposition 5.1
We prove this proposition by contradiction. Assume that the $k$-th most likely label at position $i$ is chosen in the optimal decoding and $k > 3$; we consider the following two cases.

Case 1: $i = 0$ is the first position or $i = n-1$ is the last position. Without loss of generality, assume $i = 0$. Denote the labels at the first and second positions of the current decoding by $l^*_0$ and $l^*_1$, and denote the top-2 labels at position 0 by $l_{0,1}$ and $l_{0,2}$. If $l_{0,1} = l^*_1$, we can set $l^*_0$ to $l_{0,2}$, which yields a label sequence with higher likelihood under the CRF model; otherwise, we can set $l^*_0$ to $l_{0,1}$. In both cases, choosing $k > 3$ is not optimal.

Case 2: $i$ is neither the first nor the last position. Denote the labels at the positions before and after $i$ by $l^*_{i-1}$ and $l^*_{i+1}$, and the $j$-th most likely label at position $i$ by $l_{i,j}$. In this case, we can always find some $j \le 3$ such that $l_{i,j} \ne l^*_{i-1}$ and $l_{i,j} \ne l^*_{i+1}$. Therefore, $k > 3$ is again not optimal. Fig. 2 shows how the proposed optimal de-duplicated decoding method avoids the sub-optimal decoding problem of the post-de-duplication method.
C. Illustration of Different Decoding Approaches
D. Model Settings
The Non-Autoregressive Transformer (NAT) model is developed using the same general encoder-decoder framework as the Autoregressive Transformer (AT) model. Fig. 3 shows the architectures of NAT and AT. We use a simplified version of the NAT model of Gu et al. (2017); that is, we do not copy the source embeddings as the input to the Transformer decoder and do not use the positional attention proposed in Gu et al. (2017). The input to our Transformer decoder is simply padding symbols. More details about the architectures can be found in Vaswani et al. (2017) and Gu et al. (2017).
We use four model settings in our experiments: toy, small, base, and large. The detailed configurations of these four settings can be found in Tab. 8.

Table 9. Examples in the training data for the NAR model on the WMT14 De-En task.

Source: , um den Korb zu verkleinern ( bis 75 % ) und in die Ecke zu schieben .
Ground Truth: to resize the fragment ( by 75 % ) and move it to the lower right corner .
Iteration 0: to reduce the basket ( up to 75 % ) and move it to the corner .
Iteration 1: to reduce the basket ( up to 75 % ) and push it into the corner .
Iteration 2: to reduce the basket ( up to 75 % ) and put it into the corner .

Source: In den Interviews betonten viele Männer , dass ihre Erwerbsabweichung ihre Karriere behindere .
Ground Truth: In the interviews , many men emphasized that their employment deviation has hindered their careers .
Iteration 0: In the interviews , many men emphasized that their divorce in employment hinders their careers .
Iteration 1: In the interviews , many men stressed that their deviation from employment hindered their careers .
Iteration 2: In the interviews , many men stressed that their deviation in employment hinders their careers .

Source: Um einen Pferd gesund und munter zu halten , müssen Sie seine physischen Bedürfnisse beachten .
Ground Truth: To keep your horse well , healthy and content you must satisfy its physical needs .
Iteration 0: In order to keep a horse healthy and healthy , you must take into account its physical needs .
Iteration 1: In order to keep a horse healthy and cheerful , you must take into account its physical needs .
Iteration 2: In order to keep a horse healthy and cheerful , you must take into account your physical needs .

Source: Der Effekt von ' Eiskältefalle ' wird nun bei erlittenem Schaden abgebrochen .
Ground Truth: Freezing Trap now breaks on damage .
Iteration 0: Ice Cat Trap effect will now be discarded if damage is dealt .
Iteration 1: The effect of Ice Cage Trap will now be aborted in case of damage suffered .
Iteration 2: The effect of ice cold trap is now aborted in the event of damage suffered .

Source: Wir haben in der Europäischen Union Möglichkeiten , wirksam gegen die Arbeitslosigkeit vorzugehen und zwar so , daß man da , wo es am nötigsten ist , auch etwas davon spürt .
Ground Truth: We have the opportunity , in the EU , to do something that will have a positive effect on unemployment , characterised by taking action where there is the greatest need .
Iteration 0: We in the European Union have the means to combat unemployment effectively , in such a way that we feel something of it where it is most necessary .
Iteration 1: We in the European Union have opportunities to take effective action against unemployment , in such a way that we can feel something about it where it is most necessary .
Iteration 2: We in the European Union have opportunities to take effective action against unemployment , in such a way that we also feel something of it where it is most necessary .

Source: In der Altstadt sind die Gassen so eng und verwinkelt , dass ein Auto nur mühsam vorankommt .
Ground Truth: Kaneo Settlement - Start the walk to Kaneo from St. Sophia church .
Iteration 0: In the old town , the streets are so narrow and winding that a car can only progress with difficulty .
Iteration 1: In the old town , the streets are so narrow and winding that a car is progressing hard .
Iteration 2: In the old town , the streets are so narrow and winding that a car is only progressing laboriously .

Source: Sie müssen sich nur einmal die weltweit steigende Anzahl und Häufigkeit von Naturkatastrophen ansehen , um die Folgen der Klimaveränderung zu erkennen .
Ground Truth: They only need to look at the increasing number and frequency of natural disasters worldwide to see its impact .
Iteration 0: You only have to look at the increasing number and frequency of natural disasters worldwide to see the consequences of climate change .
Iteration 1: They only have to look at the increasing number and frequency of natural disasters around the world to see the consequences of climate change .
Iteration 2: They only have to look at the increasing number and frequency of natural disasters around the world in order to identify the consequences of climate change .
Figure 1. The proposed framework: the AR and NAR models are jointly optimized by alternating between an E-step and an M-step. See Sec. 5.1 for a detailed explanation.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Wang, Y., Tian, F., He, D., Qin, T., Zhai, C., and Liu, T.-Y. Non-autoregressive machine translation with auxiliary regularization. arXiv preprint arXiv:1902.10245, 2019.

Wei, B., Wang, M., Zhou, H., Lin, J., and Sun, X. Imitation learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1906.02041, 2019.

Williams, R. J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Wu, L., Tian, F., Qin, T., Lai, J., and Liu, T.-Y. A study of reinforcement learning for neural machine translation. arXiv preprint arXiv:1808.08866, 2018.

Yun, C., Bhojanapalli, S., Rawat, A. S., Reddi, S., and Kumar, S. Are transformers universal approximators of sequence-to-sequence functions? In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=ByxRM0Ntvr.
Figure 3. The architectures of the Autoregressive Transformer and the Non-autoregressive Transformer used in this paper.
Table 1. Toy examples illustrating the two types of synthetic experiments.

Experiment I (deterministic targets):
  source: 2 1 4 3   →   target: 2 2 1 4 4 4 4 3 3 3
  source: 2 2 3     →   target: 2 2 2 2 3 3 3
  source: 2 1 5     →   target: 2 2 1 5 5 5 5 5

Experiment II (four 0 tokens randomly split between the front and the back):
  source: 2 1 4 3   →   target: 0 2 2 1 4 4 4 4 3 3 3 0 0 0
  source: 2 2 3     →   target: 0 0 0 2 2 2 2 3 3 3 0
  source: 2 1 5     →   target: 2 2 1 5 5 5 5 5 0 0 0 0
Table 2. Whole-sentence matching accuracy of the AR and NAR models over 1000 synthetic examples.

              Experiment I        Experiment II
Model         AR      NAR         AR      NAR
Accuracy (%)  99.9    95.7        99.8    0.0
Table 3. BLEU scores on the WMT14 En-De/De-En and IWSLT14 De-En tasks for single-pass NAR models. "/" denotes results not reported in the original paper. Transformer (Vaswani et al., 2017) results are based on our own reproduction.

Models                         WMT14 En-De  WMT14 De-En  IWSLT14 De-En  Latency   Speedup
Autoregressive teacher model
Transformer (beam size 5)      27.84        32.14        34.69          393ms     1.00×
Non-autoregressive models
NAT-FT                         17.69        21.47        /              39ms†     15.6׆
LT                             19.80        /            /              105ms†    /
ENAT                           20.65        23.02        24.13          24ms†     25.3׆
NAT-BAN                        21.47        /            /              /         /
NAT-REG                        20.65        24.77        23.89          22ms†     27.6׆
NAT-HINT                       21.11        25.24        25.55          26ms†     30.2׆
NAT-CTC                        17.68        19.80        /              350ms†    3.42׆
FlowSeq-base                   21.45        26.16        27.55          /         /
ReorderNAT                     22.79        27.28        /              /         16.1׆
NAT-CRF                        23.44        27.22        27.44          37ms†     10.4׆
Ours
NAT baseline                   19.55        23.44        22.35          22ms      17.9×
+ EM training                  23.27        26.73        29.38          22ms      17.9×
+ ODD decoding                 24.54        27.93        30.69          24ms      16.4×
Table 4. BLEU scores on the WMT14 En-De/De-En and IWSLT14 De-En tasks for NAR models with rescoring or iterative refinement. "/" denotes results not reported in the original paper. Transformer (Vaswani et al., 2017) results are based on our own reproduction.

Models                         WMT14 En-De  WMT14 De-En  IWSLT14 De-En  Latency   Speedup
Autoregressive teacher model
Transformer (beam size 5)      27.84        32.14        34.69          393ms     1.00×
Non-autoregressive models
NAT-FT (rescore 10)            18.66        22.41        /              79ms†     7.68׆
LT (rescore 10)                21.00        /            /              /         /
ENAT (rescore 9)               24.28        26.10        27.30          49ms†     12.4׆
NAT-REG (rescore 9)            24.61        28.90        28.04          40ms†     15.1׆
NAT-HINT (rescore 9)           25.20        29.52        28.80          44ms†     17.8׆
FlowSeq-base (rescore 15)      23.08        28.07        /              /         1.04׆
FlowSeq-large (rescore 15)     25.03        30.48        /              /         0.96׆
ReorderNAT (rescore 7)         24.74        29.11        /              /         7.40׆
NAT-CRF (rescore 9)            26.07        29.68        29.99          63ms†     6.14׆
NAT-IR (10 iterations)         21.61        25.48        /              222ms†    1.88׆
CMLM (4 iterations)            25.94        29.90        /              /         3.05׆
CMLM (10 iterations)           27.03        30.53        /              /         1.30׆
LevT (7+ iterations)           27.27        /            /              /         3.55׆
Ours
NAT baseline (rescore 9)       22.24        26.21        29.64          41ms      9.59×
+ EM training                  25.03        28.78        31.74          41ms      9.59×
+ ODD decoding                 25.75        29.29        32.66          43ms      9.14×
+ larger model                 26.21        29.81        33.25          65ms      6.05×
Table 5. Analysis of the convergence of the EM algorithm on WMT14 En-De test BLEU. NCM denotes the Normalized Corpus-level Multi-modality. All models are evaluated without ODD decoding and rescoring. Iterations t = 1, 2, 3, 4, 5* are performed without quality constraints; iterations t = 5, 6 are re-started from t = 4 with quality constraints.

        NAR Model  AR Model  NCM
t = 1   19.55      27.84     2.88
t = 2   22.27      27.50     2.33
t = 3   22.85      27.13     2.24
t = 4   23.27      26.78     2.16
t = 5*  22.86      26.11     2.04
t = 5   23.18      26.72     2.12
t = 6   23.16      26.75     2.11
Table 6. Analysis of amortized inference for iterations t = 2, 3, 4 on WMT14 En-De test BLEU. All models are evaluated without ODD decoding and rescoring. We show the results of NAR models trained on different data in the M-step.

        amortized  non-amortized
t = 2   22.27      21.78
t = 3   22.85      22.44
t = 4   23.27      22.98
6.6. Analysis of the Optimal De-duplicated Decoding
Table 7. Analysis of ODD decoding on test BLEU. All models are trained with our EM algorithm.

                       WMT En-De  WMT De-En  IWSLT
post-de-duplication    23.67      26.93      24.96
  + rescoring 9        25.56      28.92      32.03
ODD decoding           24.54      27.93      30.69
  + rescoring 9        25.75      29.29      32.66
Footnotes:
1. For example, Google Translate only provides one translation for the input text.
2. Although this method does not allow any word duplication in the output sequence, it is still able to produce any sequence. To handle cases in which multiple consecutive occurrences of a token are genuinely needed, a special "concat" symbol can be introduced; for example, "very very good" can be represented as "very concat very good".
3. https://wit3.fbk.eu/
4. http://statmt.org/wmt14/translation-task.html
5. We follow common practice in previous works to make a fair comparison. Specifically, we use tokenized case-sensitive BLEU for the WMT datasets and case-insensitive BLEU for the IWSLT datasets.
6. † in Tab. 3 and Tab. 4 indicates that the latency and the speedup may be affected by hardware settings and are thus not directly comparable.
7. The bias b is omitted for all matrix multiplication operations for brevity.
Acknowledgements

We thank the reviewers for their helpful comments. This work is supported in part by the National Science Foundation (NSF) under grant IIS-1546329.

Figure 2. Illustration of different decoding methods. Darker boxes represent higher likelihood. The argmax decoding method produces "An approach approach" as the result, which contains word duplication. The empirical post-de-duplication method can solve the word duplication problem, but after collapsing, the length of the target sequence is changed; this causes a discrepancy between the predicted target length and the actual sequence length and thus makes the final output sub-optimal. The proposed Optimal De-Duplicated (ODD) decoding produces the optimal prediction in the CRF framework. Note that ODD decoding only needs to consider the top-3 labels for each position in the forward-backward algorithm, which is very efficient.

E. Analysis of Training Examples for the NAR Model

In Tab. 9, we present randomly picked examples from the training data for the NAR model on the WMT14 De-En task. We find that the proposed EM algorithm constantly changes the training examples for the NAR model.

F. Analysis of Translation Results

In Tab. 10, we present randomly picked translation outputs from the test set of WMT14 De-En. We have the following observations:
• The proposed ODD decoding method preserves the originally predicted number of tokens, which avoids the sub-optimality of the post-de-duplication method.
• The proposed EM algorithm can effectively optimize both the AR model and the NAR model jointly. During the EM iterations, the multi-modality in the AR models is reduced, while the translation quality of the NAR models is improved.

Table 10. Examples of translation outputs on the WMT14 De-En task. We do not apply rescoring to the NAR model's outputs.

Australian reports read that , in the meantime , it is a holiday holiday the resort of Kraresort in southern Thailand .
NAR model - Iter 2 w/ post de-duplication: Australian reports read that , in the meantime , it is a holiday the resort of Kraresort in southern Thailand .
NAR model - Iter 2 w/ ODD decoding: Australian reports read that , in the meantime , it is a holiday in the resort of Kraresort in southern Thailand .
| [] |
[
"What is the Essence of a Claim? Cross-Domain Claim Identification",
"What is the Essence of a Claim? Cross-Domain Claim Identification"
] | [
"Johannes Daxenberger \nDepartment of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n\n",
"Steffen Eger \nDepartment of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n\n",
"Ivan Habernal \nDepartment of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n\n",
"Christian Stab \nDepartment of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n\n"
] | [
"Department of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n",
"Department of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n",
"Department of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n",
"Department of Computer Science\nIryna Gurevych † ‡ † Ubiquitous Knowledge Processing Lab (UKP-TUDA\nTechnische Universität Darmstadt\n"
] | [
"Natural Language Processing"
] | Argument mining has become a popular research area in NLP. It typically includes the identification of argumentative components, e.g. claims, as the central component of an argument. We perform a qualitative analysis across six different datasets and show that these appear to conceptualize claims quite differently. To learn about the consequences of such different conceptualizations of claim for practical applications, we carried out extensive experiments using state-of-the-art feature-rich and deep learning systems, to identify claims in a cross-domain fashion. While the divergent conceptualization of claims in different datasets is indeed harmful to cross-domain classification, we show that there are shared properties on the lexical level as well as system configurations that can help to overcome these gaps. | 10.18653/v1/d17-1218 | [
"https://www.aclweb.org/anthology/D17-1218.pdf"
] | 11,014,757 | 1704.07203 | 21887b77472dfa7bf415ce42dbc39984c99b0003 |
What is the Essence of a Claim? Cross-Domain Claim Identification
Johannes Daxenberger†, Steffen Eger†, Ivan Habernal†, Christian Stab†, Iryna Gurevych†‡
† Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt
‡ Ubiquitous Knowledge Processing Lab (UKP-DIPF), German Institute for Educational Research and Educational Information
Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, September 7-11, 2017
Argument mining has become a popular research area in NLP. It typically includes the identification of argumentative components, e.g. claims, as the central component of an argument. We perform a qualitative analysis across six different datasets and show that these appear to conceptualize claims quite differently. To learn about the consequences of such different conceptualizations of claim for practical applications, we carried out extensive experiments using state-of-the-art feature-rich and deep learning systems, to identify claims in a cross-domain fashion. While the divergent conceptualization of claims in different datasets is indeed harmful to cross-domain classification, we show that there are shared properties on the lexical level as well as system configurations that can help to overcome these gaps.
Introduction
The key component of an argument is the claim. This simple observation has not changed much since the early works on argumentation by Aristotle more than two thousand years ago, although argumentation scholars provide us with a plethora of often clashing theories and models (van Eemeren et al., 2014). Despite the lack of a precise definition in contemporary argumentation theory, Toulmin's influential work on argumentation in the 1950s introduced a claim as an 'assertion that deserves our attention' (Toulmin, 2003, p. 11); recent works describe a claim as 'a statement that is in dispute and that we are trying to support with reasons' (Govier, 2010).
Argument mining, a computational counterpart of manual argumentation analysis, is a recent, growing sub-field of NLP (Peldszus and Stede, 2013a). 'Mining' arguments usually involves several steps like separating argumentative from non-argumentative text units, parsing argument structures, and recognizing argument components such as claims, the main focus of this article. Claim identification itself is an important prerequisite for applications such as fact checking (Vlachos and Riedel, 2014), politics and legal affairs (Surdeanu et al., 2010), and science (Park and Blake, 2012).
Although claims can be identified with a promising level of accuracy in typical argumentative discourse such as persuasive essays (Stab and Gurevych, 2014;Eger et al., 2017), less homogeneous resources, for instance online discourse, pose challenges to current systems (Habernal and Gurevych, 2017). Furthermore, existing argument mining approaches are often limited to a single, specific domain like legal documents (Mochales-Palau and Moens, 2009), microtexts (Peldszus and Stede, 2015), Wikipedia articles (Levy et al., 2014;Rinott et al., 2015) or student essays (Stab and Gurevych, 2017). The problem of generalizing systems or features and their robustness across heterogeneous datasets thus remains fairly unexplored.
This situation motivated us to perform a detailed analysis of the concept of claims (as a key component of an argument) in existing argument mining datasets from different domains. 1 We first review and qualitatively analyze six existing publicly available datasets for argument mining (§3), showing that the conceptualizations of claims in these datasets differ largely. In a next step, we analyze the influence of these differences for cross-domain claim identification. We propose several computational models for claim identification, including systems using linguistically motivated features (§4.1) and recent deep neural networks (§4.2), and rigorously evaluate them on and across all datasets (§5). Finally, in order to better understand the factors influencing the performance in a cross-domain scenario, we perform an extensive quantitative analysis on the results (§6).
Our analysis reveals that despite obvious differences in conceptualizations of claims across datasets, there are some shared properties on the lexical level which can be useful for claim identification in heterogeneous or unknown domains. Furthermore, we found that the choice of the source (training) domain is crucial when the target domain is unknown. We release our experimental framework to help other researchers build upon our findings. 2
Related Work
Existing approaches to argument mining can be roughly categorized into (a) multi-document approaches which recognize claims and evidence across several documents and (b) discourse level approaches addressing the argumentative structure within a single document. Multi-document approaches have been proposed e.g. by Levy et al. (2014) and Rinott et al. (2015) for mining claims and corresponding evidence for a predefined topic over multiple Wikipedia articles. Nevertheless, to date most approaches and datasets deal with single-document argumentative discourse. This paper takes the discourse level perspective, as we aim to assess multiple datasets from different authors and compare their notion of 'claims'. Mochales-Palau and Moens (2009) experiment at the discourse level using feature-rich SVM and a hand-crafted context-free grammar in order to recognize claims and premises in legal decisions. Their best results for claims achieve 74.1% F1 using domain-dependent key phrases, token counts, location features, information about verbs, and the tense of the sentence. Peldszus and Stede (2015) present an approach based on a minimum spanning tree algorithm and model the global structure of arguments considering argumentative relations, the stance and the function of argument components. Their approach yields 86.9% F1 for recognizing claims in English 'microtexts'. Habernal and Gurevych (2017) cast argument component identification as BIO sequence labeling and jointly model separation of argumentative from non-argumentative text units and identification of argument component boundaries together with their types. They achieved 25.1% Macro-F1 with a combination of topic, sentiment, semantic, discourse and embedding features using structural SVM. Stab and Gurevych (2014) identified claims and other argument components in student essays. They experiment with several classifiers and achieved the best performance of 53.8% F1 score using SVM with structural, lexical, syntactic, indicator and contextual features. Although the above-mentioned approaches achieve promising results in particular domains, their ability to generalize over heterogeneous text types and domains remains unanswered. Rosenthal and McKeown (2012) set out to explore this direction by conducting cross-domain experiments for detecting claims in blog articles from LiveJournal and discussions taken from Wikipedia. However, they focused on relatively similar datasets that both stem from the social media domain and in addition annotated the datasets themselves, leading to an identical conceptualization of the notion of claim. Although Al-Khatib et al. (2016) also deal with cross-domain experiments, they address a different task, namely the identification of argumentative sentences. Further, their goals are different: they want to improve argumentation mining via distant supervision rather than detecting differences in the notions of a claim.
Domain adaptation techniques (Daume III, 2007) try to address the frequently observed drop in classifier performance entailed by a dissimilarity of training and test data distributions. Since techniques such as learning generalized cross-domain representations in an unsupervised manner (Blitzer et al., 2006; Pan et al., 2010; Glorot et al., 2011; Yang and Eisenstein, 2015) have been criticized for targeting specific source and target domains, it has alternatively been proposed to learn universal representations from general domains in order to render a learner robust across all possible domain shifts (Müller and Schütze, 2015; Schnabel and Schütze, 2013). Our approach is in a similar vein. However, rather than trying to improve classifier performance for a specific source-target domain pair, we want to detect differences between these pairs. Furthermore, we are looking for universal feature sets or classifiers that perform generally well for claim identification across varying source and target domains.
Claim Identification in Computational Argumentation
We briefly describe six English datasets used in our empirical study; they all capture claims on the discourse level. Table 1 summarizes the dataset statistics relevant to claim identification.
Datasets
The AraucariaDB corpus (Reed et al., 2008) includes various genres (VG) such as newspaper editorials, parliamentary records, or judicial summaries. The annotation scheme structures arguments as trees and distinguishes between claims and premises at the clause level. Although the reliability of the annotations is unknown, the corpus has been extensively used in argument mining (Moens et al., 2007; Feng and Hirst, 2011; Rooney et al., 2012). The corpus from Habernal and Gurevych (2017) includes user-generated web discourse (WD) such as blog posts or user comments annotated with claims and premises as well as backings, rebuttals and refutations (αU = 0.48), inspired by Toulmin's model of argument (Toulmin, 2003).
The persuasive essay (PE) corpus (Stab and Gurevych, 2017) includes 402 student essays. The scheme comprises major claims, claims and premises at the clause level (αU = 0.77). The corpus has been extensively used in the argument mining community (Persing and Ng, 2015; Lippi and Torroni, 2015; Nguyen and Litman, 2016).
Biran and Rambow (2011a) annotated claims and premises in online comments (OC) from blog threads of LiveJournal (κ = 0.69). In a subsequent work, Biran and Rambow (2011b) applied their annotation scheme to documents from Wikipedia talk pages (WTP) and annotated 118 threads. For our experiments, we consider each user comment in both corpora as a document, which yields 2,805 documents in the OC corpus and 1,985 documents in the WTP corpus.
Peldszus and Stede (2016) created a corpus of German microtexts (MT) of controlled linguistic and rhetoric complexity. Each document includes a single argument and does not exceed five argument components. The scheme models the argument structure and distinguishes between premises and claims, among other properties (such as proponent/opponent or normal/example). In the first annotation study, 26 untrained annotators annotated 23 microtexts in a classroom experiment (κ = 0.38) (Peldszus and Stede, 2013b). In a subsequent work, the corpus was largely extended by expert annotators (κ = 0.83). Recently, they translated the corpus to English, resulting in the first parallel corpus in computational argumentation; our experiments rely on the English version.
Qualitative Analysis of Claims
In order to investigate how claim annotations are tackled in the chosen corpora, one co-author of this paper manually analyzed 50 randomly sampled claims from each corpus. The characteristics taken into account are drawn from argumentation theory (Schiappa and Nordin, 2013) and include among other things the claim type, signaling words and discourse markers.
Biran and Rambow (2011b) do not back-up their claim annotations by any common argumentation theory but rather state that claims are utterances which convey subjective information and anticipate the question 'why are you telling me that?' and need to be supported by justifications. Using this rather loose definition, a claim might be any subjective statement that is justified by the author. Detailed examination of the LiveJournal corpus (OC) revealed that sentences with claims are extremely noisy. Their content ranges from a single word, ("Bastard."), to emotional expressions of personal regret, ("::hugs:: i am so sorry hon ..") to general Web-chat nonsense ("W-wow... that's a wicked awesome picture... looks like something from Pirates of the Caribbean...gone Victorian ...lolz.") or posts without any clear argumentative purpose ("what i did with it was make this recipe for a sort of casserole/stratta (i made this up, here is the recipe) [...] butter, 4 eggs, salt, pepper, sauted onions and cabbage..add as much as you want bake for 1 hour at 350 it was seriously delicious!"). The Wikipedia Talk Page corpus (WTP) contains claims typical to Wikipedia quality discussions ("That is why this article has NPOV issues.") and policy claims (Schiappa and Nordin, 2013) are present as well ("I think the gallery should be got rid of altogether."). However, a small number of nonsensical claims remains ("A dot.").
Analysis of the MT dataset revealed that about half of claim sentences contain the modal verb 'should', clearly indicating policy claims ("The death penalty should be abandoned everywhere."). Such statements also very explicitly express the stance on the controversial topic of interest. In a similar vein, claims in persuasive students' essays (PE) heavily rely on phrases signaling beliefs ("In my opinion, although using machines have many benefits, we cannot ignore its negative effects.") or argumentative discourse connectors whose usage is recommended in textbooks on essay writing ("Thus, it is not need for young people to possess this ability."). Most claims are value/policy claims written in the present tense.
The mixture of genres in the AraucariaDB corpus (VG) is reflected in the variety of claims. While some are simple statements starting with a discourse marker ("Therefore, 10% of the students in my logic class are left-handed."), there are many legal-specific claims requiring expert knowledge ("In considering the intention of Parliament when passing the 1985 Act, or perhaps more properly the intention of the draftsman in settling its terms, there are [...]"), reported and direct speech claims ("Eight-month-old Kyle Mutch's tragic death was not an accident and he suffered injuries consistent with a punch or a kick, a court heard yesterday."), and several nonsensical claims ("RE: Does the Priest Scandal Reveal the Beast?") which undercut the consistency of this dataset.
The web-discourse (WD) claims take a clear stance on the relevant controversy ("I regard single sex education as bad."), yet are sometimes anaphoric ("My view on the subject is no."). Discourse markers are seldom used. Habernal and Gurevych (2017) investigated hedging in claims and found that it varies with respect to the topic being discussed (10% up to 35% of claims are hedged). Sarcasm or rhetorical questions are also common ("In 2013, is single-sex education really the way to go?").
These observations make clear that annotating claims, the central part of all arguments as suggested by the majority of argumentation scholars, can be approached very differently when it comes to actual empirical, data-driven operationalization. While some traits are shared, such as that claims usually need some support to make up a 'full' argument (e.g., premises, evidence, or justifications), the exact definition of a claim can be arbitrary, depending on the domain, register, or task.
Methodology
Given the results from the qualitative analysis, we want to investigate whether the different conceptualizations of claims can be assessed empirically and if so, how they could be dealt with in practice. Put simply, the task we are trying to solve in the following is: given a sentence, classify whether or not it contains a claim. We opted to model the claim identification task on sentence level, as this is the only way to make all datasets compatible to each other. Different datasets model claim boundaries differently, e.g. MT includes discourse markers within the same sentence, whereas they are excluded in PE.
All six datasets described in the previous section have been preprocessed by first segmenting documents into sentences using Stanford CoreNLP (Manning et al., 2014) and then annotating every sentence as claim, if one or more tokens within the sentence were labeled as claim (or major claim in PE). Analogously, each sentence is annotated as non-claim, if none of its tokens were labeled as claim (or major claim). Although our basic units of interest are sentences, we keep the content of the entire document to be able to retrieve information about the context of (non-)claims. 3 We are not interested in optimizing the properties of a certain learner for this task, but rather want to compare the influence of different types of lexical, syntactical, and other kinds of information across datasets. 4 Thus, we used a limited set of learners for our task: a) a standard L2-regularized logistic regression approach with manually defined feature sets 5 , which is a simple yet robust and established technique for many text classification problems (Plank et al., 2014;He et al., 2015;Zhang et al., 2016a;Ferreira and Vlachos, 2016); and b) several deep learning approaches, using state-of-the-art neural network architectures.
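As a concrete illustration of this relabeling step, the following sketch (with toy data structures, not the original corpus readers or the CoreNLP pipeline) maps token-level component labels to sentence-level claim/non-claim labels:

```python
def sentence_labels(doc_sentences):
    """doc_sentences: list of sentences, each a list of (token, tag) pairs,
    where tag is e.g. 'Claim', 'MajorClaim', 'Premise' or 'O'.
    A sentence becomes a positive instance if any token carries a claim label."""
    labels = []
    for sent in doc_sentences:
        is_claim = any(tag in ("Claim", "MajorClaim") for _, tag in sent)
        labels.append("claim" if is_claim else "non-claim")
    return labels

example = [
    [("Cloning", "O"), ("should", "Claim"), ("be", "Claim"), ("banned", "Claim"), (".", "O")],
    [("It", "Premise"), ("is", "Premise"), ("risky", "Premise"), (".", "O")],
]
print(sentence_labels(example))  # ['claim', 'non-claim']
```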
The in-domain experiments were carried out in a 10-fold cross-validation setup with fixed splits into training and test data. As for the crossdomain experiments, we train on the entire data of the source domain and test on the entire data of the target domain. In the domain adaptation terminology, this corresponds to an unsupervised setting.
To address class imbalance in our datasets (see Table 1), we downsample the negative class (non-claim) both in-domain and cross-domain, so that the positive and negative classes occur approximately in a 1:1 ratio in the training data. Since this means that we discard a lot of useful information (many negative instances), we repeat this procedure 20 times, in each case randomly discarding instances of the negative class such that the required ratio is obtained. At test time, we use the majority prediction of this ensemble of 20 trained models. With the exception of very few cases, this led to consistent performance improvements across all experiments. The systems are described in more detail in the following subsections. Additionally, we report the results of two baselines. The majority baseline labels all sentences as non-claims (the predominant class in all datasets); the random baseline labels sentences as claims with 0.5 probability.
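The downsampling ensemble can be sketched as follows; this is an illustrative reconstruction with simplified hyperparameters (scikit-learn's LogisticRegression stands in for the liblinear-based setup), not the released experimental framework:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_downsampled_ensemble(X, y, n_models=20, seed=0):
    """Train n_models L2-regularized logistic regressions, each on a balanced
    resample obtained by downsampling the majority (non-claim) class.
    Assumes class 1 = claim (minority) and class 0 = non-claim (majority)."""
    rng = np.random.RandomState(seed)
    pos_idx = np.where(y == 1)[0]
    neg_idx = np.where(y == 0)[0]
    models = []
    for _ in range(n_models):
        neg_sample = rng.choice(neg_idx, size=len(pos_idx), replace=False)
        idx = np.concatenate([pos_idx, neg_sample])
        clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models

def predict_majority(models, X):
    """Label a sentence as a claim if at least half of the ensemble members do."""
    votes = np.stack([m.predict(X) for m in models], axis=0)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```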
Linguistically Motivated Features
For the logistic regression-based experiments (LR) we employed the following feature groups. Structure Features capture the position, the length and the punctuation of a sentence. Lexical Features are lowercased unigrams. Syntax Features account for grammatical information at the sentence level. We include information about the part-of-speech and parse tree for each sentence.
Discourse Features encode information extracted with help of the Penn Discourse Treebank (PDTB) styled end-to-end discourse parser as presented by Lin et al. (2014). Embedding Features represent each sentence as a summation of its word embeddings (Guo et al., 2014). We further experimented with sentiment features (Habernal and Gurevych, 2015;Anand et al., 2011) and dictionary features (Misra et al., 2015;Rosenthal and McKeown, 2015) but these delivered very poor results and are not reported in this article. The full set of features and their parameters are described in the supplementary material to this article. We experiment with the full feature set, individual feature groups, and feature ablation (all features except for one group).
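To make the feature groups more tangible, here is a hedged sketch of a per-sentence feature vector combining a few structure, lexical, and embedding cues. The indicator word list and the input format are illustrative assumptions only; the actual system uses full unigram vocabularies plus part-of-speech, parse-tree and PDTB discourse features.

```python
import numpy as np

CLAIM_INDICATORS = ["should", "think", "believe", "therefore", "opinion"]  # illustrative only

def sentence_features(sentence, position, n_sentences, embeddings, dim=300):
    """Toy feature vector combining a few structure, lexical and embedding cues.
    `embeddings` maps lowercased words to vectors; unknown words are skipped."""
    tokens = [t.lower() for t in sentence.split()]
    # structure features: relative position, length, question/exclamation mark
    struct = [position / max(n_sentences, 1),
              float(len(tokens)),
              float(sentence.strip().endswith("?")),
              float(sentence.strip().endswith("!"))]
    # lexical features: presence of a handful of claim-indicating unigrams
    lexical = [float(w in tokens) for w in CLAIM_INDICATORS]
    # embedding features: summation of word vectors, as in Guo et al. (2014)
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    emb = np.sum(vecs, axis=0) if vecs else np.zeros(dim)
    return np.concatenate([struct, lexical, emb])

vec = sentence_features("Cloning should be banned.", position=0, n_sentences=3,
                        embeddings={"should": np.full(300, 0.1)})
print(vec.shape)  # (309,) = 4 structure + 5 lexical + 300 embedding dimensions
```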
Deep Learning Approaches
As alternatives to our feature-based systems, we consider three deep learning approaches. The first is the Convolutional Neural Net of Kim (2014), which has been shown to perform excellently on many diverse classification tasks such as sentiment analysis and question classification and is still a strong competitor among neural techniques focusing on sentence classification (Komninos and Manandhar, 2016; Zhang et al., 2016b,c). We consider two variants of Kim's CNN, one in which words' vectors are initialized with pre-trained GoogleNews word embeddings (CNN:w2vec) and one in which the vectors are randomly initialized and updated during training (CNN:rand). Our second model is an LSTM (long short-term memory) neural net for sentence classification (LSTM) and our third model is a bidirectional LSTM (BiLSTM).
For all neural network classifiers, we use default hyperparameters concerning hidden dimensionalities (for the two LSTM models), number of filters (for the convolutional neural net), and others. We train each of the three neural networks for 15 iterations and choose in each case the learned model that performs best on a held-out development set of roughly 10% of the training data as the model to apply to unseen test data. This corresponds to an early stopping regularization scheme.
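A compact PyTorch sketch of a Kim-style sentence CNN (roughly the CNN:rand variant, with randomly initialized embeddings) is given below; the hyperparameters follow common defaults and are not claimed to match the authors' exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KimCNN(nn.Module):
    """Sentence classifier in the spirit of Kim (2014): parallel convolutions
    over word embeddings, max-over-time pooling, dropout and a linear output layer."""
    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 filter_sizes=(3, 4, 5), n_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in filter_sizes])
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(n_filters * len(filter_sizes), n_classes)

    def forward(self, token_ids):            # (batch, seq_len)
        x = self.embedding(token_ids)        # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                # (batch, emb_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        x = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(x)                    # logits over {claim, non-claim}

model = KimCNN(vocab_size=20000)
logits = model(torch.randint(1, 20000, (8, 40)))  # 8 sentences, 40 token ids each
```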
Results
In the following, we summarize the results of the various learners described above. Obtaining all results required heavy computation, e.g. the cross-validation experiments for feature-based systems took 56 days of computing. We intentionally do not list the results of previous work on those datasets. The scores are not comparable since we strictly work on sentence level (rather than e.g. clause level) and applied downsampling to the training data. All reported significance tests were conducted using a two-tailed Wilcoxon Signed-Rank Test for matched pairs, i.e. paired F1 scores from two compared systems (Japkowicz and Shah, 2014).
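The significance test can be reproduced with SciPy as sketched below; the paired F1 scores are made-up placeholder values:

```python
from scipy.stats import wilcoxon

# paired F1 scores of two systems over matched evaluation settings (toy values)
f1_system_a = [0.62, 0.71, 0.58, 0.66, 0.74, 0.60]
f1_system_b = [0.59, 0.69, 0.57, 0.65, 0.70, 0.58]

stat, p_value = wilcoxon(f1_system_a, f1_system_b, alternative="two-sided")
print(f"W={stat:.2f}, p={p_value:.3f}")  # a small p (e.g. <= 0.05) indicates a significant difference
```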
In-Domain Experiments
The performance of the learners is quite divergent across datasets, with Macro-F1 scores (described as Fscore_M in Sokolova and Lapalme, 2009) ranging from 60% (WTP) to 80% (MT), average 67% (see Table 2). On all datasets, our best systems clearly outperform both baselines. In isolation, lexical, embedding, and syntax features are most helpful, whereas structural features did not help in most cases. Discourse features only contribute significantly on MT. When looking at the performance of the feature-based approaches, the most striking finding is the importance of lexical (in our setup, unigram) information.
The average performances of LR −syntax and CNN:rand are virtually identical, both for Macro-F1 and Claim-F1, with a slight advantage for the feature-based approach, but their difference is not statistically significant (p ≤ 0.05). Altogether, these two systems exhibit significantly better average performances than all other models surveyed here, both those relying on and those not relying on hand-crafted features (p ≤ 0.05). The absence or the different nature of inter-annotator agreement measures for all datasets prevents us from searching for correlations between agreement and performance. But we observed that the systems yield better results on PE and MT, both datasets with good inter-annotator agreement (αU = 0.77 for PE and κ = 0.83 for MT).
Cross-Domain Experiments
For all six datasets, training on different sources resulted in a performance drop. Table 3 lists the results of the best feature-based (LR, all features) and deep learning (CNN:rand) systems, as well as single feature groups (averages over all source domains; results for individual source domains can be found in the supplementary material to this article). We note the biggest performance drops on the datasets which performed best in the in-domain setting (MT and PE). For the lowest scoring datasets, OC and WTP, the differences are only marginal when trained on a suitable dataset (VG and OC, respectively). The best feature-based approach outperforms the best deep learning approach in most scenarios. In particular, as opposed to the in-domain experiments, the difference in the Claim-F1 measure between the feature-based approaches and the deep learning approaches is striking. In the feature-based approaches, on average, a combination of all features yields the best results for both Macro-F1 and Claim-F1. When comparing single features, lexical ones do the best job.
Looking at the best overall system (LR with all features), the average test results when training on different source datasets are between 54% Macro-F1 resp. 23% Claim-F1 (both MT) and 58% (VG) resp. 34% (OC). Depending on the goal that should be achieved, training on VG (highest average Macro-F1) or OC (highest average Claim-F1) seems to be the best choice when the domain of the test data is unknown (we analyze this finding in more depth in §6). MT clearly gives the best results as target domain, followed by PE and VG.
We also performed experiments with mixed sources; the results are shown in Table 4. We did this in a leave-one-domain-out fashion, in particular we trained on all but one dataset and tested on the remaining one. In this scenario, the neural network systems seem to benefit from the increased amount of training data and thus gave the best results. Overall, the mixed sources approach works better than many of the single-source cross-domain systems (although the differences were not found to be significant) and as good as training on suitable single sources (see above).
Further Analysis and Discussion
To better understand which factors influence cross-domain performance of the systems we tested, we considered the following variables as potential determinants of outcome: similarity between source and target domain, the source domain itself, training data size, and the ratio between claims and non-claims.
We calculated the Spearman correlation of the top-500 lemmas between the datasets in each direction; the results are shown in Table 5. The most similar domains are OC (source s) and WTP (target t), coming from the same authors. OC (s) and WD (t) as well as OC (s) and VG (t) are also highly correlated. For a statistical test of potential correlations between cross-domain performances and the introduced variables, we regress the cross-domain results (Table 3) on Table 5 (T4 in the following equation), on the number of claims #C (directly related to training data size in our experiments, due to the effect of downsampling), and on the ratio of claims to non-claims R (overall, we had 15 different systems, see the upper 15 rows in Table 2, and therefore 15 different regression models). More precisely, given source/training data and target data pairs (s, t) in Table 3, we estimate the linear regression model
y_st = α · T4_st + β · log(#C_s) + γ · R_t + ε_st,    (1)
where y_st denotes the Macro-F1 score when training on s and testing on t. In the regression, we also include binary dummy variables 1_σ = 1_{s,σ} for each domain σ whose value is 1 if s = σ (and 0 otherwise). These help us identify "good" source domains. The coefficient α for Table 5 is not statistically significantly different from zero in any case. Ultimately, this means that it is difficult to predict cross-domain performance from lexical similarity of the datasets. This is in contrast to, e.g., POS tagging, where lexical similarity has been reported to predict cross-domain performance very well (Van Asch and Daelemans, 2010). The coefficient for training data size β is statistically significantly different from zero in three out of 15 cases. In particular, it is significantly positive in two (CNN:rand, CNN:w2vec) out of four cases for the neural networks. This indicates that the neural networks would have particularly benefited from more training data, which is confirmed by the improved performance of the neural networks in the mixed sources experiments (cf. §5.2). The ratio of claims to non-claims in t is among the best predictors for the variables considered here (coefficient γ is significant in three out of 15 cases, but consistently positive). This is probably due to our decision to balance training data (downsampling non-claims) to keep the assessment of claim identification realistic for real-world applications, where the class ratio of t is unknown. Our systems are thus inherently biased towards a higher claim ratio.
Finally, the dummy variables for OC and VG are three times significantly positive, and consistently positive overall. Their average coefficients are 2.31 and 1.90, respectively, while the average coefficients for all other source datasets are negative, and not significant in most cases. Thus, even when controlling for all other factors such as training data size and the different claim ratios of target domains, OC and VG are the best source domains for cross-domain claim classification in our experiments. OC and VG are particularly good training sources for the detection of claims (as opposed to non-claims), the minority class in all datasets, as indicated by the average Claim-F1 scores in Table 3.
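The regression in Eq. (1) can be set up, for example, with statsmodels; the snippet below uses randomly generated toy values for T4, #C, R and Macro-F1 over all 30 source/target pairs purely to make the design-matrix construction concrete, and is not the analysis script used for the reported numbers:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.RandomState(0)
domains = ["MT", "OC", "PE", "VG", "WD", "WTP"]
rows = []
for s in domains:
    for t in domains:
        if s == t:
            continue
        rows.append({
            "source": s,
            "target": t,
            "T4": rng.uniform(0.45, 0.70),        # lexical similarity between s and t (toy value)
            "n_claims": rng.randint(100, 2500),   # #C_s, number of claims in the source (toy value)
            "ratio": rng.uniform(0.05, 0.35),     # R_t, claim ratio in the target (toy value)
            "macro_f1": rng.uniform(0.50, 0.60),  # observed cross-domain Macro-F1 (toy value)
        })
df = pd.DataFrame(rows)

# design matrix: similarity, target claim ratio, log training size, source dummies
X = pd.concat([df[["T4", "ratio"]],
               np.log(df["n_claims"]).rename("log_claims"),
               pd.get_dummies(df["source"], prefix="src", drop_first=True, dtype=float)],
              axis=1)
X = sm.add_constant(X)
fit = sm.OLS(df["macro_f1"], X).fit()
print(fit.params)  # positive src_OC / src_VG coefficients would mirror the paper's finding
```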
One finding that was confirmed both in-domain and cross-domain was the importance of lexical features as compared to other feature groups. As mere lexical similarity between domains does not explain performance (cf. coefficient α above), this finding indicated that the learners relied on a few important lexical clues. To go more into depth, we carried out an error analysis on the CNN:rand cross-domain results. We used OC, VG and PE as source domains, and MT and WTP as target domains. By examining examples in which a model trained on OC and VG made correct predictions as opposed to a model trained on PE, we quickly noticed that lexical indicators indeed played a crucial role. In particular, the occurrence of the word "should" (and to a lower degree: "would", "article", "one") is helpful for the detection of claims across various datasets. In MT, a simple baseline labeling every sentence containing "should" as a claim achieves 76.1 Macro-F1 (just slightly below the best in-domain system on this dataset). In the other datasets, this phenomenon is far less dominant, but still observable. We conclude that a few rather simple rules (learned by models trained on OC and VG, but not by potentially more complex models trained on PE) make a big difference in the cross-domain setting.
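The "should" baseline amounts to a one-line rule; a hedged sketch with toy sentences and labels:

```python
from sklearn.metrics import f1_score

def should_baseline(sentences):
    """Label a sentence as a claim (1) iff it contains the word 'should'."""
    return [1 if "should" in s.lower().split() else 0 for s in sentences]

sentences = ["The death penalty should be abandoned everywhere.",
             "Many countries still apply it.",
             "We should consider the victims as well."]
gold = [1, 0, 1]  # toy labels
print(f1_score(gold, should_baseline(sentences), average="macro"))
```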
Conclusion
In a rigorous empirical assessment of different machine learning systems, we compared how six datasets model claims as the fundamental component of an argument. The varying performance of the tested in-domain systems reflects different notions of claims also observed in a qualitative study of claims across the domains. Our results reveal that the best in-domain system is not necessarily the best system in environments where the target domain is unknown. Particularly, we found that mixing source domains and training on two rather noisy datasets (OC and VG) gave the best results in the cross-domain setup. The reason for this seems to be a few important lexical indicators (like the word "should") which are learned more easily under these circumstances. In summary, as for the six datasets we analyzed here, our analysis shows that the essence of a claim is not much more than a few lexical clues.
From this, we conclude that future work should address the problem of vague conceptualization of claims as central components of arguments. A more consistent notion of claims, which also holds across domains, would potentially not just benefit cross-domain claim identification, but also higher-level applications relying on argumentation mining (Wachsmuth et al., 2017). To further overcome the problem of domain dependence, multi-task learning is a framework that could be explored (Søgaard and Goldberg, 2016) for different conceptualizations of claims.
Table 2: In-domain experiments, best values per column are highlighted. For each dataset (column head) we show two scores: Macro-F1 score (left-hand column) and F1 score for claims (right-hand column).
Table 3: Cross-domain experiments, best values per column are highlighted, in-domain results (for comparison) in italics; results only for selected systems. For each source/target combination we show two scores: Macro-F1 score (left-hand column) and F1 score for claims (right-hand column).
Table 4: Leave-one-domain-out experiments, best values per column are highlighted. For each test dataset (column head) we show two scores: Macro-F1 score (left-hand column) and F1 score for claims (right-hand column).
      MT   OC   PE   VG   WD   WTP
MT   100   47   51   52   49   48
OC    56  100   55   68   71   71
PE    59   58  100   66   67   57
VG    51   58   52  100   59   62
WD    54   61   61   62  100   55
WTP   49   59   49   57   57  100

Table 5: Heatmap of Spearman correlations in % based on most frequent 500 lemmas for each dataset. Source domain: rows, target domain: columns.
We take the machine learning perspective in which different domains mean data drawn from different distributions (Murphy, 2012, p. 297).
https://github.com/UKPLab/emnlp2017-claim-identification
This is true only for the feature-based learners. The neural networks do not have access to information beyond individual sentences.
For the same reason, we do not optimize any hyperparameters for individual learners, unless explicitly stated. 5 Using the liblinear library (Fan et al., 2008).
Acknowledgments. This work has been supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 03VP02540 (ArgumenText), by the GRK 1994/1 AIPHES (DFG), and by the ArguAna Project GU 798/20-1 (DFG).
Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Köhler, and Benno Stein. 2016. Cross-Domain Mining of Argumentative Text through Distant Supervision. In 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1395-1404, Berlin, Germany.
Pranav Anand, Marilyn Walker, Rob Abbott, Jean E. Fox Tree, Robeson Bowmani, and Michael Minor. 2011. Cats rule and dogs drool!: Classifying stance in online debate. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2011), pages 1-9, Portland, OR, USA.
Or Biran and Owen Rambow. 2011a. Identifying justifications in written dialogs. In Fifth IEEE International Conference on Semantic Computing (ICSC), pages 162-168, Palo Alto, CA, USA.
Or Biran and Owen Rambow. 2011b. Identifying justifications in written dialogs by classifying text as argumentative. International Journal of Semantic Computing, 05(04):363-381.
John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120-128, Sydney, Australia.
Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256-263, Prague, Czech Republic.
Frans H. van Eemeren, Bart Garssen, Erik C. W. Krabbe, A. Francisca Snoeck Henkemans, Bart Verheij, and Jean H. M. Wagemans. 2014. Handbook of Argumentation Theory. Springer, Berlin/Heidelberg.
Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural End-to-End Learning for Computational Argumentation Mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), to appear, Vancouver, Canada.
Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.
Vanessa Wei Feng and Graeme Hirst. 2011. Classifying arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 987-996, Portland, OR, USA.
William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163-1168, San Diego, CA, USA.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning, pages 513-520, Bellevue, WA, USA.
Trudy Govier. 2010. A Practical Study of Argument, 7th edition. Wadsworth, Cengage Learning.
Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Revisiting Embedding Features for Simple Semi-supervised Learning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 110-120, Doha, Qatar.
Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2127-2137, Lisbon, Portugal.
Ivan Habernal and Iryna Gurevych. 2017. Argumentation Mining in User-Generated Web Discourse. Computational Linguistics, 43(1):125-179.
Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 643-653, Lisbon, Portugal.
Nathalie Japkowicz and Mohak Shah. 2014. Evaluating Learning Algorithms: A Classification Perspective. Cambridge University Press, Cambridge, UK.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar.
Alexandros Komninos and Suresh Manandhar. 2016. Dependency based embeddings for sentence classification tasks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1490-1500, San Diego, California.
Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection. In Proceedings of the 25th International Conference on Computational Linguistics, pages 1489-1500, Dublin, Ireland.
Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-Styled End-to-End Discourse Parser. Natural Language Engineering, 20(2):151-184.
Marco Lippi and Paolo Torroni. 2015. Context-independent claim detection for argument mining. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 185-191, Buenos Aires, Argentina.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, MA, USA.
Amita Misra, Pranav Anand, Jean Fox Tree, and Marilyn Walker. 2015. Using summarization to discover argument facets in online idealogical dialog. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 430-440, Denver, CO, USA.
Raquel Mochales-Palau and Marie-Francine Moens. 2009. Argumentation mining: The detection, classification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, pages 98-107, Barcelona, Spain.
Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law, pages 225-230, Stanford, CA, USA.
Thomas Müller and Hinrich Schütze. 2015. Robust morphological tagging with word representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 526-536, Denver, CO, USA.
Kevin P. Murphy. 2012. Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge, MA, USA.
Improving argument mining in student essays by learning and exploiting argument indicators versus essay topics. Huy Nguyen, Diane Litman, Proceedings of the Twenty-Ninth International Florida Artificial Intelligence Research Society Conference. the Twenty-Ninth International Florida Artificial Intelligence Research Society ConferenceKey Largo, FL, USAHuy Nguyen and Diane Litman. 2016. Improving ar- gument mining in student essays by learning and exploiting argument indicators versus essay top- ics. In Proceedings of the Twenty-Ninth Interna- tional Florida Artificial Intelligence Research So- ciety Conference, pages 485-490, Key Largo, FL, USA.
Cross-domain sentiment classification via spectral feature alignment. Xiaochuan Sinno Jialin Pan, Jian-Tao Ni, Qiang Sun, Zheng Yang, Chen, Proceedings of the 19th International Conference on World Wide Web. the 19th International Conference on World Wide WebRaleigh, NC, USASinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sen- timent classification via spectral feature alignment. In Proceedings of the 19th International Conference on World Wide Web, pages 751-760, Raleigh, NC, USA.
Identifying comparative claim sentences in full-text scientific articles. Hoon Dae, Catherine Park, Blake, Proceedings of the Workshop on Detecting Structure in Scholarly Discourse. the Workshop on Detecting Structure in Scholarly DiscourseJeju, Republic of KoreaDae Hoon Park and Catherine Blake. 2012. Identifying comparative claim sentences in full-text scientific ar- ticles. In Proceedings of the Workshop on Detecting Structure in Scholarly Discourse, pages 1-9, Jeju, Republic of Korea.
From Argument Diagrams to Argumentation Mining in Texts: A Survey. Andreas Peldszus, Manfred Stede, International Journal of Cognitive Informatics and Natural Intelligence. 71Andreas Peldszus and Manfred Stede. 2013a. From Argument Diagrams to Argumentation Mining in Texts: A Survey. International Journal of Cognitive Informatics and Natural Intelligence, 7(1):1-31.
Ranking the annotators: An agreement study on argumentation structure. Andreas Peldszus, Manfred Stede, Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. the 7th Linguistic Annotation Workshop and Interoperability with DiscourseSofia, BulgariaAndreas Peldszus and Manfred Stede. 2013b. Ranking the annotators: An agreement study on argumenta- tion structure. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Dis- course, pages 196-204, Sofia, Bulgaria.
Joint prediction in mst-style discourse parsing for argumentation mining. Andreas Peldszus, Manfred Stede, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAndreas Peldszus and Manfred Stede. 2015. Joint pre- diction in mst-style discourse parsing for argumen- tation mining. In Proceedings of the 2015 Confer- ence on Empirical Methods in Natural Language Processing, pages 938-948, Lisbon, Portugal.
An annotated corpus of argumentative microtexts. Andreas Peldszus, Manfred Stede, Argumentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation. Lisbon, PortugalAndreas Peldszus and Manfred Stede. 2016. An anno- tated corpus of argumentative microtexts. In Argu- mentation and Reasoned Action: Proceedings of the 1st European Conference on Argumentation, pages 801-815, Lisbon, Portugal.
Modeling argument strength in student essays. Isaac Persing, Vincent Ng, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaIsaac Persing and Vincent Ng. 2015. Modeling ar- gument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 543-552, Beijing, China.
Linguistically debatable or just plain wrong?. Barbara Plank, Dirk Hovy, Anders Søgaard, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. the 52nd Annual Meeting of the Association for Computational LinguisticsBaltimore, MA, USABarbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Linguistically debatable or just plain wrong? In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, pages 507- 511, Baltimore, MA, USA.
Language resources for studying argument. Chris Reed, Raquel Mochales-Palau, Glenn Rowe, Marie-Francine Moens, Proceedings of the Sixth International Conference on Language Resources and Evaluation. the Sixth International Conference on Language Resources and EvaluationMarrakech, MoroccoChris Reed, Raquel Mochales-Palau, Glenn Rowe, and Marie-Francine Moens. 2008. Language resources for studying argument. In Proceedings of the Sixth International Conference on Language Resources and Evaluation, pages 2613-2618, Marrakech, Mo- rocco.
Show me your evidence -an automatic method for context dependent evidence detection. Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M Khapra, Ehud Aharoni, Noam Slonim, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalRuty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence -an auto- matic method for context dependent evidence detec- tion. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Processing, pages 440-450, Lisbon, Portugal.
Applying kernel methods to argumentation mining. Niall Rooney, Hui Wang, Fiona Browne, Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference. the Twenty-Fifth International Florida Artificial Intelligence Research Society ConferenceMarco Island, FL, USANiall Rooney, Hui Wang, and Fiona Browne. 2012. Applying kernel methods to argumentation min- ing. In Proceedings of the Twenty-Fifth Interna- tional Florida Artificial Intelligence Research Soci- ety Conference, pages 272-275, Marco Island, FL, USA.
Detecting opinionated claims in online discussions. Sara Rosenthal, Kathleen Mckeown, Proceedings of the 2012 IEEE Sixth International Conference on Semantic Computing. the 2012 IEEE Sixth International Conference on Semantic ComputingWashington, DC, USASara Rosenthal and Kathleen McKeown. 2012. De- tecting opinionated claims in online discussions. In Proceedings of the 2012 IEEE Sixth International Conference on Semantic Computing, pages 30-37, Washington, DC, USA.
I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online discussions. Sara Rosenthal, Kathy Mckeown, Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. the 16th Annual Meeting of the Special Interest Group on Discourse and DialoguePrague, Czech RepublicSara Rosenthal and Kathy McKeown. 2015. I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online dis- cussions. In Proceedings of the 16th Annual Meet- ing of the Special Interest Group on Discourse and Dialogue, pages 168-177, Prague, Czech Republic.
Argumentation: Keeping Faith with Reason. Edward Schiappa, John P Nordin, Pearson UK1st editionEdward Schiappa and John P. Nordin. 2013. Argu- mentation: Keeping Faith with Reason, 1st edition. Pearson UK.
Towards robust cross-domain domain adaptation for part-ofspeech tagging. Tobias Schnabel, Hinrich Schütze, Proceedings of the Sixth International Joint Conference on Natural Language Processing. the Sixth International Joint Conference on Natural Language ProcessingNagoya, JapanTobias Schnabel and Hinrich Schütze. 2013. Towards robust cross-domain domain adaptation for part-of- speech tagging. In Proceedings of the Sixth Interna- tional Joint Conference on Natural Language Pro- cessing, pages 198-206, Nagoya, Japan.
Deep multi-task learning with low level tasks supervised at lower layers. Anders Søgaard, Yoav Goldberg, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, GermanyAnders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 231-235, Berlin, Germany.
A systematic analysis of performance measures for classification tasks. Marina Sokolova, Guy Lapalme, Information Processing & Management. 454Marina Sokolova and Guy Lapalme. 2009. A system- atic analysis of performance measures for classifica- tion tasks. Information Processing & Management, 45(4):427-437.
Identifying argumentative discourse structures in persuasive essays. Christian Stab, Iryna Gurevych, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. the 2014 Conference on Empirical Methods in Natural Language ProcessingDoha, QatarChristian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive es- says. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing, pages 46-56, Doha, Qatar.
Parsing argumentation structures in persuasive essays. Christian Stab, Iryna Gurevych, arXiv:1604.07370Computational Linguistics. pages in press. preprint available atChristian Stab and Iryna Gurevych. 2017. Parsing ar- gumentation structures in persuasive essays. Com- putational Linguistics, pages in press, preprint avail- able at arXiv:1604.07370.
Legal claim identification: Information extraction with hierarchically labeled data. Mihai Surdeanu, Ramesh Nallapati, Christopher D Manning, Workshop on the Semantic Processing of Legal Texts at LREC. Valletta, MaltaMihai Surdeanu, Ramesh Nallapati, and Christopher D. Manning. 2010. Legal claim identification: Infor- mation extraction with hierarchically labeled data. In Workshop on the Semantic Processing of Legal Texts at LREC, pages 22-29, Valletta, Malta.
The Uses of Argument, Updated Edition. Stephen E Toulmin, Cambridge University PressNew YorkStephen E. Toulmin. 2003. The Uses of Argument, Up- dated Edition. Cambridge University Press, New York.
Using domain similarity for performance estimation. Vincent Van Asch, Walter Daelemans, Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing. the 2010 Workshop on Domain Adaptation for Natural Language ProcessingUppsala, SwedenVincent Van Asch and Walter Daelemans. 2010. Us- ing domain similarity for performance estimation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 31-36, Uppsala, Sweden.
Fact checking: Task definition and dataset construction. Andreas Vlachos, Sebastian Riedel, Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science. the ACL 2014 Workshop on Language Technologies and Computational Social ScienceBaltimore, MD, USAAndreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Lan- guage Technologies and Computational Social Sci- ence, pages 18-22, Baltimore, MD, USA.
PageRank" for Argument Relevance. Henning Wachsmuth, Benno Stein, Yamen Ajjour, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. the 15th Conference of the European Chapter of the Association for Computational LinguisticsValencia, Spain1Henning Wachsmuth, Benno Stein, and Yamen Ajjour. 2017. "PageRank" for Argument Relevance. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 1117-1127, Valencia, Spain.
Unsupervised multi-domain adaptation with feature embeddings. Yi Yang, Jacob Eisenstein, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesDenver, CO, USAYi Yang and Jacob Eisenstein. 2015. Unsupervised multi-domain adaptation with feature embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 672-682, Denver, CO, USA.
ArgRewrite: A Web-based Revision Assistant for Argumentative Writings. Fan Zhang, Rebecca Hwa, Diane Litman, Homa B Hashemi, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: DemonstrationsSan Diego, CA, USAFan Zhang, Rebecca Hwa, Diane Litman, and Homa B. Hashemi. 2016a. ArgRewrite: A Web-based Revision Assistant for Argumentative Writings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Demonstrations, pages 37-41, San Diego, CA, USA.
Dependency sensitive convolutional neural networks for modeling sentences and documents. Rui Zhang, Honglak Lee, Dragomir R Radev, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CA, USARui Zhang, Honglak Lee, and Dragomir R. Radev. 2016b. Dependency sensitive convolutional neural networks for modeling sentences and documents. In Proceedings of the Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1512-1521, San Diego, CA, USA.
MGNC-CNN: A simple approach to exploiting multiple word embeddings for sentence classification. Ye Zhang, Stephen Roller, Byron C Wallace, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CA, USAYe Zhang, Stephen Roller, and Byron C. Wallace. 2016c. MGNC-CNN: A simple approach to exploit- ing multiple word embeddings for sentence classi- fication. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1522-1527, San Diego, CA, USA.
| [
"https://github.com/UKPLab/emnlp2017-c"
] |
[
"IMDB Spoiler Dataset",
"IMDB Spoiler Dataset"
] | [
"Rishabh Misra r1misra@eng.ucsd.edu \nUC San Diego\n\n"
] | [
"UC San Diego\n"
] | [] | User-generated reviews are often our first point of contact when we consider watching a movie or a TV show. However, beyond telling us the qualitative aspects of the media we want to consume, reviews may inevitably contain undesired revelatory information (i.e. 'spoilers') such as the surprising fate of a character in a movie, or the identity of a murderer in a crime-suspense movie, etc. In this paper, we present a high-quality movie-review based spoiler dataset to tackle the problem of spoiler detection and describe various research questions it can answer. | 10.48550/arxiv.2212.06034 | [
"https://export.arxiv.org/pdf/2212.06034v1.pdf"
] | 254,564,768 | 2212.06034 | 97d6974e943ee13dfa9c6f6302355684763d7a98 |
IMDB Spoiler Dataset
Rishabh Misra r1misra@eng.ucsd.edu
UC San Diego
IMDB Spoiler Dataset
User-generated reviews are often our first point of contact when we consider watching a movie or a TV show. However, beyond telling us the qualitative aspects of the media we want to consume, reviews may inevitably contain undesired revelatory information (i.e. 'spoilers') such as the surprising fate of a character in a movie, or the identity of a murderer in a crime-suspense movie, etc. In this paper, we present a high-quality movie-review based spoiler dataset to tackle the problem of spoiler detection and describe various research questions it can answer.
Motivation
For people who are interested in consuming media but are unaware of the critical plot twists, spoilers may decrease the excitement regarding the pleasurable uncertainty and curiosity of media consumption. A random tweet, review, or recommended news article can greatly spoil people's experience when they have been looking forward to watching something. Therefore, a natural question is how to identify these spoilers in entertainment reviews, so that users can more effectively navigate review platforms. The first step towards solving the problem is access to a high-quality dataset which can highlight spoiler/non-spoiler reviews. Motivated by this utility, we curate the IMDB Spoiler Dataset presented in this paper.
IMDB Spoiler Dataset
We present 1 a large-scale and high-quality spoiler dataset of movie reviews collected from IMDb 2 . When leaving reviews on IMDb, users have the capability to annotate whether their review has any spoilers. The dataset is divided into 2 files: IMDB_movie_details contains metadata about the movies in the dataset and IMDB_reviews contains user reviews to the movies available in the dataset.
Each record in IMDB_movie_details consists of the following attributes:
• movie_id: unique id for the movie.
• plot_summary: summary of the plot without any spoilers.
• duration: run time of the movie.
• genre: list of genres the movie belongs to.
• rating: star rating out of 10.
• release_date: date the movie was released.
• plot_synopsis: revealing details of the plot of the movie.
Each record in IMDB_reviews consists of the following attributes:
• review_date: date the review was posted.
• movie_id: unique id of the movie for which the review is about.
• user_id: unique id of the user who left the review.
• is_spoiler: whether the review contains spoilers.
• review_text: text of the review.
• rating: star rating (out of 10) given by the user.
• review_summary: review summary provided by the user.
As we can see, apart from the spoiler information, we have included a variety of metadata that can prove useful in tackling various prediction problems apart from spoiler detection.
We expand more on this in section 6.
Data Curation Method
We make use of open-source tools like BeautifulSoup, Selenium, and ChromeDriver to curate the dataset. First, we extracted about 1,500 seed movie links from the IMDb page. For each of these movies, we extract metadata such as id, plot summary, genre information, average rating, and release date using the BeautifulSoup API. Next, for each movie, we scraped all the available user reviews along with information about whether the review has spoilers, its rating, and its review date. Since not all the reviews are loaded at once, we simulate a "load more" button click using Selenium to load extra reviews and repeat the process until there are no more reviews. Due to this method of collection, the dataset contains many more users than movies.
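As an illustration of this procedure, a minimal sketch is given below. It is not the exact scraper used for the dataset: the Selenium 4-style calls and the element selectors ("load-more-trigger", "review-container") are illustrative assumptions rather than verified IMDb page markup.

import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException

def scrape_reviews(movie_id):
    driver = webdriver.Chrome()
    driver.get(f"https://www.imdb.com/title/{movie_id}/reviews")
    while True:
        try:
            # simulate the "load more" click until no further reviews can be loaded
            driver.find_element(By.ID, "load-more-trigger").click()
            time.sleep(1)
        except WebDriverException:
            break
    soup = BeautifulSoup(driver.page_source, "html.parser")
    driver.quit()
    # one container per user review; the spoiler flag, rating and date are parsed similarly
    return [d.get_text(strip=True) for d in soup.find_all("div", class_="review-container")]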
Reading the Data
Once you download the dataset, you can use the following code snippet to read the data for your machine learning methods:
Exploratory Data Analysis
To gauge how reliable the spoiler annotation on IMDb is, we first look at the reviews that explicitly contain the word "spoiler" and check how they are annotated. We notice that more than 25% of the time, people call out a spoiler in their review but do not flag the review as containing spoilers on IMDb. There is thus roughly a 1-in-4 chance that users accidentally read spoilers while browsing IMDb, which motivates the need to develop sophisticated spoiler detection tools.
In Figure 2, we plot the distribution of word counts for spoiler and non-spoiler reviews and notice that they follow a similar pattern, so there is no apparent distinction on the surface. Next, we present a case study of a specific movie: Star Wars: Episode V - The Empire Strikes Back (1980). Based on the insights produced by Wan et al., we want to ascertain whether spoiler reviews contain more movie-specific words. Star Wars: Episode V - The Empire Strikes Back (1980) has 401 reviews, out of which 95 are spoilers (~23.7%). In Figure 3, we visualize how the fraction of spoiler reviews increases when looking at some movie-specific words, which validates the findings by Wan et al.
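A small sketch of this movie-specific-word analysis is given below, assuming the data list from the snippet in Section 4 and the field names documented in Section 2; the movie_id used for the Star Wars entry is a hypothetical placeholder.

def spoiler_fraction(reviews, *words):
    # fraction of spoiler-flagged reviews among those mentioning all given words
    hits = [r for r in reviews if all(w.lower() in r["review_text"].lower() for w in words)]
    return sum(r["is_spoiler"] for r in hits) / len(hits) if hits else 0.0

star_wars = [r for r in data if r["movie_id"] == "tt0080684"]  # hypothetical id
for words in [(), ("Vader",), ("father",), ("Vader", "father")]:
    print(words, spoiler_fraction(star_wars, *words))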
Potential Use Cases
Apart from the evident spoiler detection task, the IMDB Spoiler Dataset also contains sufficient metadata to study the problem of movie recommendation based on users' reviews and ratings. Furthermore, the rich textual information in the dataset, combined with genre information, can aid in tackling the genre prediction problem, which has utility for understanding text semantics.
# Code snippet for reading the data (Section 4):
import json

def parse_data(file):
    for l in open(file, 'r'):
        yield json.loads(l)

data = list(parse_data('./IMDB_reviews.json'))
Figure 1: Fraction of spoiler/non-spoiler reviews based on IMDb annotation.
Figure 2: Distribution of spoiler and non-spoiler reviews based on word count.
Figure 3: For Star Wars: Episode V - The Empire Strikes Back (1980), from left to right and top to bottom, we plot the fraction of spoiler reviews among all reviews, among the reviews which mention `Vader`, among the reviews which mention `father`, and among the reviews which contain both `Vader` and `father`. We notice that as movie-specific words increase, the fraction of spoiler reviews also increases.
Table 1 notes the overall statistics of the dataset. We have a total of 573,913 reviews, out of which 150,924 contain spoilers (~26.3%).

Table 1: General statistics of the dataset.

Statistic                                    Value
# reviews                                    573,913
# spoiler reviews                            150,924
# users                                      263,407
# movies                                     1,572
# users with at least one spoiler review     79,039
# movies with at least one spoiler review    1,570
1 Dataset is available at https://rishabhmisra.github.io/publications/
2 https://www.imdb.com/
Mengting Wan, Rishabh Misra, Ndapa Nakashole, and Julian J. McAuley. 2019. Fine-grained spoiler detection from large-scale review corpora. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Volume 1: Long Papers, pages 2605-2610, Florence, Italy. Association for Computational Linguistics.
| [] |
[
"Improving Topic Segmentation by Injecting Discourse Dependencies",
"Improving Topic Segmentation by Injecting Discourse Dependencies"
] | [
"Linzi Xing lzxing@cs.ubc.ca \nDepartment of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada\n",
"Patrick Huber huberpat@cs.ubc.ca \nDepartment of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada\n",
"Giuseppe Carenini carenini@cs.ubc.ca \nDepartment of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada\n"
] | [
"Department of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada",
"Department of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada",
"Department of Computer Science\nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada"
] | [] | Recent neural supervised topic segmentation models achieve distinguished superior effectiveness over unsupervised methods, with the availability of large-scale training corpora sampled from Wikipedia. These models may, however, suffer from limited robustness and transferability caused by exploiting simple linguistic cues for prediction, but overlooking more important inter-sentential topical consistency. To address this issue, we present a discourseaware neural topic segmentation model with the injection of above-sentence discourse dependency structures to encourage the model make topic boundary prediction based more on the topical consistency between sentences. Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures to a neural topic segmenter with our proposed strategy can substantially improve its performances on intradomain and out-of-domain data, with little increase of model's complexity. | 10.48550/arxiv.2209.08626 | [
"https://export.arxiv.org/pdf/2209.08626v1.pdf"
] | 252,368,005 | 2209.08626 | 772ea11644806fbc4fc9bfa4f4145d96dffd92e1 |
Improving Topic Segmentation by Injecting Discourse Dependencies
Linzi Xing lzxing@cs.ubc.ca
Department of Computer Science
University of British Columbia Vancouver
V6T 1Z4, BC, Canada
Patrick Huber huberpat@cs.ubc.ca
Department of Computer Science
University of British Columbia Vancouver
V6T 1Z4, BC, Canada
Giuseppe Carenini carenini@cs.ubc.ca
Department of Computer Science
University of British Columbia Vancouver
V6T 1Z4, BC, Canada
Improving Topic Segmentation by Injecting Discourse Dependencies
Recent neural supervised topic segmentation models achieve distinguished superior effectiveness over unsupervised methods, with the availability of large-scale training corpora sampled from Wikipedia. These models may, however, suffer from limited robustness and transferability caused by exploiting simple linguistic cues for prediction, but overlooking more important inter-sentential topical consistency. To address this issue, we present a discourseaware neural topic segmentation model with the injection of above-sentence discourse dependency structures to encourage the model make topic boundary prediction based more on the topical consistency between sentences. Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures to a neural topic segmenter with our proposed strategy can substantially improve its performances on intradomain and out-of-domain data, with little increase of model's complexity.
Introduction
Topic segmentation is a fundamental NLP task with the goal to separate textual documents into coherent segments (consisting of one or more sentences), following the document's underlying topical structure. The structural knowledge obtained from topic segmentation has been shown to play a vital role in key NLP downstream tasks, such as document summarization (Mitra et al., 1997; Riedl and Biemann, 2012; Xiao and Carenini, 2019), question answering (Oh et al., 2007; Diefenbach et al., 2018) and dialogue modeling (Xu et al., 2021; Zhang et al., 2020). The aim of topic segmentation makes it tightly connected to related research areas aiming to understand the latent structure of long and potentially complex text. Specifically, understanding the semantic and pragmatic underpinnings of a document can arguably support the task of separating continuous text into topical segments. To this end, discourse analysis and discourse parsing provide the means to understand and infer the semantic and pragmatic relationships underlying complete documents, well aligned with the local text coherence and highly correlated to the inter-sentential topical consistency, as shown in Louis and Nenkova (2012) and Muangkammuen et al. (2020). With a variety of linguistic theories proposed in the past, such as the Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), the lexicalized discourse framework (Webber et al., 2003a) (underlying PDTB), and the Segmented Discourse Representation Theory (SDRT) (Asher, 1993; Asher et al., 2003), we follow the RST framework in this work (1) as we focus on monologue text (as compared to dialogue frameworks, such as SDRT) and (2) since RST postulates complete discourse trees spanning whole documents, directly aligned with the topical structure of complete documents (Huber et al., 2021).

Figure 1: An example article about Cholinergic Urticaria (CU) sampled from the en_disease portion of the Wiki-Section dataset (Arnold et al., 2019). Left: discourse dependency structure predicted by the Sent-First discourse parser (Zhou and Feng, 2022).
We further motivate the synergistic relationship between topic segmentation and discourse analysis/parsing in Figure 1, showing anecdotal evidence of the alignment between the document's topical structure and the respective RST-style discourse dependency graph. Starting from a sequence of sentences, the task of topic segmentation addresses the problem of splitting the given Wikipedia article into an ordered set of topical-coherent fragments (here: T1, T2 and T3) by predicting topical boundaries. As shown in the example, the document discourse tree is indicative of the topical structure of the document, as discourse dependencies occur considerably more often within a topic segment than across topic segments. Given its significant influence on a variety of real-world tasks, topic segmentation is an active research area in the field of NLP. As such, modern neural methods for monologue topic segmentation are proposed by formulating the task as a sentence-level sequence labeling problem, trained and evaluated on large-scale Wikipedia datasets (Xing et al., 2020; Glavas and Somasundaran, 2020; Barrow et al., 2020; Lo et al., 2021). These Wikipedia articles are well-suited for the task of topic segmentation, providing natural section marks which can be reasonably used as ground-truth segment boundaries (Koshorek et al., 2018; Arnold et al., 2019), superseding previously proposed unsupervised methods (Hearst, 1997; Galley et al., 2003; Eisenstein and Barzilay, 2008; Song et al., 2016). Despite the significant improvements achieved by neural supervised topic segmentation models, it remains unclear if these topic segmenters effectively learn to cluster sentences into topical-coherent pieces based on the (document-level) topical consistency, or solely exploit superficial patterns (e.g., simple linguistic cues) in the training domain.
To address this challenge, in this paper we propose a more discourse-aware neural topic segmentation model. We thereby inject above-sentence discourse structures into a basic topic segmenter to encourage the model to base its topic boundary prediction more explicitly on the topical consistency between sentences. More specifically, we propose to exploit a discourse dependency parser pre-trained on out-of-domain data to induce inter-sentential discourse dependency trees. Subsequently, we convert the dependency tree into a directed discourse graph with sentences as nodes and discourse dependencies as edges. With the generated discourse graph, a Graph Attention Network (GAT) (Veličković et al., 2018) is used to encode sentences as discourse-contextualized representations by aggregating information from neighboring sentence nodes in the graph. Finally, the discourse-infused sentence representations are concatenated with standard encodings for segment boundary prediction.
In our empirical study conducted on English evaluation datasets, we show that: (i) Injecting discourse structures can substantially improve the performance of the basic neural topic segmentation model on three datasets. (ii) Our novel, discourse-enhanced topic segmenter is more robust compared to the basic neural model in settings that require domain transfer, showing superior performance on four challenging real-world test sets and confirming its improved domain-independence. (iii) Even though our proposal trails a state-of-the-art segmenter sharing the same basic architecture in accuracy, it achieves significantly better efficiency as assessed by the model's parameter size and its learning and inference speed, which makes it potentially more favorable in real-world use.
Related Work
Topic Segmentation aims to reveal important aspects of the semantic structure of a document by splitting a sequence of sentences into topic-coherent textual units. Typically, computational topic segmentation models can be broadly separated into supervised and unsupervised approaches. Early topic segmentation methods usually fall into the category of unsupervised approaches, mainly due to the prevalent data sparsity issue at the time. Based on predicting the coherence between sentences through shallow (surface-level) features, unsupervised models reach a limited understanding of the contextualized structure of documents by merely relying on easy-to-extract but barely effective features for the similarity measurement between sentences (i.e., the degree of token overlap between two sentences) (Hearst, 1997; Eisenstein and Barzilay, 2008). Improving on the unsupervised topic segmentation paradigm, researchers started to address this issue by introducing pre-trained neural language models (LMs), trained on massive datasets (Xu et al., 2021; Solbiati et al., 2021). Some works show that the signals captured in pre-trained LMs (e.g., BERT (Devlin et al., 2019)) are more indicative of topic relevance between sentences than early surface-level features. However, these proposed strategies of integrating BERT into the topic segmentation framework solely exploit BERT to induce dense encodings and further compute reciprocal sentence similarities. While this constitutes a reasonable first step, the considerable gap between the training objective of LMs and the topic segmentation task requires further efforts along this line of work (Sun et al., 2022).
More recently, the data sparsity issue has been alleviated by the proposal of large-scale corpora sampled from Wikipedia (e.g., Wiki-727k (Koshorek et al., 2018) and Wiki-Section (Arnold et al., 2019)), in which well-structured articles with their section marks are used as gold labels for segment boundaries. As a result, neural supervised topic segmenters started to gain attention by reaching greater effectiveness and efficiency compared to previously proposed unsupervised approaches. These supervised topic segmenters typically follow a common strategy which formulates the task as a sentence-level sequence labeling problem. More specifically, by assigning binary labels to each sentence, models infer the likelihood of a sentence to be a topic segment boundary (Koshorek et al., 2018; Arnold et al., 2019; Barrow et al., 2020; Lo et al., 2021). However, we believe that current models, besides reaching promising performance, potentially favour simple linguistic cues over effective measurements of semantic cohesion, restricting their application to narrow domains. Some recent works have attempted to address this limitation by explicitly integrating coherence modeling components into segmenters (Xing et al., 2020; Glavas and Somasundaran, 2020). However, compared to our objective in this work, these proposed coherence modeling strategies are either (i) only taking two adjacent sentences into account, limiting the additional module to extremely local contexts, or (ii) discriminating real documents from artificially "incoherent" texts, resulting in implicit and synthetic negative training samples and a heavy parameter size caused by modeling multiple tasks simultaneously.
In contrast, we propose an effective method to integrate the document discourse (dependency) structure into neural topic segmentation frameworks, following the intuition that above-sentence discourse structures are indicative of text coherence and topical consistency, providing a more global and interpretable source of information for better topic transition prediction.
Discourse Analysis and Parsing analyze and generalize the underlying semantic and pragmatic structure of a coherent document (called a discourse). As an important upstream task in the field of NLP, discourse analysis proposes elaborate frameworks and theories to describe the textual organization of a document. To this end, a variety of popular discourse theories have been proposed in the past, such as (among others) the Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) and the lexicalized discourse framework (Webber et al., 2003b) for monologues, as well as the Segmented Discourse Representation Theory (SDRT) (Asher, 1993; Asher et al., 2003) for dialogues. Among these theories, the RST discourse theory postulates a single, complete discourse tree for monologue documents, while the lexicalized discourse framework only focuses on local discourse connectives within and between adjacent sentences. Focusing on the connection between discourse information and topic segmentation, we employ the RST discourse theory in this work, as it is most aligned with the requirement to capture topical coherence.
Building on human-annotated discourse treebanks, a mix of traditional and neural discourse parsers has been proposed over the last decades, with traditional approaches mainly exploiting surface-level features through Support-Vector Machines (SVMs) (Hernault et al., 2010; Ji and Eisenstein, 2014; Wang et al., 2017) or Conditional Random Fields (CRFs) (Joty et al., 2015; Feng and Hirst, 2014). On the other hand, neural models achieve similar or superior results on RST discourse parsing, with models using either custom architectures (Yu et al., 2018; Liu and Lapata, 2018) or pre-trained LMs such as BERT (Zhou and Feng, 2022), RoBERTa, and SpanBERT. In this work, we generate discourse dependency trees with the BERT-based neural dependency parser proposed in Zhou and Feng (2022), since: (i) The parser follows the intuition that information, and hence structures, in sentences are oftentimes "self-contained". Therefore, it predicts the interactions between EDUs of the same sentence in a first stage and subsequently predicts the inter-sentential discourse structures, which aligns well with our objective of sentence-level topic segmentation. (ii) The parser by Zhou and Feng (2022) makes a direct prediction of dependency discourse structures, alleviating the potential error caused by converting constituency structures into their respective dependency trees.
Methodology
As shown in Figure 2, our proposed discourse-aware neural topic segmentation model comprises two components: the Hierarchical Topic Segmenter and Discourse Graph Modeling, highlighted in green and red respectively. Discourse Graph Modeling further comprises a Discourse Graph Construction and a Graph Modeling component.
Basic Model: Hierarchical Topic Segmenter
The basic architecture of our proposal is adopted from the basic model in Xing et al. (2020), consisting of two hierarchical layers: First, a sentence encoder contextualizes individual sentences, followed by the second layer, conditioning sentences on the complete document. Following the settings in Xing et al. (2020), we adopt the attention BiLSTM architecture 1 for each layer and enhance the encodings with pre-trained BERT embeddings. Formally, given a document D as a sequence of n sentences, the sentence encoder (bottom component in Figure 2) yields the embedding for each individual sentence. Based on the obtained encodings, the document-level contextualization layer returns an ordered set of hidden states H = {h_1, ..., h_n}. Next, a simple multilayer perceptron (MLP) with a final softmax activation serves as a binary topic boundary predictor based on a threshold τ, tuned on the validation set. During training, we optimize the model with the cross-entropy loss, while at inference time, every sentence (except the last sentence 2) with a probability ≥ τ is considered the end of a segment.
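As an illustration only, the following is a minimal sketch of the document-level contextualization layer and the threshold-based boundary predictor described above. It assumes PyTorch and pre-computed sentence embeddings, and omits the attention mechanism and BERT-embedding enhancement for brevity; it is not the authors' released code.

import torch
import torch.nn as nn

class BoundaryPredictor(nn.Module):
    def __init__(self, sent_dim, hidden):
        super().__init__()
        # document-level BiLSTM producing H = {h_1, ..., h_n}
        self.doc_lstm = nn.LSTM(sent_dim, hidden, batch_first=True, bidirectional=True)
        # MLP with softmax output for binary boundary prediction
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, sent_embs):
        # sent_embs: (1, n, sent_dim) embeddings from the sentence encoder
        h, _ = self.doc_lstm(sent_embs)
        return torch.softmax(self.mlp(h), dim=-1)  # per-sentence boundary probabilities

# At inference, every sentence (except the last) with P(boundary) >= tau ends a segment.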
Discourse Graph Modeling
Our goal is to inject inter-sentential discourse dependency structures into the task of topic segmentation. We believe that this additional, structural information is well aligned with the topical consistency between sentences and hence suited to guide the prediction of topic transitions. To integrate the discourse information into the basic model described in section 3.1, we first generate an above-sentence discourse dependency tree T_D for the document. Specifically, we utilize the discourse dependency parsing model proposed in Zhou and Feng (2022), which reaches state-of-the-art performance for discourse tree construction and relation type identification in multiple language settings. The "Sent-First" parser (Zhou and Feng, 2022) further fits the aim of our proposal due to its two-staged approach, first generating discourse trees within sentences and subsequently combining sentence-level sub-trees. This hard constraint allows us to exclusively obtain above-sentence discourse structures, avoiding potentially leaky sub-trees (Joty et al., 2015). Regarding the discourse relations attached to every head-dependent pair (discourse dependency), we follow observations in prior work that the agreement on the type of rhetorical relation is usually lower and more ambiguous, and leave them for future work to avoid error propagation.
In contrast to the original proposal in Zhou and Feng (2022), training and testing their dependency discourse parser on one corpus (i.e., SciDTB (Yang and Li, 2018)), we believe that a mixture of several diverse and publicly available discourse treebanks with different document lengths and domains can increase the parser's robustness on new and unseen genres. Therefore, we retrain the parser on a mixture of RST-DT 3 (Carlson et al., 2002), GUM 4 (Zeldes, 2017), SciDTB 5 (Yang and Li, 2018) and COVID19-DTB 6 (Nishida and Matsumoto, 2022). More specifically, we combine those discourse treebanks and randomly split the aggregated corpus into 80% training, 10% validation, 10% test data. The parser retrained on our combined training portion achieves an Unlabeled Attachment Score (UAS) of 58.6 on the test portion. We show additional key dataset statistics for each treebank used in this paper in Table 1.
After training the discourse parser to infer a discourse dependency tree T_D for document D, we convert the tree structure into a discourse graph G_D (as a binary matrix). Formally, we initialize the graph G_D as an n × n identity matrix G_D = I_{n,n}, connecting every node to itself. Afterwards, we fill in the remaining cells by assigning G_D[i][j] = 1 iff ∃ T_D(i → j), with i, j indexing the head and dependent sentences in the document, respectively. Using the binary matrix representation of G_D, we apply the multi-layer Graph Attention Network (GAT) (Veličković et al., 2018) to update sentence encodings following the discourse graph. More specifically, with the discourse graph matrix G_D and the contextualized representations H = {h_1, ..., h_n} described in section 3.1, within each graph attentional layer we perform self-attention on the sentence nodes. Taking the l-th layer as an example, we compute the attention coefficient α^l_{ij} between sentence nodes i, j as:

\alpha^l_{ij} = \mathrm{softmax}(e^l_{ij}) = \frac{\exp(e^l_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e^l_{ik})}    (1)

e^l_{ij} = \mathrm{LeakyReLU}\big(a_l^T [W_l g^l_i \,\|\, W_l g^l_j]\big)    (2)
where W_l and a_l are learnable parameters for layer l and T is the transposition operation. N_i denotes the direct neighborhood of node i in the graph (G_D[i][·] = 1). As the node representation input of the first GAT layer (l = 0), g^0_i = h_i ∈ H. Once attention coefficients are obtained, we compute the intermediate node representation z^l_i for sentence node i at layer l by aggregating information from neighboring nodes as:

z^l_i = \sum_{j \in \mathcal{N}_i} \alpha^l_{ij} W_l g^l_j    (3)
Following the step in Huang et al. (2020), we combine the intermediate node representation z^l_i with the input of this layer g^l_i to get the updated node representation g^{l+1}_i as the input for the next layer:

g^{l+1}_i = \mathrm{ELU}(g^l_i + z^l_i)    (4)

where ELU denotes an exponential linear unit (Clevert et al., 2016). With the output g_i from the last layer of the GAT, we concatenate it with h_i and feed [h_i; g_i] into the predictor layer for segment boundary prediction.
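To make the graph construction and the update in Eqs. (1)-(4) concrete, a minimal sketch is given below. It is not the authors' released implementation: it assumes PyTorch, uses a single attention head for brevity (the model described here uses 2 GAT layers with 4 heads), and takes the parser's head-dependent sentence pairs as plain index tuples.

import torch
import torch.nn as nn
import torch.nn.functional as F

def build_discourse_graph(dependencies, n):
    # dependencies: list of (head_sentence, dependent_sentence) index pairs from the parser
    G = torch.eye(n)                      # G_D initialised as the identity (self-loops)
    for head, dep in dependencies:
        G[head][dep] = 1.0                # G_D[i][j] = 1 iff T_D contains i -> j
    return G

class DiscourseGATLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)    # W_l
        self.a = nn.Linear(2 * dim, 1, bias=False)  # a_l

    def forward(self, g, G):
        # g: (n, dim) node representations g^l;  G: (n, n) binary discourse graph
        n = g.size(0)
        Wg = self.W(g)
        pairs = torch.cat([Wg.unsqueeze(1).expand(n, n, -1),
                           Wg.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)   # Eq. (2)
        e = e.masked_fill(G == 0, float("-inf"))      # attend only over neighbours N_i
        alpha = torch.softmax(e, dim=-1)              # Eq. (1)
        z = alpha @ Wg                                # Eq. (3)
        return F.elu(g + z)                           # Eq. (4); [h_i; g_i] is then fed to the predictor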
Experiments
In order to quantitatively evaluate the effectiveness, generality and efficiency of our proposal, we conduct three sets of experiments to compare our topic segmentation approach against a variety of baselines and previous models. Namely, we assess the performance of our model in regards to the Intra-Domain Segment Inference Performance, Domain Transfer Segment Inference Performance, and conduct an additional Efficiency Analysis.
Datasets
Intra-Domain Datasets
For the set of intra-domain segment inference experiments, we train and test models within the same domain (here: on the same corpus). We thereby choose three diverse corpora (see Table 2 for more details) for the intra-domain evaluation:
Choi (Choi, 2000). This corpus consists of 920 articles artificially generated by randomly combining passages from the Brown corpus. The datapoints in this dataset are not human-written, leading us to use this corpus solely for a preliminary performance assessment of topic segmentation models, with an 80% (train) / 10% (dev) / 10% (test) data split.
Rules (Bertrand et al., 2018). This corpus consists of 4,461 documents about regulation discussion published in the Federal Register 7 by U.S. federal agencies. Since each paragraph is about one particular regulation and all regulations covered by one document are under the same category, we deem it as a reasonably coherent data source for topic segmentation evaluation with the paragraph breaks as ground-truth segment boundaries. We split this dataset into training, validation and test sets with the default 80%, 10%, 10% data-split.
Wiki-Section (Section) (Arnold et al., 2019). This corpus originally contains Wikipedia articles in both English and German. The English portion of the dataset, which we use for our intra-domain experiment, consists of around 3.6k articles about diseases and 19.5k articles about cities around the world. After filtering out problematic samples with incorrect sentence segmentation, detected by mismatched counts between sentences and labels, the resulting dataset covers 21,376 articles with the highest-level section marks as ground-truth segment boundaries. We follow the setting in Arnold et al. (2019).
Domain Transfer Datasets
To better evaluate models' robustness in cases where a domain-shift is present (called "domain transfer segment inference"), we apply the topic segmenters trained on Wiki-Section to four small corpora heavily deviating from the training corpus (see Table 3 for more details):
Wiki-50 (Koshorek et al., 2018) consists of 50 Wikipedia articles randomly sampled from the latest English Wikipedia dump. There is no overlap between this dataset and Wiki-Section.
Cities (Chen et al., 2009) consists of 100
Wikipedia articles about cities. There is no overlap between this dataset and Wiki-Section, even the theme of this dataset is close to the portion of city articles in Wiki-Section.
Elements (Chen et al., 2009) consists of 118
Wikipedia articles on chemical elements.
Clinical (Malioutov and Barzilay, 2006) consists of 227 chapters in a clinical book. The subsection marks within each chapter are deemed as groundtruth segment boundaries.
Experimental Design
Baselines: We directly compare our proposed discourse-aware topic segmentation model (called Basic Model + Discourse) with the following unsupervised and supervised baselines:
-BayesSeg (Eisenstein and Barzilay, 2008): This unsupervised method makes segmentation prediction by situating the lexical cohesion of text in a Bayesian framework. A text span produced by a distinct lexical distribution is recognized as a coherent topic segment.
-GraphSeg (Glavaš et al., 2016): This unsupervised method derives semantically coherent segments through reasoning on a semantic relatedness graph constructed from greedy lemma alignment.
-TextSeg (Koshorek et al., 2018): This supervised neural topic segmenter adopts a hierarchical neural sequence labeling framework with BiLSTM as the main architecture of each layer. The basic model used in our paper (described in section 3.1) is an effective extension of this approach.
-Sector (Arnold et al., 2019): This is a supervised neural topic segmenter extended from TextSeg by adding an auxiliary layer for sentence topic label prediction. The learned intermediate topic embeddings for sentences are directly utilized for segment boundary inference.
-Transformer (Glavas and Somasundaran, 2020): This is a supervised neural topic segmenter consisting of two hierarchically connected Transformer networks for sentence encoding and sentence contextualization respectively.
-Basic Model + Context (Xing et al., 2020): This is a top-performing neural topic segmenter which shares the same basic architecture with our proposal. The approach improves the context modeling capacity of the plain basic model by adding an auxiliary coherence prediction module and restricted self-attention.
Evaluation Metrics: We use the P_k error score 8 (Beeferman et al., 1999) for our intra-domain and domain transfer segment inference evaluations. The metric measures the probability that a pair of sentences located at the two ends of a k-sized sliding window in a document are incorrectly identified as belonging to the same segment or not. k is determined as half of the average true segment size of the document. Since it is a penalty metric, lower values indicate better performance. Besides the P_k measurement, we further quantitatively analyze the models' efficiency according to two aspects: model size and model speed, evaluating the number of learnable parameters and the number of batches/documents processed per second during training/inference.

Table 4: P_k (↓) error score on three corpora for the intra-domain experiment. Results in bold and underlined indicate the best and second best performance across all comparisons. The row in purple shows the results achieved by our proposal. The column in green shows the results for RST-DT paragraph break prediction with gold discourse structures integrated.
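For concreteness, a minimal sketch of the P_k computation described above is shown below; it is not a reference implementation, and the default choice of k follows the convention stated above (half of the average true segment size).

def pk(reference, hypothesis, k=None):
    # reference, hypothesis: one segment id per sentence, e.g. [0, 0, 1, 1, 1, 2]
    n = len(reference)
    if k is None:
        k = max(1, round(n / (2 * len(set(reference)))))
    errors = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        errors += int(same_ref != same_hyp)  # penalize any disagreement within the window
    return errors / (n - k)

# Example: pk([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1]) returns a value between 0 and 1.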
For the graph modeling component, the number of GAT layers is set to 2 through validation and the number of heads is set to 4 as in Veličković et al. (2018). The input and output dimensions of each layer are 256. Training uses Adam with lr = 1e-3 and batch size = 8. Early stopping is applied within 10 epochs of model training, and the boundary prediction threshold τ is tuned over the validation set of each corpus we use for intra-domain model evaluation.
Intra-Domain Segment Inference
We report our results of the intra-domain segment inference on the Choi, Rules and Wiki-Section datasets in Table 4. For better performance comparison, the table is subdivided into three sub-tables: the random baseline, previously proposed approaches, and models built on top of the basic model we use. We observe that the basic model without any additional components already outperforms alternative supervised and unsupervised segmenters. With the above-sentence discourse dependency information injected, as proposed in this paper, the method (named +Discourse) further improves the performance by a notable margin across all three corpora. We further find that our proposed approach does not achieve superior performance compared to the basic model enhanced with the context modeling strategy (+Context) in Xing et al. (2020). We believe that a possible explanation for this underperformance could be the upstream parsing error of the discourse dependency parser applied out-of-domain, which oftentimes severely impairs the parsing performance (Huber and Carenini, 2019). Therefore, we conduct an additional experiment on RST-DT due to the availability of gold discourse structures annotated by humans for this corpus. With no human-annotated topic segment boundaries at hand, we use the paragraph breaks contained in RST-DT articles as the ground truth for training and testing of topic segmentation models. Our results in Table 4 show that the quality of the discourse structure is positively correlated with enlarged improvements achieved by our proposal. In this case, the upper bound achieved by integrating gold discourse structures can even outperform the basic model enhanced by context modeling (+Context).

Domain Transfer Segment Inference

Table 5 presents the performance of simple baselines, previously proposed models and our new approach on the domain transfer task. Similar to the intra-domain segment inference, the Basic Model+Context approach still achieves the best performance across all testing domains except Elements, on which the unsupervised BayesSeg performs best. However, our +Discourse strategy still leads to an improvement over the basic model, and achieves comparable performance to the best model (+Context) on Wiki-50 and Cities. We believe this gives evidence that injecting discourse dependency structures has the potential to enhance the generality of topic segmentation models.

Efficiency Analysis

Additionally, we conduct the same set of experiments for the model with both the context modeling module and our proposed discourse structure integration (Basic Model+Context+Discourse). The performance of this model always falls in between +Context and +Discourse individually, but with the worst efficiency as measured by model size and speed.
Conclusion and Future Work
In this paper, we present a neural topic segmentation model with the injection of above-sentence discourse dependency structures inferred from a state-of-the-art discourse dependency parser. Different from previously proposed methods, our segmenter leverages the discourse signal by encoding the topical consistency between sentences from a more global and interpretable point of view. Experiments in multiple settings (intra-domain, domain transfer and efficiency comparison) show that our system achieves comparable performance to one of the current top-performing topic segmenters, with a much smaller increase in model size and much less speed degradation.
In the near future, we plan to investigate the synergy between topic segmentation and discourse parsing more comprehensively, by incorporating the type of inter-sentential rhetorical relations and analyzing whether and how this discourse knowledge can enhance supervised topic segmentation frameworks. In the long run, we intend to explore the possibility for discourse parsing to benefit segment topic labeling, which is another important task usually coupled with topic segmentation to provide coarse-grained structural information for documents. In particular, we believe discourse parsing can potentially enhance the step of key phrase extraction in segment topic labeling, due to the significant improvement it brings to the related task of named entity recognition (NER) (Jie and Lu, 2019).
Figure 2: The overall architecture of our discourse-infused topic segmentation model.
Table 2: Statistics of the datasets used in intra-domain experiments.
Table 3: Statistics of the datasets used in domain transfer experiments.
Table 5: P_k (↓) error score on four test corpora for the domain transfer experiment. Results in bold and underlined indicate the best and second best performance across all comparisons. The row highlighted in purple shows the results achieved by our proposal.

Dataset        Wiki-50   Cities   Elements   Clinical
Random         52.7      47.1     50.1       44.1
BayesSeg       49.2      36.2     35.6       57.2
GraphSeg       63.6      40.0     49.1       64.6
TextSeg        28.5      19.8     43.9       36.6
Sector         28.6      33.4     42.8       36.9
Transformer    29.3      20.2     45.2       35.6
Basic Model    28.7      17.9     43.5       33.8
+Context       26.8      16.1     39.4       30.5
+Discourse     26.8      16.9     41.1       31.8
Table 6 compares the efficiency of the top two models, comparing our proposed approach (Basic Model+Discourse) against Basic Model+Context. The experiments for these systems were carried out on an Nvidia Tesla V100 16G GPU card. We observe that our strategy of injecting discourse dependency structures can improve the model's performance in the intra-domain and domain transfer settings, but with a smaller increase of model size and loss of speed compared to +Context. More specifically, adding our discourse graph modeling component on top of the basic model introduces 65% more learnable parameters, while the context modeling components in Xing et al. (2020) cause a 127% parameter increase. On the other hand, discourse graph modeling slightly slows down model training and inference by 21% and 7.7% respectively, while the more complex context modeling significantly slows down the speed by 78% and 46%. Together with the previous results about the models' effectiveness, we can see that our proposed system would be a better option in practical settings where efficiency is critical.

               # Params ↓   T-Speed ↑   I-Speed ↑
Basic Model    4.82M        6.90        35.58
+Context       10.93M       1.49        19.23
+Discourse     7.97M        5.44        32.85

Table 6: The efficiency comparison between our proposal and the method proposed in Xing et al. (2020) on the Wiki-Section corpus. These two models share the same basic segmentation framework. T-Speed refers to the training speed as the number of batches processed per second during the training stage. I-Speed refers to the inference speed as the number of documents processed per second during the inference stage.
We also considered Transformer as the backbone of the contextualized encoder, but eventually chose BiLSTM for its superior performance.
We remove the last sentence from the sequence for prediction since it is by definition the end of the last segment.
3 catalog.ldc.upenn.edu/LDC2002T07
4 corpling.uis.georgetown.edu/gum
5 https://github.com/PKUTANGENT/SciDTB
6 https://github.com/norikinishida/biomedical-discourse-treebanks
We also considered WinDiff (Pevzner and Hearst, 2002) as another evaluation metric. Since it was highly correlated with P_k, we omit it and only present performance by P_k to better compare with results reported in previous works.
Acknowledgments
We thank the anonymous reviewers and the UBC-NLP group for their insightful comments and suggestions. This research was supported by the Language & Speech Innovation Lab of Cloud BU, Huawei Technologies Co., Ltd.
Sebastian Arnold, Rudolf Schneider, Philippe Cudré-Mauroux, Felix A. Gers, and Alexander Löser. 2019. SECTOR: A neural model for coherent topic segmentation and classification. Transactions of the Association for Computational Linguistics, 7:169-184.
Nicholas Asher. 1993. Reference to abstract objects in discourse, volume 50. Springer Science & Business Media.
Nicholas Asher, Nicholas Michael Asher, and Alex Lascarides. 2003. Logics of conversation. Cambridge University Press.
Joe Barrow, Rajiv Jain, Vlad Morariu, Varun Manjunatha, Douglas Oard, and Philip Resnik. 2020. A joint model for document segmentation and segment labeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 313-322, Online. Association for Computational Linguistics.
Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1):177-210.
Marianne Bertrand, Matilde Bombardini, Raymond Fisman, Bradley Hackinen, and Francesco Trebbi. 2018. Hall of mirrors: Corporate philanthropy and strategic advocacy. Technical report, National Bureau of Economic Research.
Lynn Carlson, Mary Ellen Okurowski, and Daniel Marcu. 2002. RST discourse treebank. Linguistic Data Consortium, University of Pennsylvania.
Harr Chen, S.R.K. Branavan, Regina Barzilay, and David R. Karger. 2009. Global models of document structure using latent permutations. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 371-379, Boulder, Colorado. Association for Computational Linguistics.
Freddy Y. Y. Choi. 2000. Advances in domain independent linear text segmentation. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv: Learning.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Dennis Diefenbach, Vanessa Lopez, Kamal Singh, and Pierre Maret. 2018. Core techniques of question answering systems over knowledge bases: a survey. Knowledge and Information Systems, 55(3):529-569.
Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 334-343, Honolulu, Hawaii. Association for Computational Linguistics.
Vanessa Wei Feng and Graeme Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511-521, Baltimore, Maryland. Association for Computational Linguistics.
Michel Galley, Kathleen R. McKeown, Eric Fosler-Lussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 562-569, Sapporo, Japan. Association for Computational Linguistics.
Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 125-130, Berlin, Germany. Association for Computational Linguistics.
Goran Glavaš and Swapna Somasundaran. 2020. Two-level transformer and auxiliary coherence modeling for improved text segmentation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), pages 2306-2315.
Grigorii Guz and Giuseppe Carenini. 2020. Coreference for discourse parsing: A neural approach. In Proceedings of the First Workshop on Computational Approaches to Discourse, pages 160-167, Online. Association for Computational Linguistics.
Grigorii Guz, Patrick Huber, and Giuseppe Carenini. 2020. Unleashing the power of neural discourse parsers - a context and structure aware approach using large scale pretraining. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3794-3805, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.
Hugo Hernault, Helmut Prendinger, Mitsuru Ishizuka, et al. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue & Discourse, 1(3).
Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. GRADE: Automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9230-9240, Online. Association for Computational Linguistics.
Patrick Huber and Giuseppe Carenini. 2019. Predicting discourse structure using distant supervision from sentiment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2306-2316, Hong Kong, China. Association for Computational Linguistics.
Patrick Huber, Linzi Xing, and Giuseppe Carenini. 2021. Predicting above-sentence discourse structure using distant supervision from topic segmentation. In The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), pages 10794-10802.
Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13-24, Baltimore, Maryland. Association for Computational Linguistics.
Zhanming Jie and Wei Lu. 2019. Dependency-guided LSTM-CRF for named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3862-3872, Hong Kong, China. Association for Computational Linguistics.
Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015. CODRA: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385-435.
Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 469-473, New Orleans, Louisiana. Association for Computational Linguistics.
Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics, 6:63-75.
Kelvin Lo, Yuan Jin, Weicong Tan, Ming Liu, Lan Du, and Wray Buntine. 2021. Transformer over pre-trained transformer for neural text segmentation with enhanced topic coherence. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3334-3340, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Annie Louis and Ani Nenkova. 2012. A coherence model based on syntactic patterns. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1157-1168, Jeju Island, Korea. Association for Computational Linguistics.
Igor Malioutov and Regina Barzilay. 2006. Minimum cut model for spoken lecture segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 25-32, Sydney, Australia. Association for Computational Linguistics.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text - Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.
Mandar Mitra, Amit Singhal, and Chris Buckley. 1997. Automatic text summarization by paragraph extraction. In Intelligent Scalable Text Summarization.
Panitan Muangkammuen, Sheng Xu, Fumiyo Fukumoto, Kanda Runapongsa Saikaew, and Jiyi Li. 2020. A neural local coherence analysis model for clarity text scoring. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2138-2143, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Noriki Nishida and Yuji Matsumoto. 2022. Out-of-domain discourse dependency parsing via bootstrapping: An empirical analysis on its effectiveness and limitation. Transactions of the Association for Computational Linguistics, 10:127-144.
HyoJung Oh, Sung Hyon Myaeng, and Myung-Gil Jang. 2007. Semantic passage segmentation based on sentence topics for question answering. Information Sciences, 177(18):3696-3717.
Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19-36.
Martin Riedl and Chris Biemann. 2012. How text segmentation algorithms gain from topic models. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 553-557, Montréal, Canada. Association for Computational Linguistics.
Alessandro Solbiati, Kevin Heffernan, Georgios Damaskinos, Shivani Poddar, Shubham Modi, and Jacques Cali. 2021. Unsupervised topic segmentation of meetings with BERT embeddings.
Yiping Song, Lili Mou, R. Yan, Li Yi, Zinan Zhu, X. Hu, and M. Zhang. 2016. Dialogue session segmentation by embedding-enhanced TextTiling. In INTERSPEECH, pages 2706-2710.
Xiaofei Sun, Yuxian Meng, Xiang Ao, Fei Wu, Tianwei Zhang, Jiwei Li, and Chun Fan. 2022. Sentence similarity based on contexts. Transactions of the Association for Computational Linguistics, 10:573-588.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations.
Yizhong Wang, Sujian Li, and Houfeng Wang. 2017. A two-stage parsing method for text-level discourse analysis. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 184-188, Vancouver, Canada. Association for Computational Linguistics.
Bonnie Webber, Matthew Stone, Aravind Joshi, and Alistair Knott. 2003a. Anaphora and discourse structure. Computational Linguistics, 29(4):545-587.
Bonnie Webber, Matthew Stone, Aravind Joshi, and Alistair Knott. 2003b. Anaphora and discourse structure. Computational Linguistics, 29(4):545-587.
Wen Xiao and Giuseppe Carenini. 2019. Extractive summarization of long documents by combining global and local context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3011-3021, Hong Kong, China. Association for Computational Linguistics.
Linzi Xing and Giuseppe Carenini. 2021. Improving unsupervised dialogue topic segmentation with utterance-pair coherence scoring. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 167-177, Singapore and Online. Association for Computational Linguistics.
Linzi Xing, Brad Hackinen, Giuseppe Carenini, and Francesco Trebbi. 2020. Improving context modeling in neural topic segmentation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 626-636, Suzhou, China. Association for Computational Linguistics.
Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5021-5031, Online. Association for Computational Linguistics.
Yi Xu, Hai Zhao, and Zhuosheng Zhang. 2021. Topic-aware multi-turn dialogue modeling. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), pages 14176-14184.
An Yang and Sujian Li. 2018. SciDTB: Discourse dependency TreeBank for scientific abstracts. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 444-449, Melbourne, Australia. Association for Computational Linguistics.
Nan Yu, Meishan Zhang, and Guohong Fu. 2018. Transition-based neural RST parsing with implicit syntax features. In Proceedings of the 27th International Conference on Computational Linguistics, pages 559-570, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.
Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, and Dawei Yin. 2020. Modeling topical relevance for multi-turn dialogue generation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 3737-3743. International Joint Conferences on Artificial Intelligence Organization.
Yifei Zhou and Yansong Feng. 2022. Improve discourse dependency parsing with contextualized representations. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2250-2261, Seattle, United States. Association for Computational Linguistics.
| [
"https://github.com/PKUTANGENT/SciDTB",
"https://github.com/norikinishida/"
] |
[
"First the Worst: Finding Better Gender Translations During Beam Search",
"First the Worst: Finding Better Gender Translations During Beam Search"
] | [
"Danielle Saunders \nDepartment of Engineering\nUniversity of Cambridge\nUK\n",
"Rosie Sallis \nDepartment of Engineering\nUniversity of Cambridge\nUK\n",
"Bill Byrne \nDepartment of Engineering\nUniversity of Cambridge\nUK\n"
] | [
"Department of Engineering\nUniversity of Cambridge\nUK",
"Department of Engineering\nUniversity of Cambridge\nUK",
"Department of Engineering\nUniversity of Cambridge\nUK"
] | [
"Association for Computational Linguistics: ACL 2022"
] | Generating machine translations via beam search seeks the most likely output under a model. However, beam search has been shown to amplify demographic biases exhibited by a model. We aim to address this, focusing on gender bias resulting from systematic errors in grammatical gender translation. Almost all prior work on this problem adjusts the training data or the model itself. By contrast, our approach changes only the inference procedure.We constrain beam search to improve gender diversity in n-best lists, and rerank n-best lists using gender features obtained from the source sentence. Combining these strongly improves WinoMT gender translation accuracy for three language pairs without additional bilingual data or retraining. We also demonstrate our approach's utility for consistently gendering named entities, and its flexibility to handle new gendered language beyond the binary. | 10.18653/v1/2022.findings-acl.301 | [
"https://www.aclanthology.org/2022.findings-acl.301.pdf"
] | 233,240,748 | 2104.07429 | e0922696bc88a1bf817e6b374abd0ac036f46dc5 |
First the Worst: Finding Better Gender Translations During Beam Search
Association for Computational Linguistics, May 22-27, 2022. © 2022 Association for Computational Linguistics
Danielle Saunders
Department of Engineering
University of Cambridge
UK
Rosie Sallis
Department of Engineering
University of Cambridge
UK
Bill Byrne
Department of Engineering
University of Cambridge
UK
Generating machine translations via beam search seeks the most likely output under a model. However, beam search has been shown to amplify demographic biases exhibited by a model. We aim to address this, focusing on gender bias resulting from systematic errors in grammatical gender translation. Almost all prior work on this problem adjusts the training data or the model itself. By contrast, our approach changes only the inference procedure.We constrain beam search to improve gender diversity in n-best lists, and rerank n-best lists using gender features obtained from the source sentence. Combining these strongly improves WinoMT gender translation accuracy for three language pairs without additional bilingual data or retraining. We also demonstrate our approach's utility for consistently gendering named entities, and its flexibility to handle new gendered language beyond the binary.
Introduction
Neural language generation models optimized by likelihood have a tendency towards 'safe' word choice. This lack of output diversity has been noted in NMT (Vanmassenhove et al., 2019) and throughout NLP (Li et al., 2016;Sultan et al., 2020). Model-generated language may be repetitive or stilted. More insidiously, generating the most likely output based only on corpus statistics can amplify any existing biases in the corpus (Zhao et al., 2017).
Potential harms arise when biases around word choice or grammatical gender inflections reflect demographic or social biases (Sun et al., 2019). The resulting gender mistranslations could involve implicit misgendering of a user or other referent, or perpetuation of social stereotypes about the 'typical' gender of a referent in a given context.
Past approaches to the problem almost exclusively involve retraining (Vanmassenhove et al., 2018; Escudé Font and Costa-jussà, 2019; Stafanovičs et al., 2020) or tuning (Saunders and Byrne, 2020; Basta et al., 2020) on gender-adjusted data. Such approaches are often computationally expensive and risk introducing new biases (Shah et al., 2020). Instead, we seek to improve translations from existing models. Roberts et al. (2020) highlight beam search's tendency to amplify gender bias and Renduchintala et al. (2021) show that very shallow beams degrade gender translation accuracy; we instead guide beam search towards better gender translations further down the n-best list.
* Now at RWS Language Weaver
Our contributions are as follows: we rerank NMT n-best lists, demonstrating that we can extract better gender translations from the original model's beam. We also generate new n-best lists subject to gendered inflection constraints, and show this makes correctly gendered entities more common in n-best lists. We make no changes to the NMT model or training data, and require only monolingual resources for the source and target languages.
Related work
Prior work mitigating gender bias in NLP often involves adjusting training data, directly (Zhao et al., 2018) or via embeddings (Bolukbasi et al., 2016). Our inference-only approach is closer to work on controlling or 'correcting' gendered output.
Controlling gender translation generally involves introducing external information into the model. Miculicich Werlen and Popescu-Belis (2017) integrate cross-sentence coreference links into reranking to improve pronoun translation. Vanmassenhove et al. (2018) and Moryossef et al. (2019) incorporate sentence-level gender features into training data and during inference respectively. Token-level source gender tags are used by Stafanovičs et al. (2020) and Saunders et al. (2020). As in this prior work, our focus is applying linguistic gender-consistency information, rather than obtaining it.
A separate line of work treats gender-related inconsistencies as a search and correction problem. Roberts et al. (2020) find that beam search amplifies gender bias compared to sampling search. Saunders and Byrne (2020) rescore translations with a model fine-tuned for additional gender sensitivity, constraining outputs to gendered reinflections of the original. Related approaches for monolingual tasks reinflect whole-sentence gender (Habash et al., 2019; Alhafni et al., 2020; Sun et al., 2021). An important difference in our work is use of the same model for initial translation and reinflection, reducing computation and complexity.
Finding consistent gender in the beam
There are two elements to our proposed approach. First, we produce an n-best list of translations using our single model per language pair. We use either standard beam search or a two-pass approach where the second pass searches for differently-gendered versions of the highest likelihood initial translation. We then select a translation from the list, either by log likelihood or by how far the target language gender features correspond to the source sentence. We produce n-best lists in two ways. One option is standard beam search. Alternatively, we synthesize n-best lists using the gendered constraint scheme of Saunders and Byrne (2020), illustrated in Figure 1. This involves a second gender-constrained beam search pass to reinflect an initial hypothesis, producing a synthesized n-best list containing gendered alternatives of that hypothesis.
Gender-constrained n-best lists
The second reinflection pass uses a target language gender inflection transducer which defines grammatically gendered reinflections. For example, Spanish definite article el could be unchanged or reinflected to la, and profession noun médico could be reinflected to médica (and vice versa). Composing the reinflections with the original hypothesis generates a constrained hypothesis lattice.
We can now perform constrained beam search, which can encourage NMT to output specific vocabulary (Stahlberg et al., 2016; Khayrallah et al., 2017). The only difference from standard beam search is that gender-constrained search only expands translations forming paths in the constrained hypothesis lattice. In the Figure 1 example, beam-n search would produce the n most likely translations, while the gender-constrained pass would only produce the 4 translations in the lattice.
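To make the constrained second pass concrete, the sketch below shows one way the reinflection lattice and pruning could look in code. The reinflection table, the scoring function and the token names are illustrative assumptions rather than the actual transducers or NMT decoder used here.

```python
# A toy sketch of gender-constrained search over reinflections of one hypothesis.
# REINFLECTIONS stands in for the gender inflection transducer (an assumption);
# score_fn stands in for NMT log-likelihood scoring of a partial translation.
REINFLECTIONS = {
    "el": {"el", "la"},
    "la": {"el", "la"},
    "médico": {"médico", "médica"},
    "médica": {"médico", "médica"},
}

def reinflection_options(hypothesis_tokens):
    """Per-position alternatives: the original token plus any gendered variants."""
    return [sorted(REINFLECTIONS.get(tok, {tok})) for tok in hypothesis_tokens]

def constrained_beam_search(score_fn, initial_hypothesis, beam_size=4):
    """Second pass: only expand prefixes that stay on a path in the lattice."""
    options = reinflection_options(initial_hypothesis)
    beams = [((), 0.0)]  # (prefix, cumulative score)
    for opts in options:
        expanded = [
            (prefix + (tok,), score + score_fn(prefix + (tok,)))
            for prefix, score in beams
            for tok in opts  # expansions outside the lattice are never generated
        ]
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams

# With a dummy scorer, "el médico" yields the 4 gendered variants of the Figure 1 lattice.
print(constrained_beam_search(lambda prefix: 0.0, ["el", "médico"]))
```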
Importantly, for each language pair we use just one NMT model to produce gendered variations of its own hypotheses. Unlike Saunders and Byrne (2020) we do not reinflect translations with a separate gender-sensitive model. This removes the complexity, potential bias amplification and computational load of developing the gender-translation-specific models central to their approach.
While we perform two full inference passes to simplify implementation, further efficiency improvements are possible. For example, the source sentence encoding could be reused for the reinflection pass. In principle, some beam search constraints could be applied in the first inference pass, negating the need for two passes. These potential efficiency gains would not be possible if using a separate NMT model to reinflect the translations.
Reranking gendered translations
Algorithm 1 Gender-reranking an n-best list
Input: x: source sentence; Y: set of translation hypotheses for x; L: log likelihoods for all y ∈ Y; A: word alignments between x and all y
  p, p_g ← pronoun_and_gender(x)        ▷ or oracle
  e ← get_entity(x, p)                  ▷ or oracle
  for all y ∈ Y do
      y_score ← 0
      for all t ∈ A_y(e) do             ▷ translated entity
          t_g ← get_gender(t)
          if t_g = p_g then
              y_score += 1
          end if
      end for
  end for
  Ŷ ← {argmax_y(y_score, y ∈ Y)}
  ŷ ← argmax_y(L(y), y ∈ Ŷ)
  return ŷ
Figure 2: Complete workflow for a toy en-es example. We have two options for producing an n-best list - standard or gender-constrained search - and can then either take the highest likelihood output from the list, or rerank it.
We select an output translation from an n-best list in two ways, regardless of whether the list was produced by beam search or the two-pass approach. One option selects the highest-likelihood translation under the NMT model. Alternatively, we rerank for gender consistency with the source sentence. We focus on either oracle or inferred entities coreferent with a source pronoun.
The oracle case occurs in several scenarios. Oracle entity labels could be provided as for the WinoMT challenge set (Stanovsky et al., 2019). They could also be user-defined for known entities (Vanmassenhove et al., 2018), or if translating the same sentence with different entity genders to produce multiple outputs (Moryossef et al., 2019).
The inferred case determines entities automatically given a source pronoun 1 and its grammatical gender. We find coreferent entities using a target language coreference resolution tool in get_entity. For brevity Algorithm 1 is written for one entity per sentence: in practice there is no such limit.
For each entity we find the aligned translated entity, similar to Stafanovičs et al. (2020). We determine the translated entity's grammatical gender by target language morphological analysis in get_gender. Finally we rerank, first by source gender agreement, tie-breaking with log likelihood 2 .
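A compact Python rendering of Algorithm 1 might look as follows. The interfaces for word alignment and morphological gender lookup are placeholders for the external tools listed in the experimental setup, so the exact argument shapes are assumptions, and pronoun/entity extraction from the source (oracle or inferred) is done upstream.

```python
# Sketch of Algorithm 1: pick the hypothesis whose translated entities agree
# most often with the (oracle or inferred) source gender, breaking ties by
# NMT log-likelihood. Alignment and morphology interfaces are assumed.
def rerank(hypotheses, log_likelihoods, alignments,
           pronoun_gender, entity_positions, target_gender):
    """
    hypotheses:       list of tokenised candidate translations
    log_likelihoods:  NMT log-likelihood per candidate
    alignments:       per-candidate dict {source index -> iterable of target indices}
    pronoun_gender:   grammatical gender taken from the source pronoun (or oracle)
    entity_positions: source token indices coreferent with that pronoun
    target_gender:    callable mapping a target token to its grammatical gender
    """
    scored = []
    for hyp, ll, align in zip(hypotheses, log_likelihoods, alignments):
        agreement = sum(
            target_gender(hyp[t]) == pronoun_gender
            for src in entity_positions
            for t in align.get(src, ())
        )
        scored.append((agreement, ll, hyp))
    # rank by source-gender agreement first, then by model likelihood
    return max(scored, key=lambda s: (s[0], s[1]))[2]
```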
Experimental setup
We translate English into German, Spanish and Hebrew using Transformers (Vaswani et al., 2017). We train the en-de model on WMT19 news task data including filtered Paracrawl (Barrault et al., 2019), en-es on UNCorpus data (Ziemski et al., 2016), and en-he on the IWSLT corpus (Cettolo et al., 2014). For further training details see Appendix A.
Some proposed steps require tools or resources:
1) For gender-constrained search, creating gender inflection transducers; 2) For inferred-reranking, finding source gendered entities; 3) For all reranking, finding translated gendered entities; 4) For all reranking, getting translated entity genders. For 1) we use Spacy (Honnibal and Montani, 2017) and DEMorphy (Altinok, 2018) morphological analysis for Spanish and German, and fixed rules for Hebrew, on large vocabulary lists to produce gender transducers, following Saunders and Byrne (2020). The highest likelihood outputs from beam-4 search form the original hypothesis lattices. For 2) we use a RoBERTa model (Liu et al., 2019) tuned for coreference on Winograd challenge data. For 3) we use fast_align (Dyer et al., 2013). For 4) we use the same morphological analysis as in 1), now on translated entities.
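As one illustration of step 1), grammatically gendered word pairs could be grouped from a vocabulary list using morphological analysis. The sketch below uses spaCy's Spanish model as a stand-in for the Spacy/DEMorphy pipelines named above; the model name and the pairing heuristic (same lemma and part of speech, different gender) are assumptions for illustration, not the exact transducer construction.

```python
# Hedged sketch: build a word -> differently-gendered-forms table from a
# vocabulary list. The spaCy model name and the lemma/POS pairing heuristic
# are illustrative assumptions, not the exact transducer construction used.
from collections import defaultdict
import spacy

nlp = spacy.load("es_core_news_md")  # assumes the Spanish model is installed

def build_gender_table(vocabulary):
    by_key = defaultdict(dict)
    for word in vocabulary:
        token = nlp(word)[0]
        gender = token.morph.get("Gender")
        if gender:  # keep only forms the analyser marks as gendered
            by_key[(token.lemma_, token.pos_)][gender[0]] = word
    table = {}
    for forms in by_key.values():
        for g, word in forms.items():
            table[word] = {w for other_g, w in forms.items() if other_g != g}
    return table

# e.g. build_gender_table(["médico", "médica", "el", "la"]) would pair
# médico <-> médica whenever the analyser assigns both forms the same lemma.
```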
Results and discussion
Oracle entities
We first describe oracle-reranking n-best lists in Table 1, before proceeding to the more general scenario of inferred-reranking. Comparing lines 1 vs 2, gender-constrained beam-4 search taking the highest likelihood output scores similarly to standard beam-4 search for all metrics and language pairs. For beam-20 (5 vs 6) en-de and en-es, constraints do mitigate the BLEU degradation common with larger beams (Stahlberg and Byrne, 2019). In lines 1 vs 3, 5 vs 7, we oracle-rerank beam search outputs instead of choosing by highest likelihood. We see about 10% accuracy improvement relative to non-reranked beam-4 across languages, and over 25% relative improvement for beam-20. Combining oracle-reranking and constraints further boosts accuracy. This suggests constraints encourage presence of better gender translations in n-best lists, but that reranking is needed to extract them.
Using beam-20 significantly improves the performance of reranking. With constraints, beam-20 oracle-reranking gives absolute accuracy gains of about 20% over the highest likelihood beam search output. However, beam-4 shows most of the improvement over that baseline. We find diminishing returns as beam size increases (Appendix B), suggesting large, expensive beams are not necessary.
Inferred entities
We have shown accuracy improvements with oracle reranking, indicating that the synthesized n-best lists often contain a gender-accurate hypothesis. In Table 2, we explore inferred-reranking using a RoBERTa model, investigating whether that hypothesis can be found automatically. We find very little degradation in WinoMT accuracy when inferring entities compared to the oracle (Table 1). We hypothesise that difficult sentences are hard for both coreference resolution and NMT, so cases where RoBERTa disambiguates wrongly are also mistranslated with oracle information.
We are unable to oracle-rerank the generic test sets, since they have no oracle gender labels. However, we can tag them using RoBERTa for inferred-reranking. In Table 2 we find this has little or no impact on BLEU score, unsurprising for sets not designed to highlight potentially subtle gender translation effects. This suggests positively that our scheme does not impact general translation quality.
So far we have not changed the NMT model at all. In Table 3, for comparison, we investigate the approach of Saunders and Byrne (2020): tuning a model on a dataset of gendered profession sentences, then constrained-rescoring the original model's hypotheses. 5 We do indeed see strong gender accuracy improvements with this approach, but inferred-reranking the resulting models' n-best lists further improves scores. We also note that inferred reranking the baseline with beam size 20 (Table 2 line 4) outperforms non-reranked S&B, without requiring specialized profession-domain tuning data or any change to the model.
Reranking with named entities
At time of writing, published gender translation test sets focus on profession nouns, a domain we evaluate with WinoMT. However, our approach can also improve other aspects of gender translation. One of these is consistently gendering named entities. Sentences may contain gendered terminology with no pronouns, only named entities. Generic name-gender mappings are unreliable: many names are not gendered, and a name with a 'typical' gender may not correspond to an individual's gender. However, we may know the appropriate gendered terms to use for a specific named entity, for example from other sentences, a knowledge base, or user preference. With this information we can gender-rerank.
An example is given in Table 4. The English sentence contains no gendered pronoun, so is not covered by our default reranking algorithm. We know from previous sentences that Calderon should be referred to with the linguistic feminine, so we can rerank with known p g . The 'entities' e are the words referring to Calderon, including 'who', 'had' and 'led'. 6 Algorithm 1 proceeds over these entities, of which only 'who' is gendered in German, to extract a better gendered translation.
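Reusing the rerank sketch above, the named-entity case only changes where the gender comes from: it is supplied for Calderon rather than read from a source pronoun. The toy alignments, indices and gender lookup below are stand-ins for the real tool outputs.

```python
# Toy reuse of rerank() for the Table 4 example: "Calderon" is known to take
# feminine forms, and only the relative pronoun carries gender in German here.
hyps = [
    "Vallejo scheint nur knapp ausgegrenzt Calderon , der vor dem Wahltag Wahlen geführt hatte .".split(),
    "Vallejo scheint nur knapp ausgegrenzt Calderon , die vor dem Wahltag Wahlen geführt hatte .".split(),
]
lls = [-12.3, -14.6]
aligns = [{10: {7}}, {10: {7}}]          # toy: source "who" -> target "der"/"die"
toy_gender = {"der": "Masc", "die": "Fem"}.get
best = rerank(hyps, lls, aligns, pronoun_gender="Fem",
              entity_positions=[10], target_gender=toy_gender)
print(" ".join(best))  # selects the second, feminine candidate despite its lower likelihood
```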
Reranking with new gendered language
Another benefit of our approach is its flexibility in introducing new gendered vocabulary, e.g. as used by non-binary people. Developing a system to correctly produce new terms like neopronouns is itself an open research problem (Saunders et al., 2020). However, we can simulate such a system by editing existing WinoMT translations to contain gendered-term placeholders instead of binary gendered terms, and shuffling these translations into n-best lists. For example, where a German translation includes der Mitarbeiter, the employee (masculine), we substitute DEFNOM MitarbeiterNEND. This allows later replacement of DEFNOM by e.g. dier or NEND by _in (Heger, 2020), but remains flexible to preferences for new gendered language. We then define the new patterns for identification by the reranker.
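A toy version of this placeholder scheme is sketched below; the two rewrite rules and the realisation choices are illustrative assumptions rather than the full rule set used for the experiment.

```python
# Toy sketch of the placeholder scheme: binary gendered German noun phrases are
# rewritten with neutral placeholders, which can later be realised with whatever
# gendered (or non-binary) forms are preferred. The rules below are illustrative.
import re

PLACEHOLDER_RULES = [
    (re.compile(r"\bder (\w+)\b"),    r"DEFNOM \g<1>NEND"),  # der Mitarbeiter   -> DEFNOM MitarbeiterNEND
    (re.compile(r"\bdie (\w+?)in\b"), r"DEFNOM \g<1>NEND"),  # die Mitarbeiterin -> DEFNOM MitarbeiterNEND
]

def to_placeholders(sentence):
    for pattern, repl in PLACEHOLDER_RULES:
        sentence = pattern.sub(repl, sentence)
    return sentence

def realise(sentence, article="dier", noun_ending="_in"):
    """Fill the placeholders with one chosen convention, e.g. dier Mitarbeiter_in."""
    return sentence.replace("DEFNOM", article).replace("NEND", noun_ending)

print(realise(to_placeholders("der Mitarbeiter")))   # -> dier Mitarbeiter_in
```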
To evaluate reranking with new gendered language, we use 1826 neutral WinoMT sentences with they/them pronouns on the English side. We initialise the corresponding n-best lists with the masculine WinoMT German 20-best lists, and shuffle one 'placeholder' translation into each, giving them the average log likelihood of the whole list. We find that the reranker successfully extracts the correct placeholder-style sentences in 92% of cases. This demonstrates that if a system can generate some new gendered term, reranking can extract it from an n-best list with minimal adjustments.
Conclusions
This paper attempts to improve gender translation without a single change to the NMT model. We demonstrate that gender-constraining the target language during inference can encourage models to produce n-best lists with correct hypotheses. Moreover, we show that simple reranking heuristics can extract more accurate gender translations from the n-best lists using oracle or inferred information.
Unlike other approaches to this problem we do not attempt to counter unidentified and potentially intractable sources of bias in the training data, or produce new models. However, our approach does significantly boost the accuracy of a prior data-centric bias mitigation technique. In general we view our scheme as orthogonal to such approaches: if a model ranks diverse gender translations higher in the beam initially, finding better gender translations during beam search becomes simpler.
Acknowledgments
This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and performed using resources from the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service 7 funded by EPSRC Tier-2 capital grant EP/P020259/1.
Impact statement
Where machine translation is used in people's lives, mistranslations have the potential to misrepresent people. This is the case when personal characteristics like social gender conflict with model biases towards certain forms of grammatical gender. As mentioned in the introduction, the result can involve implicit misgendering of a user or other human referent, or perpetuation of social biases about gender roles as represented in the translation. A user whose words are translated with gender defaults that imply they hold such biased views will also be misrepresented.
We attempt to avoid these failure modes by identifying translations which are at least consistent within the translation and consistent with the source sentence. This is dependent on identifying grammatically gendered terms in the target language; however, this element is very flexible and can be updated for new gendered terminology. We note that models which do not account for variety in gender expression such as neopronoun use may not be capable of generating appropriate gender translations. However, we demonstrate that, if definable, a variety of gender translations can be extracted from the beam.
By avoiding the data augmentation, tuning and retraining elements in previously proposed approaches to gender translation, we simplify the process and remove additional stages where bias could be introduced or amplified (Shah et al., 2020).
In terms of compute time and power, we minimize impact by using a single GPU only for training the initial NMT models exactly once for the iterations listed in Appendix A. All other experiments involve inference or rescoring the outputs of those models and run in parallel on CPUs in under an hour, except the experiments following Saunders and Byrne (2020), an approach itself involving only minutes of GPU fine-tuning.
References
A Model training details
All NMT models are 6-layer Transformers with 30K BPE vocabularies (Sennrich et al., 2016), trained using Tensor2Tensor with batch size 4K (Vaswani et al., 2018). All data except Hebrew is truecased and tokenized using the Moses scripts (Koehn et al., 2007). The en-de model is trained for 300K batches, en-es for 150K batches, and en-he for 15K batches, transfer learning from the en-de model. We filter subworded data for maximum (80) and minimum (3) sentence lengths.
B Beam size for constrained reranking
In this paper we present results with beam sizes 4 and 20. Beam-4 search is commonly used and meets a speed-quality trade-off for NMT (see e.g. Junczys-Dowmunt et al. (2016)). Beam-20 is still practical, but approaches diminishing returns for quality without search error mitigation (Stahlberg and Byrne, 2019). These sizes therefore illustrate contrasting levels of practical reranking. However, it is instructive to explore what beam size is necessary to benefit from gender-constrained reranking.
In Figure 3 we report WinoMT accuracy under gender-constrained oracle reranking with beam width increasing by intervals of 4. For all systems, the largest jump in improvement is between beam sizes 4 and 8, with diminishing returns after beam-12. The en-de curve is relatively shallow, possibly due to strong scores before reranking, or even a performance ceiling determined by the WinoMT framework itself. Curves for en-he and en-es are very close, suggesting a similarity between the gender distribution in the n-best lists for those models.
C Constrained vs unconstrained beams
We can observe the difference between standard and constrained beam search by examining the n-best lists. Table 5 (next page) gives 5 examples of 4-best lists for WinoMT sentences translated into German. Examples are not cherry-picked but selected from throughout WinoMT with a random number generator. Lists are ordered by NMT model likelihood and produced with standard unconstrained beam search, and with constrained beam search.
With standard beam search, translations vary words unrelated to the entities, such as synonyms or verb tenses. However, entity grammatical genders are generally unchanged throughout the unconstrained n-best lists, except for 1 where the secondary entity changes. Reranking cannot always find a gender-consistent translation in the unconstrained lists, defaulting to the 1-best for all except 2 (which seems to have a poorly aligned hypothesis). By contrast, constrained beam search ensures the n-best list contains gendered inflections of the initial best-scoring translation. The changes vary the grammatical genders of articles and entities, resulting in more gender-diverse hypotheses, and allowing reranking to find a better translation for 1.
We note that in 3, 4 and 5 both the pronoun and the default gender convention for unknown gender entities are masculine. Reranking is not strictly necessary to find a better translation for these sentences, since the highest likelihood output is gender-consistent. However, we note that some outputs with gender constraints do in fact vary the gender of the secondary entity -the entity with unspecified gender. This illustrates our approach's ability to improve n-best list diversity even when it does not necessarily impact translation consistency.
We observe occasional grammatical inconsistencies in n-best hypotheses (e.g. "die Fahrer" in 3). When constraining beam search to grammatical variations of a sentence with an imperfect NMT model, we expect some hypotheses with grammatical degradation. However, our priority, and the purpose of our reranking scheme, is consistency with the source in the output translation, not inconsistencies elsewhere in the n-best list. Table 5: English-German 4-best lists for 5 randomly-selected WinoMT sentences, translated with normal beam search and gender-constrained beam search. Grammatically feminine human entities are underlined. Grammatically masculine human entities are emphasised. Lists are ordered by NMT model likelihood (first is 1best) -lines marked with * are those selected under oracle-reranking. 1: Constrained reranking finds a better gender translation that is not present in the unconstrained beam. 2: A better gendered translation is not found in either width-4 beam. Constraints still maintain semantic meaning throughout the beam while allowing syntactic variation, including a differently gendered secondary entity. 3, 4, 5: The highest likelihood output is acceptable. For 3 and 5 constraining the n-best list results in more gender variation.
Figure 1: Constraints for a toy initial hypothesis.
3. The highest likelihood outputs from beam-4 search form the original hypothesis lattices. For 2) we use a RoBERTa model (Liu et al., 2019) tuned for coreference on Winograd challenge data. For 3) we use fast_align (Dyer et al., 2013). For 4) we use the same morphological analysis as in 1), now on translated entities.
Figure 3: WinoMT accuracy after oracle-reranking gender-constrained n-best lists, varying n.
# | Beam | Gender constrain | Oracle rerank | en-de BLEU | en-de Acc | en-de ∆G | en-es BLEU | en-es Acc | en-es ∆G | en-he BLEU | en-he Acc | en-he ∆G
1 | 4  | × | × | 42.7 | 60.1 | 18.6 | 27.5 | 47.8 | 38.4 | 23.8 | 47.5 | 21.1
2 | 4  | ✓ | × | 42.7 | 59.1 | 20.1 | 27.8 | 48.3 | 36.2 | 23.8 | 47.4 | 21.5
3 | 4  | × | ✓ | -    | 66.5 | 10.1 | -    | 53.9 | 25.9 | -    | 52.0 | 16.8
4 | 4  | ✓ | ✓ | -    | 77.9 | -0.6 | -    | 55.7 | 22.3 | -    | 54.5 | 13.7
5 | 20 | × | × | 42.3 | 59.0 | 20.1 | 27.3 | 46.4 | 40.7 | 24.0 | 46.8 | 22.5
6 | 20 | ✓ | × | 42.7 | 59.0 | 20.3 | 27.8 | 48.3 | 36.2 | 23.8 | 47.3 | 21.7
7 | 20 | × | ✓ | -    | 74.3 |  2.4 | -    | 63.5 | 11.0 | -    | 59.3 | 11.2
8 | 20 | ✓ | ✓ | -    | 84.2 | -3.6 | -    | 66.3 |  8.1 | -    | 65.3 |  4.9

Table 1: Accuracy (%) and masculine/feminine F1 difference ∆G, oracle-reranking WinoMT. BLEU scores are for en-de WMT18, en-es WMT13, and en-he IWSLT14, which lack gender labels so cannot be oracle-reranked.
# | Beam | Gender constrain | Inferred rerank | en-de BLEU | en-de Acc | en-de ∆G | en-es BLEU | en-es Acc | en-es ∆G | en-he BLEU | en-he Acc | en-he ∆G
1 | 4  | × | ✓ | 42.7 | 65.9 | 10.7 | 27.5 | 52.6 | 28.1 | 23.8 | 51.3 | 17.0
2 | 4  | ✓ | ✓ | 42.7 | 76.4 |  0.5 | 27.8 | 53.9 | 24.6 | 23.8 | 53.6 | 14.4
3 | 20 | × | ✓ | 42.2 | 72.9 |  3.3 | 27.3 | 60.2 | 15.3 | 24.0 | 57.8 | 11.9
4 | 20 | ✓ | ✓ | 42.6 | 81.8 | -2.6 | 27.8 | 63.5 | 10.9 | 23.8 | 62.8 |  6.2

Table 2: Accuracy (%) and masculine/feminine F1 difference ∆G. Inferred-reranking with genders and entities for WinoMT and generic test sets determined by a RoBERTa model. Non-reranked results unchanged from Table 1.
Table 3: WinoMT accuracy inferred-reranking the adaptation scheme of Saunders and Byrne (2020).
Source: Vallejo appears to have only narrowly edged out Calderon, who had led polls before election day
-12.3   Vallejo scheint nur knapp ausgegrenzt Calderon, der vor dem Wahltag Wahlen geführt hatte.
-14.6 * Vallejo scheint nur knapp ausgegrenzt Calderon, die vor dem Wahltag Wahlen geführt hatte.
-24.3   Vallejo scheint nur knapp ausgegrenzt Calderon, der vor dem Wahltag Wahlern geführt hatte.
-26.5   Vallejo scheint nur knapp ausgegrenzt Calderon, die vor dem Wahltag Wahlern geführt hatte.
Table 4: Sentence from WMT newstest12 with gender-constrained n-best list and NLL scores. Words like 'who' coreferent with 'Calderon' become entities for Algorithm 1, which finds a better gendered translation (*).
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630-1640, Florence, Italy. Association for Computational Linguistics.

Tony Sun, Kellie Webster, Apu Shah, William Yang Wang, and Melvin Johnson. 2021. They, them, theirs: Rewriting with gender-neutral English. arXiv preprint arXiv:2102.06788.

Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003-3008, Brussels, Belgium. Association for Computational Linguistics.

Bashar Alhafni, Nizar Habash, and Houda Bouamor.
2020. Gender-aware reinflection using linguistically
enhanced neural models. In Proceedings of the Sec-
ond Workshop on Gender Bias in Natural Language
Processing, pages 139-150, Barcelona, Spain (On-
line). Association for Computational Linguistics.
Duygu Altinok. 2018. DEMorphy, German lan-
guage morphological analyzer. arXiv preprint
arXiv:1803.00902.
Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà,
Christian Federmann, Mark Fishel, Yvette Gra-
ham, Barry Haddow, Matthias Huck, Philipp Koehn,
Shervin Malmasi, Christof Monz, Mathias Müller,
Santanu Pal, Matt Post, and Marcos Zampieri. 2019.
Findings of the 2019 conference on machine trans-
lation (WMT19). In Proceedings of the Fourth Con-
ference on Machine Translation (Volume 2: Shared
Task Papers, Day 1), pages 1-61, Florence, Italy. As-
sociation for Computational Linguistics.
Christine Basta, Marta R. Costa-jussà, and José A. R.
Fonollosa. 2020. Towards mitigating gender bias in
a decoder-based neural machine translation model by
adding contextual information. In Proceedings of the
The Fourth Widening Natural Language Processing
Workshop, pages 99-102, Seattle, USA. Association
for Computational Linguistics.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou,
Venkatesh Saligrama, and Adam T Kalai. 2016. Man
is to computer programmer as woman is to home-
maker? Debiasing word embeddings. In Advances in
neural information processing systems, pages 4349-
4357.
Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa
Bentivogli, and Marcello Federico. 2014. Report
on the 11th IWSLT evaluation campaign, IWSLT
2014. In Proceedings of the International Workshop
on Spoken Language Translation, Hanoi, Vietnam,
page 57.
Chris Dyer, Victor Chahuneau, and Noah A. Smith.
2013. A simple, fast, and effective reparameteriza-
tion of IBM model 2. In Proceedings of the 2013
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, pages 644-648, Atlanta,
Georgia. Association for Computational Linguistics.
Joel Escudé Font and Marta R. Costa-jussà. 2019.
Equalizing gender bias in neural machine translation
with word embeddings techniques. In Proceedings of
the First Workshop on Gender Bias in Natural Lan-
guage Processing, pages 147-154, Florence, Italy.
Association for Computational Linguistics.
Nizar Habash, Houda Bouamor, and Christine Chung.
2019. Automatic gender identification and reinflec-
tion in Arabic. In Proceedings of the First Workshop
on Gender Bias in Natural Language Processing,
pages 155-165, Florence, Italy. Association for Com-
putational Linguistics.
Illi Anna Heger. 2020. Version 3.3: xier pronomen ohne
geschlecht. (accessed: Mar 2022).
Matthew Honnibal and Ines Montani. 2017. spaCy 2:
Natural language understanding with bloom embed-
dings. Convolutional Neural Networks and Incre-
mental Parsing.
Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu
Hoang. 2016. Is neural machine translation ready
for deployment? a case study on 30 translation direc-
tions. In Proceedings of the International Workshop
on Spoken Language Translation 2016, volume 1.
13th International Workshop on Spoken Language
Translation 2016, IWSLT 2016.
Huda Khayrallah, Gaurav Kumar, Kevin Duh, Matt Post,
and Philipp Koehn. 2017. Neural lattice search for
domain adaptation in machine translation. In Pro-
ceedings of the Eighth International Joint Conference
on Natural Language Processing (Volume 2: Short
Papers), pages 20-25, Taipei, Taiwan. Asian Federa-
tion of Natural Language Processing.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris
Callison-Burch, Marcello Federico, Nicola Bertoldi,
Brooke Cowan, Wade Shen, Christine Moran,
Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra
Constantin, and Evan Herbst. 2007. Moses: Open
source toolkit for statistical machine translation. In
Proceedings of the 45th Annual Meeting of the As-
sociation for Computational Linguistics Companion
Volume Proceedings of the Demo and Poster Sessions,
pages 177-180, Prague, Czech Republic. Association
for Computational Linguistics.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao,
and Bill Dolan. 2016. A diversity-promoting ob-
jective function for neural conversation models. In
Proceedings of the 2016 Conference of the North
American Chapter of the Association for Computa-
tional Linguistics: Human Language Technologies,
pages 110-119, San Diego, California. Association
for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining
approach. arXiv preprint arXiv:1907.11692.
Lesly Miculicich Werlen and Andrei Popescu-Belis.
2017. Using coreference links to improve Spanish-
to-English machine translation. In Proceedings of
the 2nd Workshop on Coreference Resolution Beyond
OntoNotes (CORBON 2017), pages 30-40, Valencia,
Spain. Association for Computational Linguistics.
Amit Moryossef, Roee Aharoni, and Yoav Goldberg.
2019. Filling gender & number gaps in neural ma-
chine translation with black-box context injection. In
Proceedings of the First Workshop on Gender Bias
in Natural Language Processing, pages 49-54, Flo-
rence, Italy. Association for Computational Linguis-
tics.
Matt Post. 2018. A call for clarity in reporting BLEU
scores. In Proceedings of the Third Conference on
Machine Translation: Research Papers, pages 186-
191, Belgium, Brussels. Association for Computa-
tional Linguistics.
Adithya Renduchintala, Denise Diaz, Kenneth Heafield,
Xian Li, and Mona Diab. 2021. Gender bias ampli-
fication during speed-quality optimization in neural
machine translation. In Proceedings of the 59th An-
nual Meeting of the Association for Computational
Linguistics and the 11th International Joint Confer-
ence on Natural Language Processing (Volume 2:
Short Papers), pages 99-109, Online. Association for
Computational Linguistics.
Nicholas Roberts, Davis Liang, Graham Neubig,
and Zachary C Lipton. 2020. Decoding and di-
versity in machine translation. arXiv preprint
arXiv:2011.13477.
Danielle Saunders and Bill Byrne. 2020. Reducing gen-
der bias in neural machine translation as a domain
adaptation problem. In Proceedings of the 58th An-
nual Meeting of the Association for Computational
Linguistics, pages 7724-7736, Online. Association
for Computational Linguistics.
Danielle Saunders, Rosie Sallis, and Bill Byrne. 2020.
Neural machine translation doesn't translate gender
coreference right unless you make it. In Proceedings
of the Second Workshop on Gender Bias in Natural
Language Processing, pages 35-43, Barcelona, Spain
(Online). Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
2016. Neural machine translation of rare words with
subword units. In Proceedings of the 54th Annual
Meeting of the Association for Computational Lin-
guistics (Volume 1: Long Papers), pages 1715-1725,
Berlin, Germany. Association for Computational Lin-
guistics.
Deven Santosh Shah, H. Andrew Schwartz, and Dirk
Hovy. 2020. Predictive biases in natural language
processing models: A conceptual framework and
overview. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics,
pages 5248-5264, Online. Association for Computa-
tional Linguistics.
Artūrs Stafanovičs, Toms Bergmanis, and Mārcis Pinnis.
2020. Mitigating gender bias in machine translation
with target gender annotations. In Proceedings of the
Fifth Conference on Machine Translation (WMT).
Felix Stahlberg and Bill Byrne. 2019. On NMT search
errors and model errors: Cat got your tongue? In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 3356-
3362, Hong Kong, China. Association for Computa-
tional Linguistics.
Felix Stahlberg, Eva Hasler, Aurelien Waite, and Bill
Byrne. 2016. Syntactically guided neural machine
translation. In Proceedings of the 54th Annual Meet-
ing of the Association for Computational Linguistics
(Volume 2: Short Papers), pages 299-305, Berlin,
Germany. Association for Computational Linguis-
tics.
Gabriel Stanovsky, Noah A. Smith, and Luke Zettle-
moyer. 2019. Evaluating gender bias in machine
translation. In Proceedings of the 57th Annual Meet-
ing of the Association for Computational Linguistics,
pages 1679-1684, Florence, Italy. Association for
Computational Linguistics.
Md Arafat Sultan, Shubham Chandel, Ramón Fernan-
dez Astudillo, and Vittorio Castelli. 2020. On the
importance of diversity in question generation for
QA. In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics, pages
5651-5656, Online. Association for Computational
Linguistics.
... length, and length ratio.

Footnotes:
- We evaluate cased BLEU on WMT18 (en-de, 3K sentences), WMT13 (en-es, 3K sentences) and IWSLT14 (en-he, 962 sentences). For validation during NMT model training we use earlier test sets from the same tasks.
- In 4.3 we show this could also be a source named entity.
- Reranking code and n-best lists at https://github.com/DCSaunders/nmt-gender-rerank
- Scripts and data for lattice construction as in Saunders and Byrne (2020) provided at https://github.com/DCSaunders/gender-debias
- Model from https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc
- Different scores from the original work may be due to variations in hyperparameters, or WinoMT updates.
- Extracted using RoBERTa coreference model; future work might explore use of a lightweight dependency parser.
- http://www.hpc.cam.ac.uk
Eva Vanmassenhove, Dimitar Shterionov, and Andy Way. 2019. Lost in translation: Loss and decay of linguistic richness in machine translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 222-232, Dublin, Ireland. European Association for Machine Translation.

Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for neural machine translation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 193-199, Boston, MA. Association for Machine Translation in the Americas.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979-2989, Copenhagen, Denmark. Association for Computational Linguistics.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana. Association for Computational Linguistics.

Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 3530-3534, Portorož, Slovenia. European Language Resources Association (ELRA).
1. Source: The broker called the client because she had new stocks to recommend.
   Unconstrained:
   * Der Makler rief den Kunden an, weil sie neue Aktien zu empfehlen hatte.
     Der Makler rief die Klientin an, weil sie neue Aktien zu empfehlen hatte.
     Der Makler rief den Klienten an, weil sie neue Aktien zu empfehlen hatte.
     Der Makler rief den Kunden an, weil er neue Aktien zu empfehlen hatte.
   Constrained:
     Der Makler rief den Kunden an, weil sie neue Aktien zu empfehlen hatte.
     Der Makler rief den Kunden an, weil er neue Aktien zu empfehlen hatte.
   * Die Maklerin rief den Kunden an, weil sie neue Aktien zu empfehlen hatte.
     Der Makler rief die Kunden an, weil sie neue Aktien zu empfehlen hatte.

2. Source: The teacher explained the situation to the chief and felt understood by her.
   Unconstrained:
     Die Lehrerin erklärte die Situation dem Chef und fühlte sich von ihr verstanden.
   * Die Lehrerin erklärte dem Obersten die Situation und fühlte sich von ihr verstanden.
     Die Lehrerin erklärte die Situation dem Obersten und fühlte sich von ihr verstanden.
     Die Lehrerin erläuterte die Situation dem Obersten und fühlte sich von ihr verstanden.
   Constrained:
   * Die Lehrerin erklärte die Situation dem Chef und fühlte sich von ihr verstanden.
     Der Lehrer erklärte die Situation dem Chef und fühlte sich von ihr verstanden.
     Der Lehrer erklärte die Situation dem Chef und fühlte sich von ihm verstanden.
     Die Lehrerin erklärte die Situation dem Chef und fühlte sich von ihm verstanden.

3. Unconstrained:
   * Der Assistent stellte den Fahrer ein, da er einen ausgezeichneten Service bot.
     Der Assistent stellte den Fahrer ein, da er einen exzellenten Service bot.
     Der Assistent stellte den Fahrer ein, da er einen hervorragenden Service bot.
     Der Assistent stellte den Fahrer ein, als er einen ausgezeichneten Service bot.
   Constrained:
   * Der Assistent stellte den Fahrer ein, da er einen ausgezeichneten Service bot.
     Die Assistentin stellte den Fahrer ein, da er einen ausgezeichneten Service bot.
     Der Assistent stellte die Fahrer ein, da er einen ausgezeichneten Service bot.
     Der Assistent stellte den Fahrer ein, da er eine ausgezeichnete Service bot.

4. Unconstrained:
   * Der Arbeiter übergab den Antrag an den Herausgeber, weil er den Job will.
     Der Arbeiter übergab die Bewerbung an den Herausgeber, weil er die Arbeit will.
     Der Arbeiter übergab die Bewerbung an den Herausgeber, weil er den Job will.
     Der Arbeiter überreichte die Bewerbung an den Herausgeber, weil er die Arbeit will.
   Constrained:
   * Der Arbeiter übergab den Antrag an den Herausgeber, weil er den Job will.
     Der Arbeiter übergab den Antrag an den Herausgeber, weil er diesen Job will.
     Der Arbeiter übergab den Antrag an den Herausgeber, weil er die Job will.
     Der Arbeiter übergab die Antrag an den Herausgeber, weil er die Job will.

5. Unconstrained:
   * Der Entwickler konnte nicht mit dem Schriftsteller kommunizieren, weil er nur den Code versteht.
     Der Entwickler konnte nicht mit dem Autor kommunizieren, weil er nur den Code versteht.
     Der Entwickler war nicht in der Lage, mit dem Schriftsteller zu kommunizieren, weil er nur den Code versteht.
     Der Entwickler war nicht in der Lage, mit dem Autor zu kommunizieren, weil er nur den Code versteht.
   Constrained:
   * Der Entwickler konnte nicht mit dem Schriftsteller kommunizieren, weil er nur den Code versteht.
     Der Entwickler konnte nicht mit der Schriftstellerin kommunizieren, weil er nur den Code versteht.
     Der Entwickler konnte nicht mit dem Schriftsteller kommunizieren, weil er nur die Code versteht.
     Der Entwickler konnte nicht mit dem Schriftsteller kommunizieren, weil er nur diesen Code versteht.
| [
"https://github.com/pytorch/"
] |
[
"CLASSIFIER COMBINATION APPROACH FOR QUESTION CLASSIFICATION FOR BENGALI QUESTION ANSWERING SYSTEM",
"CLASSIFIER COMBINATION APPROACH FOR QUESTION CLASSIFICATION FOR BENGALI QUESTION ANSWERING SYSTEM"
] | [
"Somnath Banerjee ",
"Sudip Kumar Naskar ",
"Paolo Rosso ",
"Sivaji Bandyopadhyay ",
"\nCSE Department\nCSE Department\nJadavpur University\nIndia\n",
"\nPRHLT Research Center Universitat Politècnica de València\nCSE Department\nJadavpur University\nIndia, Spain\n",
"\nJadavpur University\nIndia\n"
] | [
"CSE Department\nCSE Department\nJadavpur University\nIndia",
"PRHLT Research Center Universitat Politècnica de València\nCSE Department\nJadavpur University\nIndia, Spain",
"Jadavpur University\nIndia"
] | [] | Question classification (QC) is a prime constituent of automated question answering system. The work presented here demonstrates that the combination of multiple models achieve better classification performance than those obtained with existing individual models for the question classification task in Bengali. We have exploited state-of-the-art multiple model combination techniques, i.e., ensemble, stacking and voting, to increase QC accuracy. Lexical, syntactic and semantic features of Bengali questions are used for four well-known classifiers, namely Naïve Bayes, kernel Naïve Bayes, Rule Induction, and Decision Tree, which serve as our base learners. Single-layer question-class taxonomy with 8 coarse-grained classes is extended to two-layer taxonomy by adding 69 fine-grained classes. We carried out the experiments both on single-layer and two-layer taxonomies. Experimental results confirmed that classifier combination approaches outperform single classifier classification approaches by 4.02% for coarse-grained question classes. Overall, the stacking approach produces the best results for fine-grained classification and achieves 87.79% of accuracy. The approach presented here could be used in other Indo-Aryan or Indic languages to develop a question answering system.Keywords Bengali question classification · question classification · classifier combinations independently, namely, Naïve Bayes, kernel Naïve Bayes, Rule Induction and Decision Tree. Both theoretical[22,23]and empirical[24][25][26]studies confirm that the classifier combination approach is generally more accurate than any of the individual classifiers making up the ensemble. Furthermore, a number of studies[27,30]were successfully carried out on classifier combination methods for the QC task which outperformed the individual classifiers. Therefore, we consider classifier combination for classifying Bengali questions. To the best of our knowledge, classifier combination methods have not been employed for the QC task for Indian languages, prior to the work reported in this paper. As discussed earlier, mainly rule-based approach was employed for the QC task along with individual classifier based approach. Furthermore, no research work can be found in the literature for fine-grained question classification in Bengali. The deep learning framework performs well when large datasets are available for training and the framework is less effective than traditional machine learning approaches when the training datasets are small in size. In this work, we deal with a dataset which has only 1,100 samples. Therefore, we prefer classifier combination approach over deep learning. Li and Roth[15]and Lee et al[51]proposed 50 and 62 fine grained classes for English and Chinese QC respectively. In our work, we proposed 69 fine grained question classes to develop a two-layer taxonomy for Bengali QC. | 10.1007/s12046-019-1224-8 | [
"https://arxiv.org/pdf/2008.13597v2.pdf"
] | 210,720,385 | 2008.13597 | b77c7bcfd5b0ac684c3aa0b68b489a6208b68139 |
CLASSIFIER COMBINATION APPROACH FOR QUESTION CLASSIFICATION FOR BENGALI QUESTION ANSWERING SYSTEM
6 Sep 2020 September, 2019
Somnath Banerjee
Sudip Kumar Naskar
Paolo Rosso
Sivaji Bandyopadhyay
CSE Department
CSE Department
Jadavpur University
India
PRHLT Research Center Universitat Politècnica de València
CSE Department
Jadavpur University
India, Spain
Jadavpur University
India
CLASSIFIER COMBINATION APPROACH FOR QUESTION CLASSIFICATION FOR BENGALI QUESTION ANSWERING SYSTEM
6 Sep 2020 September, 2019
Question classification (QC) is a prime constituent of automated question answering system. The work presented here demonstrates that the combination of multiple models achieve better classification performance than those obtained with existing individual models for the question classification task in Bengali. We have exploited state-of-the-art multiple model combination techniques, i.e., ensemble, stacking and voting, to increase QC accuracy. Lexical, syntactic and semantic features of Bengali questions are used for four well-known classifiers, namely Naïve Bayes, kernel Naïve Bayes, Rule Induction, and Decision Tree, which serve as our base learners. Single-layer question-class taxonomy with 8 coarse-grained classes is extended to two-layer taxonomy by adding 69 fine-grained classes. We carried out the experiments both on single-layer and two-layer taxonomies. Experimental results confirmed that classifier combination approaches outperform single classifier classification approaches by 4.02% for coarse-grained question classes. Overall, the stacking approach produces the best results for fine-grained classification and achieves 87.79% of accuracy. The approach presented here could be used in other Indo-Aryan or Indic languages to develop a question answering system.Keywords Bengali question classification · question classification · classifier combinations independently, namely, Naïve Bayes, kernel Naïve Bayes, Rule Induction and Decision Tree. Both theoretical[22,23]and empirical[24][25][26]studies confirm that the classifier combination approach is generally more accurate than any of the individual classifiers making up the ensemble. Furthermore, a number of studies[27,30]were successfully carried out on classifier combination methods for the QC task which outperformed the individual classifiers. Therefore, we consider classifier combination for classifying Bengali questions. To the best of our knowledge, classifier combination methods have not been employed for the QC task for Indian languages, prior to the work reported in this paper. As discussed earlier, mainly rule-based approach was employed for the QC task along with individual classifier based approach. Furthermore, no research work can be found in the literature for fine-grained question classification in Bengali. The deep learning framework performs well when large datasets are available for training and the framework is less effective than traditional machine learning approaches when the training datasets are small in size. In this work, we deal with a dataset which has only 1,100 samples. Therefore, we prefer classifier combination approach over deep learning. Li and Roth[15]and Lee et al[51]proposed 50 and 62 fine grained classes for English and Chinese QC respectively. In our work, we proposed 69 fine grained question classes to develop a two-layer taxonomy for Bengali QC.
Proposed Question Taxonomies
The set of question categories is referred to as a question taxonomy or question ontology. Since Bengali question classification is at an early stage of development, a single-layer taxonomy for Bengali question types was initially proposed in [7]; for simplicity it consists of only eight coarse-grained classes and no fine-grained classes. No other investigations of coarse-grained Bengali taxonomies have been carried out to date. Later, fine-grained question classes were proposed in [52] based on the coarse-grained classes in [7]. Table 1 presents the Bengali question taxonomy proposed in [7,52]. The taxonomy proposed by Li and Roth [15] contains 6 coarse-grained classes: Abbreviation, Description, Entity, Human, Location, Numeric. The Abbreviation and Description classes of [15] are not present in the Bengali taxonomy. Two coarse-grained classes of [15], namely Entity and Human, resemble Miscellaneous and Person respectively in the Bengali taxonomy. While the Location and Number classes are present in both taxonomies, the Organization and Method classes are not present in [15]. The 2-layer Bengali taxonomy omits 15 fine-grained classes of [15], namely abbreviation, expression, definition, description, manner, reason, event, letter, substance, title, description, state, code, distance, order.
All the coarse-grained classes of Lee et al [51] are present in the Bengali taxonomy. However, the Method class of the Bengali taxonomy is not present in [51]. The Artifact class of [51] is similar to Definition and Miscellaneous in the Bengali taxonomy. The 2-layer Bengali taxonomy does not include 9 fine-grained classes of [51], namely firstperson, planet, province, political system, substance, range, number, range, order.
Five fine-grained classes are introduced in the Bengali taxonomy that are not present in [15] and [51]: AGE, NATURAL, ARTIFICIAL, INSTRUMENTAL, and NON-INSTRUMENTAL. The NATURAL and ARTIFICIAL fine-grained classes belong to the Method coarse-grained class, which is not present in [15] and [51]. Similarly, the INSTRUMENTAL and NON-INSTRUMENTAL fine-grained classes belong to the Reason coarse-grained class, which is also not present in [15] and [51]. The AGE fine-grained class belongs to the Numerical coarse class.
The taxonomies proposed in [15] and [51] did not deal with causal and procedural questions. The proposed 2-layer Bengali taxonomy is based on the only available Bengali QA dataset [7], which contains causal and procedural questions; therefore, the Bengali taxonomy includes question classes for causal and procedural questions. A few fine-grained classes of [15] and [51] are not included in the taxonomy because such questions are not present in the Bengali QA dataset. However, the proposed Bengali taxonomy is not final for the Bengali QA task. Work on increasing the size of the dataset is still in progress, so it is expected that the missing fine-grained classes will be incorporated in the taxonomy in future.
Features for Question Classification
In the task of machine learning based QC, deciding the optimal set of features to train the classifiers is crucial. The features used for the QC task can be broadly categorized into three different types: lexical, syntactic and semantic features [53]. In the present work, we also employed these three types of features suitable for the Bengali QC task.
Loni et al [53] represented questions for the QC task similar to document representation in the vector space model, i.e., a question is represented as a vector described by the words inside it. Therefore, a question Q i can be represented as below:
Q_i = (W_i1, W_i2, W_i3, ..., W_i(N-1), W_iN)

where W_ik is the frequency of term k in question Q_i, and N is the total number of terms.
Due to the sparseness of the feature vector, only non-zero valued features are kept. Therefore, the size of the samples is quite small despite the huge size of feature space. All lexical, syntactic and semantic features can be added to the feature space which expands the feature vector.
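As a minimal sketch of this sparse representation (assuming simple whitespace tokenisation, which is an illustrative simplification), a question can be stored as a dictionary that keeps only its non-zero term frequencies:

```python
from collections import Counter

def question_vector(question: str) -> dict:
    """Sparse bag-of-words representation: term -> frequency, zero entries omitted."""
    return dict(Counter(question.split()))   # whitespace tokenisation, for illustration only

# Transliterated Bengali question from the paper's examples:
print(question_vector("ke gOdZa prawiRTA karena ?"))
# {'ke': 1, 'gOdZa': 1, 'prawiRTA': 1, 'karena': 1, '?': 1}
```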
In the present study, the features employed for classifying questions (cf. Table 1) are described in the following subsections. In addition to the features used for the coarse-grained classification, fine-grained classification uses an additional feature, namely coarse-class, i.e. label of the coarse-grained class.
Lexical Features
Lexical features (f L ) of a question are extracted from the words appearing in the question. Lexical features include interrogative-words, interrogative-word-positions, interrogative-type, question-length, end-marker and word-shape.
• Interrogative-words and interrogative-word positions: The interrogative-words (e.g., what, who, which etc.) of a question are important lexical features. They are often referred to as wh-words. Huang et al [13,54] showed that considering question interrogative-word(s) as a feature can improve the performance of question classification task for English QA. Because of the relatively free word-ordering in Bengali, interrogative-words might not always appear at the beginning of the sentence, as in English. Therefore, the position of the interrogative (wh) words along with the interrogative words themselves have been considered as the lexical features. The position value is based on the appearance of the interrogative word in the question text and it can have any of the three values namely, first, middle and last.
• Interrogative-type: Unlike in English, there are many interrogatives in the Bengali language; twenty six Bengali interrogatives were reported in [7]. In the present work, the Bengali interrogative-type (wh-type) is considered as another lexical feature. In [7], the authors concluded that Bengali interrogatives not only provide important information about the expected answers but also indicate the number information (i.e., singular vs plural). In [7], wh-type was classified into three categories: Simple Interrogative (SI) or Unit Interrogative (UI), Dual Interrogative (DI) and Compound/Composite Interrogative (CI).
• Question length: Blunsom et al [55] introduced the length of a question as an important lexical feature which is simply the number of words in a question. We also considered this feature for the present study.
• End marker: The end marker plays an important role in the Bengali QC task. A Bengali question ends with either '?' or '|'. It has been observed from the experimental corpus that if the end marker is '|' (similar to the dot (.) in English), then the given question is a definition question.
• Word shape: The word shape of each question word is considered as a feature. Word shapes refer to apparent properties of single words. Huang et al [13] introduced five categories for word shapes: all digits, lower case, upper case, mixed and other. Word shape alone is not a good feature for QC, however, when it is combined with other kinds of features, it usually improves the accuracy of QC [13,53]. Capitalization feature is not present in Bengali; so we have considered only the other three categories, i.e., all digits, mixed and other.
Example-1: ke gOdZa prawiRTA karena ?
Gloss: Who established Goura?
Lexical features: wh-word: ke; wh-word position: first; wh-type: SI; question length: 5; end marker: ? word shape: other
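The lexical features of Example-1 could be collected along the following lines. This is only a sketch: the interrogative list is a partial, illustrative subset of the 26 Bengali interrogatives described in [7], and the SI/DI/CI typing rule here is a simplification of the actual interrogative-type classification.

```python
# Partial interrogative list, taken from the transliterated examples in this paper.
WH_WORDS = {"ke", "ki", "kawa", "keVna", "koWAyZa", "koVna"}

def word_shape(token: str) -> str:
    """Three word-shape categories used for Bengali: all digits, mixed, other."""
    if token.isdigit():
        return "all-digits"
    if any(ch.isdigit() for ch in token):
        return "mixed"
    return "other"

def lexical_features(tokens):
    feats = {"question_length": len(tokens),
             "end_marker": tokens[-1],
             "word_shapes": [word_shape(t) for t in tokens]}
    wh_positions = [i for i, tok in enumerate(tokens) if tok in WH_WORDS]
    if wh_positions:
        i = wh_positions[0]
        feats["wh_word"] = tokens[i]
        feats["wh_position"] = ("first" if i == 0
                                else "last" if i >= len(tokens) - 2   # just before the end marker
                                else "middle")
        # Simplified typing: one interrogative -> SI, two -> DI, more -> CI.
        feats["wh_type"] = {1: "SI", 2: "DI"}.get(len(wh_positions), "CI")
    return feats

print(lexical_features("ke gOdZa prawiRTA karena ?".split()))
# wh-word 'ke' in first position, type SI, length 5, end marker '?'
```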
Syntactic Features
Although different works extracted several syntactic features (f S ), the most commonly used f S are Part of Speech (POS) tags and head words [8].
• POS tags: In the present work, we used the POS tag of each word in a question, such as NN (Noun), JJ (adjective), etc. The POS of each question word is added to the feature vector. A similar approach was successfully used for English [15,55]. This feature space is sometimes referred to as the bag-of-POS tags [53]. The Tagged-Unigram (TU) feature was formally introduced by [53]; TU features are simply the unigrams augmented with POS tags. Loni et al [53] showed that considering the tagged-unigrams instead of normal unigrams can help the classifier to distinguish a word with different tags as two different features. For extracting the POS tags, the proposed classification work in Bengali uses a Bengali Shallow Parser, which produces POS-tagged data as an intermediate result.
• Question head word: Question head-word is the most informative word in a question as it specifies the object the question is looking for [13]. Correctly identifying head-words can significantly improve the classification accuracy. For example, in the question "What is the oldest city in Canada?" the headword is 'city'. The word 'city' in this question can highly contribute to classify this question as LOC: city.
Identifying the question's head-word is very challenging in Bengali because of its syntactic nature and no research has been conducted so far on this. Based on the position of the interrogative in the question, we use heuristics to identify the question head-words. According to the position of the interrogative, three cases are possible.
-Position-I (at the beginning): If the question-word (i.e., marked by WQ tag) appears at the beginning then the first NP chunk after the interrogative-word is considered as the head-word of the question. Let us consider the following question.
Example-2: ke(/WQ) gOdZa(/NNP) prawiRTA(/NN) karena(/VM) ?(/SYM)
English Gloss: Who established Goura ?
In the above example, gOdZa is the head-word.
-Position-II (in between): If the position of the question-word is neither at the beginning or at the end then the immediate NP-chunk before the interrogative-word is considered as the head-word. Let us consider the following question.
Example-3: gOdZa(/NNP) koWAyZa(/WQ) abashiwa(/JJ) ?(/SYM)
English Gloss: Where is Goura situated ?
In the above example gOdZa is considered as the question head-word.
-Position-III (at the end): If the question-word appears at the end (i.e., just before the end-of-sentence marker) then the immediate NP-chunk before the interrogative-word is considered as the question head-word. Therefore, a similar action is taken for Positions II and III, as sketched below.
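The sketch below illustrates this position-based head-word heuristic, assuming the question has already been POS-tagged and chunked; the `WQ` and `NP` labels and the flat (text, label) input format are illustrative stand-ins for the shallow-parser output.

```python
def head_word(chunks):
    """chunks: (text, label) pairs in sentence order, with 'WQ' marking the
    interrogative word and 'NP' marking noun-phrase chunks."""
    wq = [i for i, (_, label) in enumerate(chunks) if label == "WQ"]
    if not wq:
        return None
    i = wq[0]
    if i == 0:
        # Position-I: first NP chunk after the interrogative word
        after = [text for text, label in chunks[i + 1:] if label == "NP"]
        return after[0] if after else None
    # Position-II / Position-III: immediate NP chunk before the interrogative word
    before = [text for text, label in chunks[:i] if label == "NP"]
    return before[-1] if before else None

print(head_word([("ke", "WQ"), ("gOdZa", "NP"), ("prawiRTA", "NP"), ("karena", "VM"), ("?", "SYM")]))
# gOdZa  (Example-2)
print(head_word([("gOdZa", "NP"), ("koWAyZa", "WQ"), ("abashiwa", "JJ"), ("?", "SYM")]))
# gOdZa  (Example-3)
```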
Semantic Features
Semantic features (f M ) are extracted based on the semantics of the words in a question. In this study, related words and named entities are used as f M .
• Related word: A Bengali synonym dictionary is used to retrieve the related words. Three lists of related words were manually prepared by analyzing the training data. If a question word belongs to any of the three lists (namely date, food, human activity), then its category name is added to the feature vector. For instance, the question "ke gedZera sbAXIna narapawi Cilena ?" (gloss: who was the independent ruler of Goura ?) contains the word narapawi which belongs to the human authority list. For this example question the semantic feature is added to the feature vector as: [{human-authority, 1}].
• Named entities: We used named entities (NE) as a semantic feature which was also recommended in other works [15,55] on other languages. To identify the Bengali named entities in the question text, a Margin Infused Relaxed Algorithm (MIRA) based Named Entity Recognizer (NER) [57] is used for the present study. For the Example-5 question, the NE semantic feature is added to the feature vector as: [Location, 1].
Example-5: ke gOdZa[Location] prawiRTA karena?
English Gloss: Who established Goura ?
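Combining the two semantic features, a small sketch of how they could be added to the feature vector follows; the related-word list contents and the named-entity interface are placeholders rather than the actual resources used in the experiments.

```python
# Illustrative list contents only; the actual lists were compiled manually from the training data.
RELATED_WORD_LISTS = {
    "human-authority": {"narapawi"},
    "date": set(),
    "food": set(),
}

def semantic_features(tokens, named_entities):
    """named_entities: token -> NE label mapping produced by an external NER
    (e.g. the MIRA-based recogniser mentioned above)."""
    feats = {}
    for tok in tokens:
        for list_name, words in RELATED_WORD_LISTS.items():
            if tok in words:
                feats[list_name] = feats.get(list_name, 0) + 1
        if tok in named_entities:
            label = named_entities[tok]
            feats[label] = feats.get(label, 0) + 1
    return feats

print(semantic_features("ke gOdZa prawiRTA karena ?".split(), {"gOdZa": "Location"}))
# {'Location': 1}
```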
Experiments and Results
Many supervised learning approaches [13,55,58] have been proposed for QC over the years. But these approaches primarily differ in the classifier they use and the features they train their classifier(s) on [8]. We assume that a Bengali question is unambiguous, i.e., a question belongs to only one class. Therefore, we considered multinomial classification which assigns the most likely class from the set of classes to a question. Recent studies [12][13][14] also considered one label per question.
We used state-of-the-art classifier combination approaches: ensemble, stacking and voting. We have used two contemporary methods for creating accurate ensembles, namely, bagging and boosting. We employed the Rapid Miner tool for all the experiments reported here. Each of the three classifier combination approaches was tested with Naïve Bayes (NB), Kernel Naïve Bayes (k-NB), Rule Induction (RI) and Decision Tree (DT) classifiers.
Classification accuracy is used to evaluate the results of our experiments. Accuracy is the widely used evaluation metric to determine the class discrimination ability of classifiers, and is calculated using the following equation:

accuracy = (number of correctly classified samples) / (total number of tested samples)
Corpus Annotation and Statistics
We carried out our experiments on the dataset described in [7]. The questions in this dataset are acquired from different domains, e.g., education, geography, history, science, etc. We hired two native language (i.e., Bengali) specialists for annotating the corpus, and another senior native language expert was hired to support them. The annotators were instructed to consult the senior native language expert in case of any confusion. In order to minimize disagreement, the two language specialists gathered to discuss the question taxonomy in detail before initiating the annotation task. We set a constraint that each question is annotated unambiguously, i.e., only one question class is assigned to a question. We measured the inter-annotator agreement using non-weighted kappa coefficients [59]. The kappa coefficient for the annotation task was 0.85, which represents very high agreement. In case of disagreement, the senior language specialist took the final decision. The class-specific distribution of questions in the corpus is given in Table 3. It can be observed from Table 3 that the most frequent question class in the dataset is 'Person'. The dataset contains a total of 1,100 questions. We divided the question corpus into a 7:3 ratio for experimentation: of the 1,100 Bengali questions, 70% are used for training and the rest (331 questions, 30%) for testing the classification models.
Coarse-Grained Classification
The empirical study of state-of-the-art classifier combination approaches (i.e., ensemble, stacking, and voting) was performed on the said dataset using four classifiers, namely NB, k-NB, RI and DT. Each experiment can be thought of as a combination of three experiments, since each classifier model was tested on the {f_L}, {f_L, f_S} and {f_L, f_S, f_M} feature sets separately. Overall, thirteen experiments were performed for coarse-grained classification and the evaluation results are reported in Table 4.
Ensemble Bagging
The bagging approach was applied separately to four classifiers (i.e., NB, k-NB, RI and DT) and the obtained accuracies are summarized in Table 4. Initially, the size (i.e., number of iterations) of the base learner was set to 2.
Subsequently, experiments were performed with gradually increasing size (size > 2). The classification accuracy improved as the size increased; however, after a certain size the accuracy was almost stable. The sizes after which the classification accuracies of NB, k-NB, RI and DT converge include 9 and 8. Figure 1 shows the variation in size and accuracy for the best feature set.
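The experiments themselves were run in Rapid Miner. Purely as an analogous sketch, the same bagging setup could be expressed in scikit-learn as below; the toy questions, labels and ensemble size are placeholders, not the actual training data or tuned values.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy placeholder data: transliterated questions with coarse-grained labels.
questions = ["ke gOdZa prawiRTA karena ?", "beVxa ki ?",
             "gOdZa koWAyZa abashiwa ?", "Sani graheVra gadZa xUrawba kawa ?"]
labels = ["PER", "DEF", "LOC", "NUM"]

bagged_nb = make_pipeline(
    CountVectorizer(),                                   # sparse bag-of-words features
    BaggingClassifier(MultinomialNB(), n_estimators=9),  # ensemble size fixed empirically
)
bagged_nb.fit(questions, labels)
print(bagged_nb.predict(["ke gOdZa prawiRTA karena ?"]))
```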
Ensemble Boosting
Like bagging, boosting (AdaBoost.M1) was also applied separately to the four base classifiers. Table 4 tabulates the accuracies obtained with the boosting approach with the four classifiers. Here, we empirically fixed the number of boosting iterations for the four classifiers to 12, 16, 10 and 8 respectively for the feature set {f_L, f_S, f_M}, since the corresponding weight 1/βt becomes less than 1 beyond those values. If 1/βt is less than 1, then the weight of the classifier model in boosting may be less than zero for that iteration. Figure 2 shows the variation in size and accuracy for the best feature set.
Similarly, for the feature sets {f_L, f_S} and {f_L} the iterations are set to 13, 18, 12, 9 and 14, 19, 14, 11 respectively for the four classifiers. Overall, the DT classifier performs the best. However, unlike in bagging, k-NB performs better than RI with boosting.
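Again as a scikit-learn sketch rather than the actual Rapid Miner setup, boosting a shallow decision tree with a capped number of iterations could look as follows; the tree depth and iteration count are placeholders.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# AdaBoost.M1-style boosting of a shallow decision tree over bag-of-words features.
# n_estimators caps the iterations, mirroring the empirically fixed sizes above;
# conceptually, AdaBoost.M1 stops rewarding a round once 1/βt drops below 1,
# i.e. once the weak learner's weighted error is no better than chance.
boosted_dt = make_pipeline(
    CountVectorizer(),
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=8),
)
# boosted_dt.fit(questions, labels) would be trained on question/label pairs
# of the same shape as the toy data in the bagging sketch.
```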
Stacking
In stacking, three out of the four classifiers are used as the base learners (BL) and the remaining classifier is used as the model learner (ML). Therefore, four experiments were conducted separately for each of the four classifiers as the ML. The obtained accuracies are summarized in Table 4.
Experimental results revealed that with RI as the model learner and NB, k-NB, DT as the base learners, the classifier achieves the best classification accuracy.
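A hedged scikit-learn sketch of the stacking setup follows; scikit-learn offers no kernel Naïve Bayes or Rule Induction learner, so available estimators stand in for them here purely to show the shape of the configuration.

```python
from sklearn.ensemble import StackingClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.tree import DecisionTreeClassifier

# Base learners feed their predictions to a model learner trained on top of them.
stacked = StackingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("bnb", BernoulliNB()),
                ("dt", DecisionTreeClassifier(max_depth=5))],
    final_estimator=DecisionTreeClassifier(max_depth=3),  # stand-in for the RI model learner
)
```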
Voting
In voting, four classifiers altogether were used as the base learners and majority vote was used as voting approach. The evaluation results of the voting approach are presented in Table 4.
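The corresponding majority-voting sketch, again with stand-in estimators rather than the exact learners used in the experiments:

```python
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.tree import DecisionTreeClassifier

# Hard (majority) voting over all base learners applied at the same time.
majority_vote = VotingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("bnb", BernoulliNB()),
                ("dt", DecisionTreeClassifier(max_depth=5))],
    voting="hard",
)
```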
Result Analysis of Coarse-Grained Classification
Classifier combination is an established research area known under different names in the literature: committees of learners, mixtures of experts, classifier ensembles, multiple classifier systems, etc. A number of studies [18,19,22,24] established that classifier combination can produce better results than a single classifier. Generally, the key to the success of the classifier combination approach is that it builds a set of diverse classifiers, each based on a different subset of the training data. Therefore, our objective is to verify the impact of the classifier combination approaches over the individual classifier approaches on the Bengali QC task.
The automated Bengali QC system by [7] is based on four classifiers, namely NB, k-NB, RI and DT, which were used separately. The experimental results obtained by [7] are shown in Table 5. In that work, NB was used as the baseline and the DT classifier achieved the highest accuracy of 87.63% (cf. Table 5). A comparison of the results in Table 4 and Table 5 reveals that each classifier combination model performs better than the single classifier models in terms of classification accuracy. The prime reason is that classifier combination approaches reduce model bias and variance more effectively than individual classifiers.
Compared to the individual-classifier experiments reported in [7], the bagging approach helps to avoid overfitting by reducing variance [18]. However, beyond a certain number of iterations it cannot reduce the variance further and therefore no longer improves the performance of the model. Accordingly, we observed that increasing the size (i.e., the number of iterations) beyond that point did not enhance the accuracy.
On the other hand, the boosting approach enhances the performance of the model primarily by reducing the bias [60]. However, after a certain number of iterations (size) it cannot improve further, because the corresponding weight 1/βt becomes less than 1; if 1/βt is less than 1, then the weight of the classifier model in boosting may be less than zero for that iteration. Therefore, we were not able to improve the accuracy beyond a specific boosting size.
In stacking, the model learner is trained on the outputs of the base learners, which are themselves trained on the complete training set [21]. Our experiments reveal that RI as the model learner with NB, k-NB and DT as the base learners outperforms the other models.
In the context of the Bengali question classification task, we conclude from the experimental results that although the classifier combination approach outperforms the individual classifier approach, the impact of the different classifier combination approaches is almost the same for the Bengali coarse classes, since we obtained very similar accuracies for the different classifier combination approaches, namely ensemble, stacking and voting.
Fine-Grained Classification
Initially, we applied the NB, k-NB, RI and DT classifiers separately. The NB classifier achieved around 77% accuracy, while the k-NB and RI classifiers achieved around 80% accuracy on the fine-grained question classes. Only the DT classifier obtained more than 80% accuracy for all the question classes. The detailed evaluation results of the fine-grained question classification task using individual classifiers are given in Table 6. The subsequent sections describe the experiments with classifier combination approaches.
Ensemble Bagging
In this approach, we use the four classifiers as base learners individually: NB, k-NB, RI and DT. Initially, the base learners are trained using the lexical features (f_L); semantic and syntactic features are then added gradually for classification model generation. Therefore, three classification models were generated for each base learner, i.e., 12 models in total for bagging. As in coarse-grained classification, the size (number of iterations) of the base learner was initially set to 2, and experiments were then performed with gradually increasing sizes (size > 2). The classification accuracy increased with higher values of size; however, after a certain number of iterations it was almost stable. For the fine-grained classes of the PER coarse class (i.e., F_PER), with the {f_L, f_S, f_M} feature set at size = 2 the NB classifier achieved 81.98% classification accuracy, and at size ≥ 9 it became stable at 82.87% accuracy. Similarly, with the {f_L, f_S, f_M} feature set the k-NB, RI and DT classifiers achieved stable accuracies at sizes of 13, 8 and 7 respectively. For the lexical feature set, the bagging sizes of NB, k-NB, RI and DT were 13, 20, 12 and 11 respectively, after which the classification accuracy became stable. For the combined lexical and syntactic features, the recorded bagging sizes of NB, k-NB, RI and DT were 11, 18, 10 and 9 respectively. Figure 3 depicts the iteration size for the bagging approach.
Ensemble Boosting
Like the ensemble bagging approach, we applied boosting (i.e., AdaBoost.M1) separately to the four classifiers. Experimental results confirm that performances of the four base classifiers improve slightly using AdaBoost.M1. Table 7 presents the results of the boosting experiments and shows that altogether DT outperforms the other classifiers in the ensemble approach, i.e., bagging and boosting.
Figure 4: Size variation in Boosting
In the boosting approach, the number of iterations depends on 1/βt. When the value of 1/βt becomes less than 1, the weight of the boosting classifier for that iteration may be less than zero. Hence, we empirically fixed the iterations of AdaBoost.M1 for the four classifiers (i.e., NB, k-NB, RI and DT) to 13, 17, 11 and 9 respectively for the feature set {f_L, f_S, f_M}, since the weight 1/βt becomes less than 1 beyond those values. Similarly, for the feature sets {f_L, f_S} and {f_L} the iterations were 14, 19, 13, 10 and 15, 20, 15, 12 respectively for the four base classifiers. Figure 4 depicts the iteration sizes of the four classifiers in the boosting approach.
Stacking
As discussed in Section 3.2.3, in stacking one classifier plays the role of ML while the remaining classifiers act as BLs. Therefore, with four classifiers four experiments were conducted separately. The obtained accuracies are reported in Table 8. From the experimental results it was observed that the model trained with DT as the model learner and NB, k-NB, RI as the base learners achieved the best classification accuracy.
Voting
Unlike the ensemble approach, in the voting approach all the classifiers were applied at the same time to predict the question class. Table 9 tabulates the accuracies obtained with this approach.
Result Analysis of Fine-Grained Classification
Since several studies [18,19,22,24] have argued that classifier combination approaches provide better prediction results than individual classifiers, our motivation is to verify the impact of the classifier combination approaches on the Bengali QC task. Initially, we carried out our experiments with the individual classifier approach, applying the NB, k-NB, RI and DT classifiers separately. Table 6 presents the results obtained using the individual classifier approach. In the fine-grained classification task, we used the same features that were used in coarse-grained classification. Inevitably, the accuracies obtained for fine-grained classification are lower than those for coarse-grained classification using the same feature sets.
Then, we applied the state-of-the-art classifier combination techniques on the lexical, syntactic and semantic feature sets. Figure 3 depicts the bagging size (i.e., number of iterations) for fine-grained classification. Breiman [18] stated that the bagging approach improves the performance of a prediction model by reducing variance. However, after a certain number of iterations it cannot reduce the variance further and the model becomes stable; hence, beyond that point we were not able to improve the performance of the models. We noticed that the bagging approach requires more iterations to stabilize in fine-grained classification than in coarse-grained classification. In contrast, the boosting approach enhances the performance of the model primarily by reducing bias [60]. After a certain number of iterations, the boosting approach cannot reduce the bias further because the corresponding weight 1/βt becomes less than 1; if 1/βt is less than 1, then the weight of the classifier model in boosting may be less than zero for that iteration. Hence, in Figure 4 we can see that the boosting size is stable after a certain number of iterations. Table 7 shows that the boosting approach achieves slightly better performance than bagging. In the stacking approach, one classifier plays the role of ML and a set of classifiers act as BLs; the setup with NB, k-NB and RI as BLs and DT as ML outperforms the other combinations. The stacking approach outperforms the voting approach by a slight margin, while the boosting approach with DT as the base classifier achieves the best result. Overall, we noticed from the fine-grained question classification that all the classifier combination approaches beat the individual classifier approaches by a notable margin.
Error Analysis
We checked the dataset and the system output to analyze the errors. We observed the following as the major sources of errors in the proposed system.
• Questions belonging to different question classes may contain the same content words, which confuses the classifiers and leads them to wrongly assign the questions to the same class. For example, both the questions "koVna saByawAkeV bExika saByawA balA hayZa ?" (gloss: which civilization is called the Vedic Civilization?) and "Arya saByawAkeV keVna bExika saByawA balA hayZa ?" (gloss: why is the Arya Civilization called the Vedic Civilization?) have the same content words: saByawAkeV, bExika, saByawA, hayZa.
• In Bengali, dual interrogatives consist of two single interrogatives. Classifiers get confused on encountering two interrogative words and therefore often misclassify such questions.
• The classifiers wrongly classified Bengali questions which are long and complex. For example, "keVna AXunika yugeVra paNdiweVrA maneV kareVna yeV, sinXu saByawA xrAbidZa jAwIra xbArA sqRti hayZeVCila ?" (gloss: why do the modern scholars think that the Indus Valley Civilization is created by the Aryans?).
Conclusions
Although QA research in other languages (such as English) has progressed significantly, for the majority of Indian languages it is still at an early stage of development. In this study, we addressed the QC task for Bengali, one of the most spoken languages in the world and the second most spoken language in India. We reported experiments for coarse-grained and fine-grained question classification. We employed lexical, syntactic and semantic features, and applied classifiers both individually and in combination. The automated Bengali question classification system obtains up to 91.65% accuracy for coarse-grained classes and 87.79% for fine-grained classes using classifier combination approaches based on four classifiers, namely NB, k-NB, RI and DT. The contributions of this work are listed below.
• This work successfully deploys state-of-the-art classifier combination approaches for the question classification task in Bengali.
• We have empirically established the efficacy of the classifier combination approach over individual classifier approach for coarse-grained question classification as well as fine-grained question classification.
• We have extended the single-layered (coarse-grained) taxonomy into a two-layered (coarse-grained and fine-grained) taxonomy by incorporating 69 fine-grained classes into the question classification taxonomy.
• This work improves QC accuracy, which in turn enhances the performance of Bengali QA systems.
In coarse-grained question classification, overall the voting approach with the majority voting technique performs best among the four classifier combination approaches, namely bagging, boosting, stacking, and voting. However, the stacking approach produces the best results for fine-grained classification.
(Table 1, fragment) ... BODY, CREATION, CURRENCY, FOOD, INSTRUMENT, OTHER, PLANT, PRODUCT, SPORT, SYMBOL, TECHNIQUE, TERM, WORD; Miscellaneous (MISC): COLOR, CURRENCY, ENTERTAINMENT, LANGUAGE, OTHER, VEHICLE, AFFAIR, DISEASE, PRESS, RELIGION

Table 2: Bengali question examples

Class                  Example
Person (PER)           ke gOdZa prawiRTA karena ? (gloss: Who established Goura?)
Organization (ORG)     sinXu saByawAra safgeV koVna saByawAra mila KuzjeV pAoVyZA yAyZa ? (gloss: Which civilization has resemblance with the Indus Valley Civilization?)
Numerical (NUM)        Sani graheVra gadZa xUrawba kawa ? (gloss: What is the average distance of the planet Saturn from the Sun?)
Method (METH)          AryasaByawA mahilArA kiBAbeV cula bAzXawa ? (gloss: How do the women braid hair in the Arya Civilization?)
Reason (REA)           AryasaByawAkeV keVna bExika saByawA balA hayZa ? (gloss: Why is the Arya Civilization called the Vedic Civilization?)
Definition (DEF)       beVxa ki ? (gloss: What is Veda?)
Miscellaneous (MISC)   Arya samAjeV cArati barNa ki ki Cila ? (gloss: What are the four classes in the Arya Society?)
Example 4: [bAMlAxeSe arWanIwi kaleja](/NNP) kayZati(/WQ) ?(/SYM)
English gloss: How many economics colleges are in Bangladesh?
Therefore, in Example 4, [bAMlAxeSe arWanIwi kaleja] is the question head-word. Now, if we consider the example "ke gOdZa prawiRTA karena ?", the syntactic features will be: [{WQ, 1}, {NNP, 1}, {NN, 1}, {VM, 1}, {head-word, gOdZa}]. Here a feature is represented as {POS, frequency}.
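A small sketch of how such {POS, frequency} features plus the head-word can be assembled from a POS-tagged question is given below; the tags for the example question are assumed for illustration and may differ from an actual tagger's output.

```python
from collections import Counter

def syntactic_features(tagged_question, head_word):
    """Build a {POS: frequency} map plus the head-word feature from a
    POS-tagged question given as (token, tag) pairs."""
    features = dict(Counter(tag for _, tag in tagged_question))
    features["head-word"] = head_word
    return features

# Hypothetical tagging of "ke gOdZa prawiRTA karena ?" (tags assumed).
tagged = [("ke", "WQ"), ("gOdZa", "NNP"), ("prawiRTA", "NN"),
          ("karena", "VM"), ("?", "SYM")]
print(syntactic_features(tagged, head_word="gOdZa"))
# {'WQ': 1, 'NNP': 1, 'NN': 1, 'VM': 1, 'SYM': 1, 'head-word': 'gOdZa'}
```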
Figure 1: Size and accuracy variation in Bagging with {f_L, f_S, f_M}
Figure 2: Size and accuracy variation in Boosting with {f_L, f_S, f_M}
With the bagging approach, the classification accuracy of each classifier increases notably. The classification accuracy on the {f_L}, {f_L, f_S} and {f_L, f_S, f_M} feature sets increases by 1.04%, 0.72% and 3.64%, respectively, for the best performing DT classifier. Similarly, with the boosting approach, the classification accuracy of the best performing DT classifier notably increases by 1.02%, 0.89% and 3.50% on the {f_L}, {f_L, f_S} and {f_L, f_S, f_M} feature sets. The stacking approach increases the accuracy on the {f_L, f_S} feature set more than the bagging and boosting approaches; it increases the classification accuracy by 1.36%, 2.74% and 0.69% on the {f_L}, {f_L, f_S} and {f_L, f_S, f_M} feature sets, respectively. The voting approach not only increases the classification accuracy, but also provides the maximum accuracy on all the feature sets among the combined approaches. It increases the classification accuracy on the {f_L}, {f_L, f_S} and {f_L, f_S, f_M} feature sets by 2.40%, 2.40% and 4.02%, respectively. Therefore, overall the voting approach with majority voting performed the best among the four classifier combination approaches.
Each classifier was trained with the {f_L}, {f_L, f_S} and {f_L, f_S, f_M} feature sets. The performance of the classifiers increases gradually with the incorporation of syntactic and semantic features (i.e., {f_L} → {f_L, f_S} → {f_L, f_S, f_M}).
Figure 3: Size variation in Bagging
Table 1: Two-layer Bengali question taxonomies (coarse-grained and fine-grained). Fine-grained class fragments: date: {janmaxina, xina, xaSaka, GantA, sapwAha, mAsa, baCara, ...}; food: {KAbAra, mACa, KAxya, mAKana, Pala, Alu, miRti, sbAxa, ...}; human authority: {narapawi, rAjA, praXAnamanwrI, bicArapawi, mahAparicAlaka, ceyZAramyAna, jenArela, sulawAna, samrAta, mahAXyakRa, ...}
Table 3: Corpus statistics

Class           Train   Test   Overall
Person            172     90       262
Organization       74     30       104
Location           76     30       106
Temporal           81     35       116
Numerical          71     30       101
Methodical         75     29       104
Reason             73     26        99
Definition         78     38       116
Miscellaneous      69     23        92
Total             769    331      1100
At size = 2 with the feature set {f_L, f_S, f_M}, the NB classifier achieved 82.23% accuracy, and at size ≥ 9 it stabilized at 83.25%. At size = 2 with {f_L, f_S, f_M}, the k-NB classifier achieved 83.87% accuracy, and at size ≥ 15 it stabilized at 84.22%. At size = 2 with {f_L, f_S, f_M}, the RI classifier achieved 85.97% accuracy, and at size ≥ 8 it stabilized at 86.90%. At size = 2 with {f_L, f_S, f_M}, the DT classifier achieved 88.09% accuracy, and at size ≥ 7 it stabilized at 91.27%. The experiments show that with bagging the DT classifier performs best on any feature set at any size. For the experiments with the {f_L} features, the bagging sizes of NB, k-NB, RI and DT are 12, 19, 11 and 10, respectively, after which the classification accuracy becomes stable. Similarly, for the experiments with the {f_L, f_S} feature set, the optimal bagging sizes are 10, 17,
Table 4: Classifier combination results for coarse-grained classification

Approach   Base-Learner       Model-Learner   f_L     f_L+f_S   f_L+f_S+f_M
Bagging    NB                 x               81.53   82.77     83.25
           k-NB               x               82.09   83.37     84.22
           RI                 x               83.96   85.61     86.90
           DT                 x               85.23   86.41     91.27
Boosting   NB                 x               81.74   82.71     83.51
           k-NB               x               83.86   85.63     86.87
           RI                 x               83.55   85.59     86.27
           DT                 x               85.21   86.58     91.13
Stacking   k-NB, RI, DT       NB              81.76   82.79     83.64
           NB, RI, DT         k-NB            83.86   85.54     86.75
           NB, k-NB, DT       RI              85.55   87.69     91.32
           NB, k-NB, RI       DT              85.07   86.73     89.13
Voting     NB, k-NB, RI, DT   x               86.59   88.43     91.65
Table 5: Experimental results of [7]

Classifier   f_L     f_L+f_S   f_L+f_S+f_M
NB           80.65   81.34     81.89
k-NB         81.09   82.37     83.21
RI           83.31   84.23     85.57
DT           84.19   85.69     87.63
Table 6: Fine-grained classification using individual classifiers

Classifier   Class    f_L     f_L+f_S   f_L+f_S+f_M
NB           F_PER    74.07   75.54     77.07
             F_ORG    75.33   76.55     77.70
             F_LOC    76.15   77.02     77.87
             F_TEM    75.74   77.16     77.97
             F_NUM    74.61   75.45     76.55
             F_METH   76.35   77.42     78.50
             F_REA    76.19   77.20     78.02
             F_DEF    76.30   77.45     78.56
             F_MISC   75.80   76.95     77.40
k-NB         F_PER    75.72   77.33     78.41
             F_ORG    76.76   77.97     79.28
             F_LOC    77.52   78.55     79.40
             F_TEM    77.22   78.73     79.57
             F_NUM    76.09   76.94     78.05
             F_METH   77.92   79.14     80.24
             F_REA    77.82   79.36     80.33
             F_DEF    77.99   79.40     80.43
             F_MISC   77.37   78.74     79.60
RI           F_PER    77.96   79.04     80.12
             F_ORG    78.29   79.56     80.75
             F_LOC    77.67   78.36     79.18
             F_TEM    79.17   80.76     81.73
             F_NUM    78.04   79.03     80.42
             F_METH   79.87   81.00     82.12
             F_REA    79.62   80.93     82.06
             F_DEF    78.98   80.28     81.28
             F_MISC   78.59   79.91     80.90
DT           F_PER    80.37   82.06     83.61
             F_ORG    78.78   80.26     81.68
             F_LOC    78.51   79.63     80.94
             F_TEM    80.58   82.03     83.50
             F_NUM    79.00   80.50     81.85
             F_METH   80.62   82.55     84.47
             F_REA    80.51   82.49     84.42
             F_DEF    79.89   81.07     82.49
             F_MISC   79.74   81.72     84.07
Table 7: Ensemble (Bagging and Boosting) results of fine-grained classification
Table 8: Results of fine-grained classification with stacking

Base Learner    Model Learner   Class    f_L     f_L+f_S   f_L+f_S+f_M
k-NB, RI, DT    NB              F_PER    79.81   81.67     82.86
                                F_ORG    81.79   83.02     84.02
                                F_LOC    81.97   83.74     84.91
                                F_TEM    81.45   82.81     83.73
                                F_NUM    81.83   82.07     83.54
                                F_METH   82.15   83.13     84.09
                                F_REA    82.24   83.36     84.42
                                F_DEF    81.76   83.05     84.23
                                F_MISC   80.21   82.33     83.21
NB, RI, DT      k-NB            F_PER    79.93   81.79     83.03
                                F_ORG    81.86   83.16     84.13
                                F_LOC    82.08   83.82     85.06
                                F_TEM    81.52   83.01     83.87
                                F_NUM    81.97   82.18     83.71
                                F_METH   82.28   83.20     84.18
                                F_REA    82.31   83.43     84.45
                                F_DEF    81.82   83.21     84.31
                                F_MISC   80.29   82.42     83.35
NB, k-NB, DT    RI              F_PER    80.56   83.06     84.22
                                F_ORG    82.86   83.98     85.03
                                F_LOC    80.23   81.49     82.95
                                F_TEM    83.21   84.78     85.97
                                F_NUM    82.37   83.42     84.77
                                F_METH   83.54   84.93     86.27
                                F_REA    84.03   85.75     86.73
                                F_DEF    80.01   82.33     84.21
                                F_MISC   82.45   83.86     84.87
NB, k-NB, RI    DT              F_PER    84.97   86.69     88.71
                                F_ORG    83.32   85.06     87.43
                                F_LOC    82.93   84.21     85.71
                                F_TEM    84.84   86.13     87.95
                                F_NUM    83.57   85.17     87.49
                                F_METH   84.85   86.91     88.56
                                F_REA    84.69   86.78     88.29
                                F_DEF    84.38   85.65     87.51
                                F_MISC   84.02   86.11     88.42
Table 9: Results of fine-grained classification with voting

Base Learner       Class    f_L     f_L+f_S   f_L+f_S+f_M
NB, k-NB, RI, DT   F_PER    79.81   81.67     82.86
                   F_ORG    81.79   83.02     84.02
                   F_LOC    81.97   83.74     84.91
                   F_TEM    81.45   82.81     83.73
                   F_NUM    81.83   82.07     83.54
                   F_METH   82.15   83.13     84.09
                   F_REA    82.24   83.36     84.42
                   F_DEF    81.76   83.05     84.23
                   F_MISC   80.21   82.33     83.21
All the Bengali examples in this paper are written in WX [56] notation, which is a transliteration scheme for representing Indian languages in ASCII. 1 http://ltrc.iiit.ac.in/analyzer/bengali/
The only available QA dataset for Bengali contains just 1,100 questions; in future, we would like to contribute to enlarging this dataset. One future direction of this study is to employ state-of-the-art neural network techniques. We would also like to apply the approaches used in this study to other less investigated languages.
Acknowledgments
[1] Dan Jurafsky and James H Martin. Speech and language processing. Pearson, 2014.
[2] James H Martin and Daniel Jurafsky. Speech and language processing. International Edition, 710, 2000.
[3] Ellen M Voorhees. Overview of the trec 2001 question answering track. NIST special publication, pages 42-51, 2002.
[4] Eduard Hovy, Laurie Gerber, Ulf Hermjakob, Chin-Yew Lin, and Deepak Ravichandran. Toward semantics-based answer pinpointing. In Human language technology research, pp. 1-7. ACL, 2001.
[5] Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, Adwait Ratnaparkhi, and Richard J Mammone. Ibm's statistical question answering system. In TREC, 2000.
[6] Dan Moldovan, Marius Paşca, Sanda Harabagiu, and Mihai Surdeanu. Performance issues and error analysis in an open-domain question answering system. ACM Transactions on Information Systems (TOIS), 21(2):133-154, 2003.
[7] Somnath Banerjee and Sivaji Bandyopadhyay. Bengali question classification: Towards developing QA system. In Proceedings of the 3rd Workshop on South and Southeast Asian Language Processing (SANLP), COLING, pages 25-40, 2012.
[8] Babak Loni. A survey of state-of-the-art methods on question classification. Delft University of Technology, Tech. Rep, 2011.
[9] David A Hull. Xerox trec-8 question answering track report. In TREC, 1999.
[10] John Prager, Dragomir Radev, Eric Brown, Anni Coden, and Valerie Samn. The use of predictive annotation for question answering in trec8. Information Retrieval, 1(3):4, 1999.
[11] Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. Exploiting syntactic and shallow semantic kernels for question answer classification. In Annual meeting-association for computational linguistics, (45), pp. 776, 2007.
[12] Dell Zhang and Wee Sun Lee. Question classification using support vector machines. In Proceedings of Research and development in information retrieval, pages 26-32. ACM, 2003.
[13] Zhiheng Huang, Marcus Thint, and Zengchang Qin. Question classification using head words and their hypernyms. In Proceedings of Empirical Methods in Natural Language Processing, pages 927-936. ACL, 2008.
[14] Joao Silva, Luísa Coheur, Ana Cristina Mendes, and Andreas Wichert. From symbolic to sub-symbolic information in question classification. Artificial Intelligence Review, 35(2):137-154, 2011.
[15] Xin Li and Dan Roth. Learning question classifiers: the role of semantic information. Natural Language Engineering, 12(03):229-249, 2006.
[16] Andrew McCallum, Dayne Freitag, and Fernando CN Pereira. Maximum entropy markov models for information extraction and segmentation. In International Conference on Machine Learning (ICML), volume 17, pages 591-598, 2000.
[17] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[18] Leo Breiman. Bagging predictors. Machine learning, 24(2):123-140, 1996.
[19] Robert T Clemen. Combining forecasts: A review and annotated bibliography. International journal of forecasting, 5(4):559-583, 1989.
[20] Michael Peter Perrone. Improving regression estimation: Averaging methods for variance reduction with extensions to general convex measure optimization. PhD thesis, Brown University, 1993.
[21] David H Wolpert. Stacked generalization. Neural networks, 5(2):241-259, 1992.
[22] Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence, 12:993-1001, 1990.
[23] Anders Krogh, Jesper Vedelsby, et al. Neural network ensembles, cross validation, and active learning. Advances in neural information processing systems, 7:231-238, 1995.
[24] Sherif Hashem. Optimal linear combinations of neural networks. Neural networks, 10(4):599-614, 1997.
[25] David W Opitz and Jude W Shavlik. Actively searching for an effective neural network ensemble. Connection Science, 8(3-4):337-354, 1996.
[26] Jude W Shavlik. Generating accurate and diverse members of a neural-network ensemble. 1996.
[27] Xin Li, Xuan-Jing Huang, and de WU Li. Question classification by ensemble learning. IJCSNS, 6(3):147, 2006.
[28] Robert E Schapire. The strength of weak learnability. Machine learning, 5(2):197-227, 1990.
[29] Eric Brill. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational linguistics, 21(4):543-565, 1995.
[30] Keliang Jia, Kang Chen, Xiaozhong Fan, and Yu Zhang. Chinese question classification based on ensemble learning. In Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, SNPD 2007. ACIS, volume 3, pages 342-347. IEEE, 2007.
[31] Lei Su, Hongzhi Liao, Zhengtao Yu, and Quan Zhao. Ensemble learning for question classification. In Intelligent Computing and Intelligent Systems, ICIS, pages 501-505. IEEE, 2009.
[32] David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, and others. Building Watson: An overview of the DeepQA project. AI magazine, 31(3), pages 59-79, 2010.
[33] Manuel Alberto Pérez-Coutiño, Manuel Montes-y-Gómez, Aurelio López-López, and Luis Villaseñor Pineda. Experiments for Tuning the Values of Lexical Features in Question Answering for Spanish. In CLEF (Working Notes), 2005.
[34] Günter Neumann and Bogdan Sacaleanu. A Cross-Language Question/Answering-System for German and English. In Workshop of the Cross-Language Evaluation Forum for European Languages, pages 559-571. Springer, 2003.
[35] FA Mohammed, Khaled Nasser, and HM Harb. Question classification with log-linear models. In ACM SIGART Bulletin, pages 21-30. ACM, 1993.
[36] Paolo Rosso, Yassine Benajiba, and Abdelouahi Lyhyaoui. In Proc. 4th Conf. on Scientific Research Outlook & Technology Development in the Arab world, pages 11-14, 2006.
[37] Lahsen Abouenour, Karim Bouzoubaa, and Paolo Rosso. IDRAAQ: New Arabic question answering system based on query expansion and passage retrieval. In CELCT, 2012.
[38] Tetsuya Sakai, Yoshimi Saito, Yumi Ichimura, Makoto Koyama, Tomoharu Kokubu, and Toshihiko Manabe. ASKMi: A Japanese question answering system based on semantic role analysis. In Coupling approaches, coupling media and coupling languages for information retrieval, pp. 215-231, 2004.
[39] Hideki Isozaki, Katsuhito Sudoh, and Hajime Tsukada. NTT's Japanese-English Cross-Language Question Answering System. In NTCIR, 2005.
[40] Zhang Yongkui, Zhao Zheqian, Bai Lijun, and Chen Xinqing. Internet-based Chinese question-answering system. In Computer Engineering, volume 15, pages 34, 2003.
[41] Ang Sun, Minghu Jiang, Yifan He, Lin Chen, and Baozong Yuan. Chinese question answering based on syntax analysis and answer classification. In Acta Electronica Sinica, volume 36, number 5, pages 833-839, 2008.
[42] Shriya Sahu, Nandkishor Vasnik, and Devshri Roy. Prashnottar: a hindi question answering system. In International Journal of Computer Science & Information Technology, 4(2):149, 2012.
[43] Garima Nanda, Mohit Dua, and Krishma Singla. A hindi question answering system using machine learning approach. In Computational Techniques in Information and Communication Technologies (ICCTICT), pages 311-314. IEEE, 2016.
[44] Satoshi Sekine and Ralph Grishman. Hindi-English cross-lingual question-answering system. In ACM Transactions on Asian Language Information Processing (TALIP), 2(3):181-192, 2003.
[45] Pushpraj Shukla, Amitabha Mukherjee, and Achla Raina. Towards a language independent encoding of documents. In NLUCS 2004, page 116, 2004.
[46] Santosh Kumar Ray, Amir Ahmad, and Khaled Shaalan. A review of the state of the art in hindi question answering systems. In Intelligent Natural Language Processing: Trends and Applications, pages 265-292. Springer, 2018.
[47] Praveen Kumar, Shrikant Kashyap, Ankush Mittal, and Sumit Gupta. A query answering system for e-learning Hindi documents. In South Asian Language Review, 13(1&2):69-81, 2003.
[48] Rami Reddy Nandi Reddy and Sivaji Bandyopadhyay. Dialogue based question answering system in Telugu. In Proceedings of the Workshop on Multilingual Question Answering, pages 53-60, 2006.
[49] Gursharan Singh Dhanjal, Sukhwinder Sharma, and Paramjot Kaur Sarao. Gravity based punjabi question answering system. In International Journal of Computer Applications, 147(3), 2016.
[50] MS Bindu and Idicula Sumam Mary. Design And Development Of A Named Entity Based Question Answering System For Malayalam Language. PhD thesis, Cochin University Of Science And Technology, 2012.
[51] Cheng-Wei Lee et al. Asqa: Academia sinica question answering system for NTCIR-5 CLQA. In NTCIR-5 Workshop, Japan, pages 202-208, 2005.
[52] Somnath Banerjee and Sivaji Bandyopadhyay. Ensemble approach for fine-grained question classification in Bengali. In the proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC-27), pp. 75-84, 2013.
[53] Babak Loni, Gijs Van Tulder, Pascal Wiggers, David MJ Tax, and Marco Loog. Question classification by weighted combination of lexical, syntactic and semantic features. In TSD, pages 243-250. Springer, 2011.
[54] Zhiheng Huang, Marcus Thint, and Asli Celikyilmaz. Investigation of question classifier in question answering. In Proceedings of Empirical Methods in Natural Language Processing: Volume 2, pages 543-550. ACL, 2009.
[55] Phil Blunsom, Krystle Kocik, and James R Curran. Question classification with log-linear models. In Proceedings of ACM SIGIR conference on Research and development in information retrieval, pages 615-616. ACM, 2006.
[56] Sapan Diwakar, Pulkit Goyal, and Rohit Gupta. Transliteration among indian languages using wx notation. In Conference on Natural Language Processing, number EPFL-CONF-168805, pp. 147-150. Saarland University Press, 2010.
[57] Somnath Banerjee, Sudip Kumar Naskar, and Sivaji Bandyopadhyay. Bengali named entity recognition using margin infused relaxed algorithm. In International Conference on Text, Speech, and Dialogue, pages 125-132. Springer, 2014.
[58] Xin Li and Dan Roth. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. ACL, 2002.
[59] Jacob Cohen. A coefficient of agreement for nominal scales. In Educational and psychological measurement, 20(1), pages 37-46, 1960.
[60] Robert E Schapire. The strength of weak learnability. Machine learning, 5(2):197-227, 1990.
| [] |
[
"Semantic Annotation and Querying Framework based on Semi-structured Ayurvedic Text",
"Semantic Annotation and Querying Framework based on Semi-structured Ayurvedic Text"
] | [
"Hrishikesh Terdalkar \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia\n",
"Arnab Bhattacharya arnabb@cse.iitk.ac.in \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia\n",
"Madhulika Dubey drmadhulikadubey@gmail.com \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia\n",
"Ramamurthy S Bhavna \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia\n",
"Naneria Singh \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia\n"
] | [
"Department of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia",
"Department of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia",
"Department of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia",
"Department of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia",
"Department of Computer Science and Engineering\nIndian Institute of Technology Kanpur\nIndia"
] | [] | Knowledge bases (KB) are an important resource in a number of natural language processing (NLP) and information retrieval (IR) tasks, such as semantic search, automated question-answering etc. They are also useful for researchers trying to gain information from a text. Unfortunately, however, the state-of-the-art in Sanskrit NLP does not yet allow automated construction of knowledge bases due to unavailability or lack of sufficient accuracy of tools and methods. Thus, in this work, we describe our efforts on manual annotation of Sanskrit text for the purpose of knowledge graph (KG) creation. We choose the chapter Dhānyavarga from Bhāvaprakāśanighaṇṭu of the Ayurvedic text Bhāvaprakāśa for annotation. The constructed knowledge graph contains 410 entities and 764 relationships. Since Bhāvaprakāśanighaṇṭu is a technical glossary text that describes various properties of different substances, we develop an elaborate ontology to capture the semantics of the entity and relationship types present in the text. To query the knowledge graph, we design 31 query templates that cover most of the common question patterns. For both manual annotation and querying, we customize the Sangrahaka framework previously developed by us. The entire system including the dataset is available from https://sanskrit.iitk.ac.in/ayurveda/. We hope that the knowledge graph that we have created through manual annotation and subsequent curation will help in development and testing of NLP tools in future as well as studying of the Bhāvaprakāśanighaṇṭu text. | null | [
"https://arxiv.org/pdf/2202.00216v1.pdf"
] | 246,441,870 | 2202.00216 | 9a95053b3175ee2a90707d6816571333173948f2 |
Semantic Annotation and Querying Framework based on Semi-structured Ayurvedic Text
Hrishikesh Terdalkar
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
India
Arnab Bhattacharya arnabb@cse.iitk.ac.in
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
India
Madhulika Dubey drmadhulikadubey@gmail.com
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
India
Ramamurthy S Bhavna
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
India
Naneria Singh
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
India
Semantic Annotation and Querying Framework based on Semi-structured Ayurvedic Text
Knowledge bases (KB) are an important resource in a number of natural language processing (NLP) and information retrieval (IR) tasks, such as semantic search, automated question-answering etc. They are also useful for researchers trying to gain information from a text. Unfortunately, however, the state-of-the-art in Sanskrit NLP does not yet allow automated construction of knowledge bases due to unavailability or lack of sufficient accuracy of tools and methods. Thus, in this work, we describe our efforts on manual annotation of Sanskrit text for the purpose of knowledge graph (KG) creation. We choose the chapter Dhānyavarga from Bhāvaprakāśanighaṇṭu of the Ayurvedic text Bhāvaprakāśa for annotation. The constructed knowledge graph contains 410 entities and 764 relationships. Since Bhāvaprakāśanighaṇṭu is a technical glossary text that describes various properties of different substances, we develop an elaborate ontology to capture the semantics of the entity and relationship types present in the text. To query the knowledge graph, we design 31 query templates that cover most of the common question patterns. For both manual annotation and querying, we customize the Sangrahaka framework previously developed by us. The entire system including the dataset is available from https://sanskrit.iitk.ac.in/ayurveda/. We hope that the knowledge graph that we have created through manual annotation and subsequent curation will help in development and testing of NLP tools in future as well as studying of the Bhāvaprakāśanighaṇṭu text.
Introduction
Sanskrit (संस्कृत, IAST: Saṃskṛta) is one of the most prolific languages in the world, and texts in Sanskrit far outnumber those in other classical languages. Consequently, with the advancement of natural language processing with the aid of computers, there has been a surge in the field of computational linguistics for Sanskrit over the last couple of decades. This has resulted in the development of various tools, such as Samsaadhanii by Kulkarni (2016), The Sanskrit Heritage Platform (SHP) by Goyal et al. (2012), the Sanskrit Sandhi and Compound Splitter (SSCS) by Hellwig and Nehrdich (2018), the Sanskrit WordNet (SWN), etc., for linguistic tasks such as word segmentation, lemmatization, morphological generation, dependency parsing, etc. Despite this, many fundamental processing tasks, such as anaphora resolution and named entity recognition, that are needed for higher-order tasks such as discourse processing are either not available or have a long way to go. Combined with the fact that Sanskrit is a morphologically rich language, this means that for tasks such as machine translation, question-answering, semantic labeling, discourse analysis, etc., there are no ready-to-use tools available.
A standard way of capturing knowledge from a text is through the use of knowledge bases (KB). It is a form of data repository that stores knowledge in some structured or semi-structured form. A knowledge graph (KG) is a particular form of knowledge base that uses the graph data structure to store knowledge. In a KG, nodes represent real-world entities, and edges represent relationships between these entities. Knowledge about these entities and relationships is typically stored in the form of triplets (subject, predicate, object) denoting the relationship predicate a subject has with an object. For example, (Pāṇini, isauthorof, Aṣṭādhyāyī) captures the knowledge nugget 'Pāṇini is the author of Aṣṭādhyāyī'.
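As a minimal sketch (not the system described later in this paper), such triplets can be stored as plain tuples and an objective question answered by filtering on subject and predicate; the second triplet below is purely illustrative.

```python
# Knowledge stored as (subject, predicate, object) triplets.
triples = [
    ("Pāṇini", "is author of", "Aṣṭādhyāyī"),
    ("pitta", "is Decreased by", "madhūlī"),   # illustrative triplet only
]

def answers(subject, predicate):
    """Return all objects o such that (subject, predicate, o) is known."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(answers("Pāṇini", "is author of"))   # ['Aṣṭādhyāyī']
```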
An important usage of KGs is automated question-answering (QA) where the task is to automatically find answers to questions posed in a natural language. It is an important high-level task in the fields of Information Retrieval (IR) and Natural Language Processing (NLP). Questions can be either from a specific closed domain (such as, say, manuals of certain products) or from the open domain (such as what Google and many other search engines attempt to do). Also, they can be factoid-based (phrase-based or objective) or descriptive (subjective, such as why questions). Since its introduction by Voorhees (1999), one of the main approaches for the question-answering task has been through the use of knowledge bases (Hirschman and Gaizauskas, 2001; Kiyota et al., 2002; Yih et al., 2015). Figure 1 shows an example of (a snippet of) a knowledge graph. The triplets are depicted visually. It contains several nodes (entities) such as madhūlī (मधूली), nandīmukha (नन्दीमुख), pitta (पित्त), snigdha (स्निग्ध), etc. and edges (relationships) including 'is Decreased by', 'is Property of', 'is Variant of', etc. The graph also shows properties associated with the entities and the relationships.
Given a corpus of text, there are many automated ways of constructing knowledge bases from it (Dong et al., 2014;Pujara and Singh, 2018;Mitchell et al., 2018;Wu et al., 2019). These attempts are fairly successful for languages such as English where the state-of-the-art in NLP tools is more advanced. However, due to paucity of such tools in Sanskrit, automated construction of knowledge bases in Sanskrit, to the best of our understanding and knowledge, is only moderately successful. A notable effort is by Terdalkar and Bhattacharya (2019) who attempted to automatically extract all human relationships from Itihāsa texts (Rāmāyaṇa and Mahābhārata) and synonym relationships from an Ayurvedic text (Bhāvaprakāśa). They reported that for an objective natural language query, the correct answer was present in the reported set of answers 50% of the times. They, however, do not report how accurately a triplet is automatically extracted, due to the lack of ground truth for the evaluation.
A more viable and accurate alternative of constructing knowledge bases is through the route of human annotation. Annotation of a corpus is the process of highlighting and/or extracting objective information from it. In addition to information extracted from a corpus, the knowledge base may use information that is not directly mentioned in the corpus, such as world knowledge (for example, a person is a living being) or an ontology or a fact specific to the domain of the corpus. Human annotators are typically aware of the domain; although, depending on the task, they need not always be experts in the subject. For example, vāta (वात) has a general meaning of 'wind', but in the Ayurvedic context, it refers to the tridoṣa (त्रिदोष) by the name of vāta. This is not directly mentioned in every Ayurvedic text, but any domain expert is aware of this fact.
In this work, we follow the human annotation process of creating a knowledge graph. We choose the chapter Dhānyavarga (धान्यवर्ग) from the Bhāvaprakāśanighaṇṭu (भावप्रकाशनिघण्टु) portion of the Ayurvedic text Bhāvaprakāśa (भावप्रकाश) as the corpus. Bhāvaprakāśa is one of the most prominent texts in Ayurveda, which is an important medical system developed in ancient India and is still in practice. A nighaṇṭu (निघण्टु) in the Indic knowledge system is a list of words, grouped into semantic and thematic categories and accompanied by relevant information about these words such as meanings, explanations or other annotations. It is analogous to a glossary in purpose, but differs in structure. In particular, the Bhāvaprakāśanighaṇṭu text, like most of Sanskrit literature, is in padya (verse) form. The text, while loosely following a theme or a structure, is still free flowing. Sanskrit literature contains a large number of such nighaṇṭu texts either as stand-alone books or as parts of other books.
The nighaṇṭu texts, owing to their partial structure are, therefore, amenable to construction of knowledge bases using human annotation. Further, since they contain a wealth of information, they are important resources for building knowledge bases that can be automatically questioned. A benefit of the presence of structure in nighaṇṭu texts is that annotators need not be domain experts as long as the structure is clear.
Contributions
The contributions of this paper are three-fold.
First, we describe a process of constructing a knowledge graph (KG) through manual annotation. This helps to capture the semantic information present in the text that is extremely difficult to do otherwise using automated language and text processing methods. The proposed annotation process also enables capturing relationships with entities that are not named directly in the text. We further discuss the curation process and the optimizations performed during the process of knowledge graph creation from the perspective of querying efficiency.
Second, through careful examination of the different types of entities and relationships mentioned in Bhāvaprakāśanighaṇṭu, we create a suitable ontology for annotating the text. We believe that this can be a good starting point for building an ontology for other Ayurvedic texts, and in particular, glossaries.
Third, we annotate one complete chapter from the text (Dhānyavarga), and create a KG from the annotations. For this purpose, we deploy a customized instance of Sangrahaka, an annotation and querying framework developed by us previously (Terdalkar and Bhattacharya, 2021). We also create 31 query templates in English and Sanskrit to feed into the templatized querying interface, that aids users in finding answers for objective questions related to the corpus. The system and the dataset can be accessed at https://sanskrit.iitk.ac.in/ayurveda/ 1 . Figure 2 shows the workflow of our proposed method. The first step after inspection of the corpus is of ontology creation. After creating a relevant ontology, i.e., specifying what kinds of relationships and entity types are there in the corpus, annotation is performed. Using the entities and relationships captured through annotation, a knowledge graph is constructed. The knowledge graph can be queried with the help of query templates to retrieve answers for templatized natural language questions. Answers are presented in both tabular and graphical formats.
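To give a flavour of templatized querying, a hypothetical template for the question "What effect does substance X have on doṣa Y?" is sketched below in a Cypher-like form; the node labels, property names and the actual template syntax used in the deployed system are assumptions made only for illustration.

```python
# Hypothetical query template; labels and properties are assumed, not taken
# from the deployed system.
TEMPLATE = (
    "MATCH (s:SUBSTANCE {lemma: $substance})-[r]->(d:TRIDOSHA {lemma: $dosha}) "
    "RETURN type(r) AS effect"
)

def fill(template_params):
    """Bind user-supplied slot values to the template parameters."""
    return TEMPLATE, template_params

query, params = fill({"substance": "madhūlī", "dosha": "pitta"})
print(query, params)
```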
Outline
The rest of the paper is organized as follows. §2 motivates the problem of creating a knowledge graph using manual annotations. §3 describes the annotation and curation process along with the construction of knowledge graph. §4 explains the querying mechanism. We conclude in §5 and discuss future directions.
Motivation for Manual Annotation
To the best of our knowledge, the state-of-the-art in Sanskrit NLP and IR is not advanced enough for automatic construction of knowledge bases from text. One of the first efforts towards automatic construction of knowledge graphs from Sanskrit text was made by Terdalkar and Bhattacharya (2019). The framework described there does not yield results comparable to state-of-the-art models for English due to errors in various stages of the construction pipeline. As mentioned earlier, the success rate for even single relationships was not very high.
In this section, we discuss the issues with the Sanskrit state-of-the-art linguistic tools and the need for manual annotation for a semantic task such as automatic creation of knowledge graphs.
Word Segmentation
Sanskrit texts make heavy use of compound words in the form of sandhi and samāsa. Word segmentation, which splits a given compound word into its constituents, is therefore an important need in Sanskrit. Notable works in this area are The Sanskrit Heritage Platform (SHP) (Huet, 2009; Goyal et al., 2012), the Sanskrit Sandhi and Compound Splitter (SSCS) (Hellwig and Nehrdich, 2018), and the approaches of Krishna et al. (2016; 2021).
Treating the segmentation task as splitting both sandhi and samāsa together, while useful, does not fit well into the pipeline described by Terdalkar and Bhattacharya (2019), where the split output is then passed to a morphological analyzer. An example is the splitting of the word maharṣi as mahat + ṛṣiḥ, which, while correct as a samāsa-split, if passed to a morphological analyzer as two separate words, produces an analysis 2 of the word mahat in isolation that does not fit the semantics of the word or the context. Terdalkar and Bhattacharya (2019) applied SSCS followed by SHP. As a result, a word such as rāmalakṣmaṇau gets split by SSCS into the two words rāma and lakṣmaṇau, after which SHP assigns the vocative case to the word rāma. They used a heuristic to resolve these errors, whereby the grammatical analysis of the second word was copied to the first word as well. However, this heuristic would change the semantics of the word rāmalakṣmaṇau.
Morphological Analysis
Sanskrit is a highly inflectional language. In Sanskrit, words are categorized as subanta (noun-like) and tiṅanta (verb-like). Morphological analysis is the task of identifying the stem (prātipadika or dhātu) of the given word form, along with other relevant linguistic information.
Notable works in this area are by Goyal et al. (2012) and Kulkarni (2016). These tools perform the best when the input given is without sandhi. If, however, the input also contains splits of samāsa as generated by tools described in the previous section ( §2.1), the morphological analyzers treat it as a separate word, resulting in an analysis of the word that may be correct on the syntactic level, but not so in the context of the sentence.
Other Linguistic Tasks
A dependency parser for Sanskrit from Samsaadhanii (Kulkarni, 2016) expects the sentences to be in an anvaya order (prose form). Further, it is based on a fixed vocabulary and, therefore, when inflected forms of words from outside the vocabulary are encountered, it fails to parse the sentence. For example, a word śālidhānya is not present in the vocabulary, so a sentence containing that word does not get parsed successfully. Krishna et al. (2021) in their recent work claim to be able to perform poetry-to-prose linearization and dependency parsing. However, we have not been able to obtain the source code or a functional interface to evaluate it for our data (we contacted the authors).
Another hurdle in the poetry-to-prose linearization is that the sentence boundaries are often not clearly marked. In general, a semantically complete sentence may span over multiple verses. On the other hand, at times a verse may contain multiple sentences as well. This can be seen in the sample of 10 verses given in Appendix A. Thus, extracting sentences with proper sentence boundaries is also a difficult task.
Semantic Information Extraction
Extracting the semantics of a sentence is a very important step in the construction of a knowledge graph. Automatic KG construction frameworks for English such as (Auer et al., 2007; Suchanek et al., 2007) extract semantic information from various information sources including Wikipedia articles and info-boxes. One of the challenges faced in this task is that the same concept can be expressed in English in numerous ways, such as "birthplace" or "place of birth". The issue of expressing a concept in more than one way is extremely significant and much more severe for Sanskrit due to its semantic richness. In particular, the processes of samāsa and sandhi create long and semantically rich words. Table 1 highlights this phenomenon. The first column contains the concept while the second column enlists the words used in Dhānyavarga to express that concept. A "concept" captures the semantics of a word or a phrase.
It can be noted that even in the span of 90 verses, there are more than 10 different ways used to express the same concept '(a substance) decreases vāta'. In addition to that, the word vāta itself can be part of another compound, coupled with other words as can be seen in the example 'decreases vāta and pitta'. There are more than 5 usages of this complex concept, which is a superset of the earlier concept. Moreover, these are not the only ways in which the concept of increasing vāta is expressed.
There are numerous other words that can combine with the word vāta in the form of samāsa to indicate the concept of decrement for multiple entities at the same time. Moreover, in such cases, where a samāsa is used, the order of vāta and pitta could be reversed as well. Further, this list for a particular concept is not exhaustive for Sanskrit, and there are practically endless possible ways to denote the same concept.
One can observe that there are some common suffixes used in similar concepts. However, firstly, there is no exhaustive list of suffixes associated with a particular concept. Second, the same suffix conveys different concepts in different contexts. For example, the suffix ghna (घ्न) in the context of Ayurveda, or specifically of tridoṣa, means 'to decrease', e.g., pittaghna (पित्तघ्न). The same suffix in the context of a person may mean 'to kill', e.g., śatrughna (शत्रुघ्न), one who kills his enemies (śatru).
Thus, using a fixed set of suffixes may not be a feasible solution.
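The sketch below shows why a naive, fixed suffix-to-concept mapping is brittle: the same suffix carries different concepts in different contexts, and any such list is necessarily incomplete. The mapping is invented purely for illustration.

```python
# Invented suffix-to-concept map (for illustration only).
SUFFIX_CONCEPTS = {
    "ghna": "decreases / removes",   # Ayurvedic sense, e.g. pittaghna
}

def guess_concept(word):
    """Naively guess a concept from the word's suffix."""
    for suffix, concept in SUFFIX_CONCEPTS.items():
        if word.endswith(suffix):
            return concept
    return None

print(guess_concept("pittaghna"))   # 'decreases / removes'  (intended reading)
print(guess_concept("śatrughna"))   # same answer, but here -ghna means 'kills'
```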
To the best of our knowledge, there is no existing system for Sanskrit that can extract such semantic information in either a generic sense or in a specific context. The Amarakoṣa Knowledge Net (Nair and Kulkarni) and the Sanskrit WordNet are also limited in their scope. For example, none of the words listed in Table 1 to express the concept of 'increasing bala' can be found in either of these two resources.
Need for Annotation
The issue of compounding errors is relevant to any NLP pipeline in which the individual parts have their own error rates. The success rate of the entire pipeline, being a product of the individual success rates (since all the parts have to be accurate for the entire task to be accurate), is significantly lower. Thus, a pipeline for the automated question-answering task that requires modules such as word segmentation, morphological analysis, part-of-speech tagging, dependency parsing, etc. has a very low accuracy. Further, the lack of semantic analysis tools or systems is a major hurdle for semantic tasks such as the construction of knowledge graphs. Thus, even if the accuracies of the individual parts are improved significantly, the final semantic labeling remains a bottleneck.
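The compounding effect can be illustrated with made-up per-stage accuracies: assuming the stages fail independently, the overall success rate is roughly the product of the individual rates, so even moderately accurate stages yield a weak pipeline.

```python
# Hypothetical per-stage accuracies of an automatic KG-construction pipeline.
stages = {
    "word segmentation": 0.90,
    "morphological analysis": 0.85,
    "dependency parsing": 0.80,
    "semantic labeling": 0.70,
}

overall = 1.0
for stage, accuracy in stages.items():
    overall *= accuracy
print(f"overall pipeline accuracy ≈ {overall:.2f}")   # ≈ 0.43
```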
We highlight this fact by taking an example of the first ∼ 10% of the Dhānyavarga, i.e., 10 verses corresponding to 21 lines. We have manually segmented the words in these lines and also converted the sentences to anvaya order. The first 10 verses correspond to a total of 14 prose sentences. The original text in verse format, in the sandhi-split format, and in anvaya format, is given in Appendix A.
There are a total of 35 occurrences of sandhi and 50 occurrences of samāsa in the text. SSCS is able to identify 34 of the sandhi (with an accuracy of 0.97) and 34 occurrences of samāsa correctly (with an accuracy of 0.68). However, the tool does not differentiate a sandhi from a samāsa. Therefore, when passed to the SHP it is likely to obtain incorrect analysis.
A single word-form in Sanskrit can have numerous valid morphological analyses. If there are N words in a sentence, and word i has a_i possible analyses, then there are a_1 × a_2 × ... × a_N (i.e., ∏_{i=1}^{N} a_i) possible combinations for the analysis of the sentence. SHP and Samsaadhanii both rank these solutions based on various linguistic features and, after pruning the unlikely ones, present the feasible solutions. For automatic processing pipelines, a particular choice of the solution is required, and the solution presented as the best by the tools, i.e., the first solution, is a natural choice. Thus, we present the evaluation by choosing the first solution.
We pass the manually created sandhi-split corpus through SHP for morphological analysis. 3 There are an average of 9 solutions per line (ranging from 0 to 72) reported. We evaluate based on the first reported solution.
There are 103 words, after manually splitting sandhi. SHP could not analyze 21 words, and wrongly analyzed 14 words, resulting in an overall accuracy of 0.66. Further, SHP split 34 words, of which 8 were incorrect splits, resulting in an accuracy of 0.76 for samāsa-split.
Additionally, we pass the anvaya-order sentences to the dependency parser tool by Kulkarni (2016). We also manually add missing verbs (adhyāhāra forms such as asti, santi, etc.) due to that being a requirement of the parser. Without samāsa split markers, the dependency parser manages to parse only 1 out of 14 sentences, while with the samāsa markers, it can parse 6 out of 14. Out of the 6 sentences that produce a dependency parse tree, 4 are simple 3-word sentences (sentences 2, 3, 4, 6 in Appendix A). In the other 2 instances (sentences 7, 10), errors were found in the dependency parse trees.
For example, consider a line from śloka 2: There are two sentences in this line, as can be seen by the boundary markers and anvaya text. Figure 3 shows the dependency parse for these two sentences. The dependency parse for the first dependency tree is correct. However, even for the sentences that get a correct dependency parse, the dependency relations we get are kartā and kartṛsamānādhikaraṇa which, while grammatically correct, still do not help in capturing the semantic concept of the sentence that kaṅgu is a type of kṣudradhānya. Also, tat in the second sentence is an anaphora of kṣudradhānya from the first sentence. Thus, the intended relation that tṛṇadhānya is a synonym of kṣudradhānya cannot be extracted without a module for anaphora resolution. Yet again, to the best of our knowledge, there is no such co-reference resolution system for Sanskrit.
More importantly, no existing tool has a capability of performing semantic tasks, which are a requirement for knowledge extraction. Manual annotation, therefore, is the only way to capture the semantic relations. In addition, it bypasses the entire NLP pipeline and, thus, has a high potential for creating a question-answering system that is much more accurate and reliable than a system based on automatically created knowledge graphs.
Another prevalent issue is the lack of datasets for training and evaluation of tasks such as question-answering or creation of knowledge bases. Creation of knowledge bases through manual annotation is, thus, of utmost importance both for the actual task of question-answering and for further research in the field, including automated knowledge base construction since these may act as ground-truth benchmark datasets for evaluation of future automated tools.
Annotation Process
Annotation has been done with the purpose of building a knowledge graph (KG). We fix the unit for annotation to be a line from a verse (śloka). We collect annotations of two typesentity and relation -described in detail in §3.3 and §3.4 respectively.
The corpus interface from Sangrahaka is capable of displaying extra information about each line. We use this feature to display word segmentation and morphological analysis of the text produced by SSCS and SHP, which can potentially help the annotators. Figure 4 shows a sample text from Dhānyavarga with linguistic information.
An annotator goes through the lines assigned to her and for each line, identifies the entities as well as the relationships between the entities appearing in it.
Corpus
Bhāvaprakāśanighaṇṭu is the nighaṇṭu portion of Bhāvaprakāśa. It contains a list and description of various medicinally relevant plants, flowers, fruits, animals, grains, animal products, metals, prepared substances, etc. These are divided into 23 chapters called vargas.
A general structure followed in the Bhāvaprakāśanighaṇṭu is as follows:
• Substances are semantically grouped into chapters. For example, all grains appear in the chapter Dhānyavarga.
• Each chapter contains several virtual sections, each pertaining to a single substance. Discussion about another substance starts only when a substance has been described in its entirety.4
• Each section about a substance has the following information:
  - Synonyms, if any, of the substance
  - Properties, e.g., color, smell, texture
  - Effects, e.g., effect on tridoṣa (vāta, pitta, and kapha)
  - Symptoms and diseases treated or cured by the substance
  - Variants, if any, of the substance
  - Properties of each variant, and their distinguishing characteristics
  - Comparison between the variants, if possible
  - Time and location where the substance is found or grown, if relevant
• The order of information components about a substance within a section may vary.
The entire Bhāvaprakāśanighaṇṭu contains 2087 verses, corresponding to 4201 lines. We have chosen Dhānyavarga, a chapter about grains, which has a wide variety of entity types and relations. It contains 90 verses, corresponding to 183 lines.
Ontology
We have created an ontology consisting of 25 entity types and 29 relationship types by carefully going through multiple chapters from the Bhāvaprakāśanighaṇṭu, including Dhānyavarga. An exhaustive list is given in Table 2.
The decision to add a certain entity type or relation type is made based on the importance of the concept, frequency of its occurrence, and nature of frequently asked questions.
For example, the concept of vāta, pitta and kapha, collectively referred to as tridoṣa or the fundamental elements (humors) of the body, is central to Ayurveda. Consequently, queries such as "What effect does a substance X have on (one of) the tridoṣa?" are among the most fundamental and common information requirements about a substance. We have therefore created a category "Tridoṣa" exclusively for these three entities. Any occurrence of these words or their synonyms, e.g., śleṣma, a synonym of kapha, results in the creation of an entity of type Tridoṣa.
Similarly, the type of effect any substance has on each of the tridoṣa is either an increment or decrement. Therefore, we have identified relations is Increased by and is Decreased by.
After the ontology has been finalized, the next step is annotation.
Entity Annotation
Entities correspond to nodes in the knowledge graph. When a word that represents an entity is encountered, its lemma (prātipadika) and the entity type it belongs to are identified, and the entity is marked.
As an example, consider the following line from śloka 31 of Dhānyavarga:
Devanagari: गोधू मः स ुमनोऽिप याििवधः स च कीिततः। IAST: godhūmaḥ sumano'pi syāttrividhaḥ sa ca kīrttitaḥ.
Meaning: Godhūma (wheat) is also called Sumana, and it is said to be of three kinds.
Here, there are two entities, godhūma and sumana, both of type "Substance". An entity needs to be added explicitly only the first time it is encountered.
In cases where a samāsa is used to indicate an effect on an entity, and the relation fits one of the relationship types, the relevant word (pada) from the segmentation (vigraha) of the samāsa is used. For example, consider the following line from śloka 33:
Devanagari: गोधू मः मध ुरः शीतो वातिपतहरो ग ुरुः।
IAST: godhūmaḥ madhuraḥ śīto vātapittaharo guruḥ.
Meaning: Godhūma is sweet, cold, hard to digest and removes (decreases) vāta and pitta.
Here, vātapittaharaḥ is a single word that uses samāsa to indicate that vāta and pitta are reduced by godhūma. Therefore, vātapittahara will not be added as an entity; instead, the entities vāta and pitta are recognized.
Relation Annotation
Relations correspond to edges in the knowledge graph. A relation that fits one of the relationship types from the ontology is identified by interpreting the śloka. The Subject and Object of the relation are then identified. Relations for which extra semantic information is known, e.g., madhura is known to be a rasa, are endowed with that extra information.
Consider the two example lines mentioned in the previous section (§3.3), from ślokas 31 and 33. The following relations are added based on these two lines:
sumana ⊢ is Synonym of → godhūma
madhura ⊢ is (rasa) Property of → godhūma
śīta ⊢ is Property of → godhūma
vāta ⊢ is Decreased by → godhūma
pitta ⊢ is Decreased by → godhūma
guru ⊢ is Property of → godhūma
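The annotation records behind such entries could be represented roughly as follows; the EntityAnnotation and RelationAnnotation classes and their field names are our own illustration, not the actual Sangrahaka data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EntityAnnotation:
    lemma: str          # prātipadika of the word
    entity_type: str    # e.g. "Substance", "Property", "Tridoṣa"
    line_id: int        # corpus line where the entity is first added

@dataclass
class RelationAnnotation:
    subject: str                  # lemma of the source entity
    relation: str                 # e.g. "is Synonym of", "is Decreased by"
    obj: str                      # lemma of the target entity
    detail: Optional[str] = None  # extra semantic information, e.g. "rasa"

# Annotations derived from ślokas 31 and 33 above
entities = [
    EntityAnnotation("godhūma", "Substance", 31),
    EntityAnnotation("sumana", "Substance", 31),
]
relations = [
    RelationAnnotation("sumana", "is Synonym of", "godhūma"),
    RelationAnnotation("madhura", "is Property of", "godhūma", detail="rasa"),
    RelationAnnotation("vāta", "is Decreased by", "godhūma"),
]
```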
It should be noted that the subject or the object may not be present as a word in the line that states a relationship about it. Consider, for example, the next line of śloka 33:
Devanagari: कफशु कपदो बयः िनग्धः सधानकृ सरः।
IAST: kaphaśukraprado balyaḥ snigdhaḥ sandhānakṛtsaraḥ.
Meaning: (Godhūma) increases kapha, śukra, bala, is snigdha, sandhānakṛt (helps in joining broken bones) and laxative.
Here, the description of the properties of godhūma (from the previous line) is continued. Therefore, one of the relations added is
kapha ⊢ is Increased by → godhūma
This relation has godhūma as the Object, although godhūma is not present in the line itself.
Unnamed Entities
Occasionally, an entity is referenced only by its properties and is not named at all in the text. Consider the following line from śloka 39:
Devanagari: मु गो बहुिवधः श्यामो हिरतः पीतकतथा। IAST: mudgo bahuvidhaḥ śyāmo haritaḥ pītakastathā.
Meaning: Mudga is of various types -black, green, and yellow.
Thus, there are three colored variants of the substance mudga, but they are not named. In such a case, we create three unnamed entities (denoted by X-prefixed nodes) with entity type "Substance", the same type as mudga, to refer to the three varieties. Each of these entities is given a unique identifier, which is a combination of the unnamed entity number and the line number in which it occurs. Thus, if the line number is 256358, the black variant is given the identifier X1-256358. Similarly, the green variant is identified as X2-256358, and the yellow variant as X3-256358.
To describe these variants, three relations are added as well:
śyāma ⊢ is (varṇa) Property of → X1-256358
harita ⊢ is (varṇa) Property of → X2-256358
pīta ⊢ is (varṇa) Property of → X3-256358
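A small sketch of the identifier scheme for unnamed entities described above; the helper function name is hypothetical.

```python
def unnamed_entity_id(index: int, line_number: int) -> str:
    """Identifier for the index-th unnamed entity introduced in a corpus line,
    e.g. the black variant of mudga in line 256358 becomes 'X1-256358'."""
    return f"X{index}-{line_number}"

# Three unnamed colour variants of mudga from śloka 39
variants = {"śyāma": 1, "harita": 2, "pīta": 3}  # black, green, yellow
for colour, idx in variants.items():
    print(f"{colour} ⊢ is (varṇa) Property of → {unnamed_entity_id(idx, 256358)}")
```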
The utility of such annotations becomes clear when these unnamed entities are later referred to in another line or another verse.
The next line of śloka 39 reads:
Devanagari: वे तो रतच ते षात ु पू वर् ः पू वो लघ ुः मृ तः ॥३९॥
IAST: śveto raktaśca teṣāntu pūrvaḥ pūrvo laghuḥ smṛtaḥ. ||39||
Meaning: … white and red. Among them, each is successively easier to digest.
The word teṣām here refers to the five varieties of mudga, and gives a relation between them. So, we get two new unnamed entities in this line, X1-256359 and X2-256359 (note how X1 and X2 are re-used but with different line numbers).
We also get a total of four new relations to capture the successive ease of digestion:
X1-256358 ⊢ is Better (in property laghu) than → X2-256358
X2-256358 ⊢ is Better (in property laghu) than → X3-256358
X3-256358 ⊢ is Better (in property laghu) than → X1-256359
X1-256359 ⊢ is Better (in property laghu) than → X2-256359
For the purpose of querying, the anonymous nodes are treated like any other node.
Auto-complete Suggestions
We have enhanced the annotation interface from Sangrahaka to improve the user experience with Sanskrit text by adding transliteration-based suggestions. There are numerous standard schemes for Devanagari transliteration.5 Whenever a Devanagari entity is annotated, we transliterate it using the indic-transliteration package (Sanskrit programmers, 2021) into all the available schemes, and we maintain an index of all the transliterations. When a user enters any text, we query this index and return all suggestions that match the lower-cased version of the user's text. For example, consider the Devanagari word 'माष', which transliterates into 'mASa' (HK), 'mASha' (ITRANS), 'māṣa' (IAST), 'maa.sa' (Velthuis), 'mARa' (WX) and 'mAza' (SLP1). A user may enter at least 3 starting characters from any of these schemes, e.g., 'mas', 'maa', 'maz', 'mar', etc., and the Devanagari word 'माष' will be suggested. The index is maintained globally, so once an entity is entered by any annotator, the completions for that entity become available to all annotators.
These suggestions are enabled in all text input fields, namely entity annotation, relationship annotation, and the querying interface. Figure 5 shows the modified annotation interface with auto-complete suggestions.
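A minimal sketch of the multi-scheme suggestion index, assuming the sanscript module of the indic-transliteration Python package; the index layout and function names are our simplification of the behaviour described above, not the deployed implementation.

```python
from indic_transliteration import sanscript

# Transliteration schemes mentioned above
SCHEMES = [sanscript.HK, sanscript.ITRANS, sanscript.IAST,
           sanscript.VELTHUIS, sanscript.WX, sanscript.SLP1]

index = {}  # lower-cased transliteration -> Devanagari entity

def add_entity(devanagari: str) -> None:
    """Register a Devanagari entity under all of its transliterations."""
    for scheme in SCHEMES:
        romanised = sanscript.transliterate(devanagari, sanscript.DEVANAGARI, scheme)
        index[romanised.lower()] = devanagari

def suggest(user_text: str, min_chars: int = 3):
    """Return Devanagari entities whose transliteration starts with the input."""
    prefix = user_text.lower()
    if len(prefix) < min_chars:
        return []
    return sorted({dev for rom, dev in index.items() if rom.startswith(prefix)})

add_entity("माष")
print(suggest("mas"))  # matches the lower-cased HK form 'masa' -> ['माष']
```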
Curation
After the annotation step and before constructing the knowledge graph, a thorough curation step is required to resolve errors or inconsistencies that may inadvertently creep in during the annotation process.
Equivalent Entities
The linguistic information that we have added to the corpus is supposed to serve as a guideline for the annotation. However, since this information is generated using automated tools, there may be errors. For example, the word grāhī (गाही) refers to substances that have the property of absorbing liquid and increasing digestive power. The prātipadika of this word reported by SHP is grāha (गाह) instead of grāhin, and an annotator may, by oversight, mark the incorrect lemma. Additionally, for substance names in the feminine gender that also have this property, the adjective grāhiṇī (गािहणी) is used; the correct prātipadika in that case would be grāhiṇī. Both nodes refer to the same property, so semantically they are equivalent to each other and should ideally be captured using a single name. Such instances are common for properties of substances.
To address this issue, we add a relation is Synonym of between these entities. This, in conjunction with the optimization mechanism described in §3.8, tackles the issue of equivalent entities.
Inconsistent Node Categories
There may be differences of opinion between annotators regarding which category a particular node should belong to. For example, the entity jvara (वर) refers to fever. This entity was marked as a Symptom by some annotators and as a Disease by others. Such cases were resolved through discussion among the curators.
Missing Node Categories
The framework allows entities to be mentioned in relationships without being added as entities. While care was taken to always add entities before marking relationships involving them, there may still be instances of human error where an annotator forgets to mark an entity. We created a set of inference rules to infer as many such occurrences as possible. For example, if an entity is marked as the source of the relation is Property of without having been added as an entity, we can automatically create that entity by assigning it the category 'Property'.
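One such inference rule could look like the sketch below, assuming annotations are available as (subject, relation, object) triples; the rule table and function are illustrative only.

```python
# If an entity appears only as the *source* of one of these relations,
# infer its category; the rule table is illustrative and can be extended.
SOURCE_CATEGORY_RULES = {
    "is Property of": "Property",
}

def infer_missing_entities(entities, relations):
    """entities: lemma -> category; relations: (subject, relation, object) triples."""
    inferred = dict(entities)
    for subject, relation, _obj in relations:
        if subject not in inferred and relation in SOURCE_CATEGORY_RULES:
            inferred[subject] = SOURCE_CATEGORY_RULES[relation]
    return inferred

entities = {"godhūma": "Substance"}
relations = [("madhura", "is Property of", "godhūma")]
print(infer_missing_entities(entities, relations))
# {'godhūma': 'Substance', 'madhura': 'Property'}
```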
Symmetric Relationships
The relation is Synonym of is symmetric: if A is a synonym of B, then, by definition, B is also a synonym of A. A query can be made using any of the synonyms, and the system should still be able to return the correct answer.
Suppose S_1, S_2, …, S_N are N synonyms of a substance. If the synonym group is completely captured, then a user should be able to query using any synonym and still get the desired result. Properties of the substance corresponding to this synonym group can also be scattered across the S_i's. Say there are M properties P_1, P_2, …, P_M, and some of the relations are P_1 is Property of S_1, P_2 is Property of S_4, P_3 is Property of S_3, and so on. Now, if we want to query whether the substance has property P_1 but we search using the name S_2, a direct query will not work, as there is no direct relation between the nodes S_2 and P_1. For the correct answer, we would have to find every synonym of S_2 and check whether any of them has the property P_1. This requires a path query. A path query involving N synonyms may require as many as N − 1 edge traversals. Path queries are NP-hard (Mendelzon and Wood, 1995) and are, therefore, computationally expensive.
For example, rājikā, kṣava, kṣutābhijanaka, kṛṣṇīkā, kṛṣṇasarṣapa, rājī, kṣujjanikā, āsurī, tīkṣṇagandhā and cīnāka are all names of the same substance. A relation is added as follows:
uṣṇa ⊢ is Property of → rājikā
Now, suppose that we want to query for a property of the substance kṣava, which, while referring to the same entity as rājikā, does not have a property edge incident upon it.
We, therefore, are forced to use a path query, and the query has to explore all the synonym paths from kṣava to find out if kṣava itself or one of its synonyms has any property edge. The number of such paths can be impractically large, especially for large knowledge graphs.
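To illustrate the cost, assuming synonym and property edges are stored under the labels SYNONYM_OF and PROPERTY_OF (illustrative names, not necessarily the labels used in the deployed graph), the pre-optimization lookup needs a variable-length Cypher path pattern:

```python
# Pre-optimization lookup for "does kṣava, or any of its synonyms, have a
# property edge?". The *0.. pattern follows synonym chains of arbitrary
# length, so the database may explore up to N-1 hops per synonym group.
PATH_QUERY = """
MATCH (s {lemma: 'kṣava'})-[:SYNONYM_OF*0..]-(syn)<-[:PROPERTY_OF]-(p)
RETURN DISTINCT p.lemma
"""
```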
We perform a simple optimization heuristic to tackle this issue. We first identify the synonym S_K with the highest degree among all the synonyms, i.e., K ∈ {1, …, N} such that K = argmax_i degree(S_i). We treat this as the canonical name for that synonym group, and we add a relation is Synonym of from every S_i, i ≠ K, to S_K. Further, we transfer all the edges (other than the is Synonym of edges) from every S_i, i ≠ K, to S_K. In other words, if S_i was connected to a node V by a relation R, then after optimization S_K is connected to V by relation R. Now, every synonym has a direct edge to the canonical name, with all the properties attached to the canonical name only. Thus, a query on any synonym has to traverse at most one edge before reaching the desired node.
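A sketch of this heuristic over an in-memory list of (source, relation, target) triples; this is our own simplified rendering, not the code used to build the deployed graph.

```python
from collections import Counter

def canonicalise_synonyms(synonyms, edges):
    """Pick the highest-degree member of a synonym group as canonical,
    transfer all non-synonym edges to it, and point every other member
    at it with a single 'is Synonym of' edge."""
    degree = Counter()
    for s, _r, t in edges:
        degree[s] += s in synonyms
        degree[t] += t in synonyms
    canonical = max(synonyms, key=lambda n: degree[n])

    rewritten = []
    for s, r, t in edges:
        if r == "is Synonym of" and s in synonyms and t in synonyms:
            continue  # replaced by the canonical synonym edges added below
        s = canonical if s in synonyms else s
        t = canonical if t in synonyms else t
        rewritten.append((s, r, t))
    rewritten += [(n, "is Synonym of", canonical) for n in synonyms if n != canonical]
    return rewritten

edges = [("uṣṇa", "is Property of", "rājikā"),
         ("kṣava", "is Synonym of", "rājikā")]
print(canonicalise_synonyms({"rājikā", "kṣava"}, edges))
# [('uṣṇa', 'is Property of', 'rājikā'), ('kṣava', 'is Synonym of', 'rājikā')]
```

After this rewrite, a one-hop pattern (e.g. `*0..1` instead of `*0..` in the path query sketched above) suffices for lookups starting from any synonym.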
At the end of the curation and optimization steps, the knowledge graph consists of 410 nodes and 764 relationships.
Although the ideal way of question-answering is to pose queries in natural language, the current state of Sanskrit NLP tools unfortunately does not allow that. Hence, to simulate natural language queries, we use query templates.
The annotation and querying platform that we use, Sangrahaka, uses the Neo4j graph database6 for storing and querying the knowledge graph. Cypher7 is Neo4j's graph query language; it is inspired by SQL but optimized for graph querying, and it uses an intuitive ASCII-art syntax. The platform utilizes the power of Cypher for connecting to the graph database. Natural language queries are simulated using query templates.
Query Templates
A query template consists of a set of natural language templates and an equivalent graph query template. Each of these templates contains placeholders. The placeholders can be filled by choosing the required entity, entity type, or relation, converting the query template into a valid natural language query. The same replacement in the graph query template yields a valid graph query that can be used directly to fetch results from the graph database.
For example, consider a sample query template:
• Sanskrit: के पदाथार् ः {0} इित दोषय वधर् नं कु वर् ित।
• English: Which entities increase the dosha {0}?
• Cypher: MATCH (dosha:TRIDOSHA)-[relation:IS_INCREASED_BY]->(entity) WHERE dosha.lemma = "{0}" RETURN entity
The variable {0} here is a word representing an entity of type TRIDOSHA. The valid values for this variable are vāta, pitta, kapha, or one of their synonyms. Natural language questions such as "Which substances increase kapha?" can thus be realized using this query template.
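A sketch of how such a template could be instantiated and executed, assuming the official neo4j Python driver and the node and relationship labels from the example above; the connection details are placeholders.

```python
from neo4j import GraphDatabase

# Graph query template from the example; {0} is substituted textually,
# mirroring the placeholder replacement described above.
CYPHER_TEMPLATE = (
    'MATCH (dosha:TRIDOSHA)-[relation:IS_INCREASED_BY]->(entity) '
    'WHERE dosha.lemma = "{0}" RETURN entity'
)

def which_entities_increase(dosha_lemma: str,
                            uri: str = "bolt://localhost:7687",
                            auth=("neo4j", "password")):
    """Fill the placeholder and run the resulting Cypher query."""
    query = CYPHER_TEMPLATE.format(dosha_lemma)
    driver = GraphDatabase.driver(uri, auth=auth)
    try:
        with driver.session() as session:
            return [record["entity"] for record in session.run(query)]
    finally:
        driver.close()

# e.g. "Which substances increase kapha?"
# print(which_entities_increase("kapha"))
```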
In order to increase the number of questions that can be answered, we have created a set of generic queries that can model any query involving up to a single relation. It contains the following three query templates:
• Which entity is related to entity {0} by relation {1}?
• How is entity {0} related to entity {1}?
• Show all matches where an entity of type {0} has relation {1} with an entity of type {2}.
We have a total of 31 natural language query templates in Sanskrit8 to represent the most relevant queries. We have classified these templates semantically into 12 categories. The classification helps locate an intended query template faster. An exhaustive list of these query templates and their categories is given in Appendix B.
Query Answers
Results of graph queries are also graphs. The querying interface from Sangrahaka consists of a graphical and a tabular display. Figure 6 shows a sample output using the query interface featuring Sanskrit query templates. In the graph, hovering over a node lets the user see the properties associated with that node. Each node label is the lemma (word-stem) associated with that node. In addition, provenance information, such as the corpus line the node corresponds to and the identifier of the annotator(s) who added the entity, is also shown. The nodes are color-coded such that nodes referring to entities of the same type get the same color.9
Conclusions and Future Work
The current state of Sanskrit NLP makes manual annotation a necessity for semantic tasks. We propose the construction of a knowledge graph (KG) through a manual annotation process, with a special focus on capturing semantic information. We also introduce a mechanism to handle unnamed entities in a knowledge graph.
The deployed instance of the framework used for the purpose of annotation and querying can be accessed at https://sanskrit.iitk.ac.in/ayurveda/.
As a proof of concept, we have selected a chapter from the nighaṇṭu text Bhāvaprakāśanighaṇṭu, carefully created an ontology, and performed semantic annotation to construct a knowledge graph. Our methodology is extensible to other nighaṇṭu texts.
In the future, we plan to complete the annotation of the rest of the Bhāvaprakāśanighaṇṭu. We also plan to explore more classical texts, such as the Rāmāyaṇa and Mahābhārata, for annotating other kinds of relationships.
We also hope that the dataset created in the process will prove useful for further research efforts in the area of NLP in Sanskrit. We make the ontology and the dataset available at https://sanskrit.iitk.ac.in/ayurveda/dataset/.
Figure 1: Example of a Knowledge Graph (KG).
Figure 2: Workflow of semantic annotation for KG construction and querying
Figure 3: Dependency parse trees for sentences from śloka 2.
Figure 4: Sample text from Dhānyavarga with linguistic information
Table 1: Semantic variations due to richness of Sanskrit through examples from Dhānyavarga.
Concept | Words or Phrases
increases bala | balya, balada, balāvaha, balaprada, balakara, balakṛt
increase vāta | vātala, vātakṛt, vātakara, vātajanaka, vātajananī, vātātikopana, vātaprakopaṇa, vātavardhana, vātakopana
decreases pitta | pittaghna, pittapraṇāśana, pittapraśamana, pittahara, pittaghnī, pittajit, pittāpaha, pittajit, pittahṛt, pittanut, pittavināśinī
decreases vāta and pitta | vātapittaghna, pittavātaghna, pittavātavibandhakṛt, vātapittahara, vātapittahṛt
Table 2: Entity and relationship types in our ontology.
Relationships (29): is Synonym of, is Type of, is Variant of, is Property of, is (Not) Property of, is Similar to, is Better/Larger/Greater than, is Worse/Smaller/Lesser than, is Newer than, is Older than, is Best/Largest/Greatest among, is Medium among, is Worst/Smallest/Least among, is Ingredient of, is Part of, is (Not) Part of, is Disease of, is Caused by, is (Not) Caused by, is Benefited by, is Harmed by, is Produced by, is Removed/Cured by, is Increased by, is Decreased/Reduced by, is Preparation of, is (Absence/Lack of) Preparation of, is Location of, is Time of
8. कण्डन ेन िवना शु लाः है मताः (च) शालयः मृ ताः (सित)।
9. शालयः रतशािलः सकलमः पाण्डु कः शकु नाहृतः स ुगधकः कदर् मकः महाशािलः दूषकः प ुपाण्डकः प ुण्डरीकः मिहषमतकः दीघर् शू कः काचनकः हायनः लोधप ुपकः च तथा इयायाः बहवः बहुदे शजाः सित।
10. गथिवतरभीते ः ते समताः अत न भािषताः (सित)।
11. शालयः मध ुराः िनग्धाः बयाः बधापवचर् सः कषायाः लघवः रुयाः वयार् ः वृ याः बृ ं हणाः अपािनलकफाः शीताः िपतनाः तथा मू तलाः च (सित)।
12. दग्धमृ जाताः शालयः कषायाः लघ ुपािकनः सृ टमू तप ुरीषाः रूक्षाः ले मापकष र्णाः च (सित)।
13. कै दाराः (शालयः) वातिपतनाः ग ुरवः कफशु कलाः कषायाः अपवचर् काः मे याः बलावहाः च एव (सित)।
14. थलजाः (शालयः) वादवः िपतकफनाः वातविनदाः िकिचद ् ितताः कषायाः िवपाके कटु काः च

B Query Templates
Category | Sanskrit Template | English Template
सू िच (Contents) | पदाथार् नां के पकाराः। | What are all the entity types?
सू िच (Contents) | पदाथे ष ु के सबधाः। | What are all the relationships?
सू िच (Contents) | के के दयाः। | What are all the substances?
सू िच (Contents) | के के पदाथार् ः। | What are all the entities?
वणर् न (Detail) | {0} इयय िवषये दशर् य। | Show some details about {0}.
वणर् न (Detail) | {0} इयय िवषये अिधकं दशर् य। | Show some more details about {0}.
पकार (Type) | {0} इयय पकारः कः। | What is the type of {0}?
पकार (Type) | सवे {0} इित पकारय पदाथार् ः िचन ु। | Find all the entities of type {0}.
दय (Substance) | {0} इयय ग ुणाः के । | What are the properties of {0}?
दय (Substance) | {0} इित दयय पकाराः के । | What are the types/variants of the substance {0}?
ग ुण (Property) | के षां दयाणां {0} इित ग ुणः अित। | Which substances have a property {0}?
समानाथ र्क (Synonym) | {0} इयय अयािन नामािन कािन। | What are the synonyms of {0}?
सबध (Relation) | {0} इित सबधे न बधं सवं दशर् य। | Find all entities related by the relation {0}.
सबध (Relation) | {0} {1} एतयोः मये कः सबधः। | What is the relation between {0} and {1}?
ितदोष (Tridoṣa) | के पदाथार् ः {0} इित दोषय वधर् नं कु वर् ित। | Which entities increase the dosha {0}?
ितदोष (Tridoṣa) | के पदाथार् ः {0} इित दोषय हासं कु वर् ित। | Which entities decrease the dosha {0}?
ितदोष (Tridoṣa) | के पदाथार् ः {0} इित दोषय वधर् नं {1} इित दोषय हासं च कु वर् ित। | Which entities increase the dosha {0} and decrease the dosha {1}?
ितदोष (Tridoṣa) | के पदाथार् ः {0} {1} एतयोः दोषयोः वधर् नं {2} इित दोषय हासं च कु वर् ित। | Which entities increase the doshas {0} and {1} and decrease the dosha {2}?
ितदोष (Tridoṣa) | के पदाथार् ः {0} इित दोषय वधर् नं {1} {2} एतयोः दोषयोः हासं च कु वर् ित। | Which entities increase the dosha {0} and decrease the doshas {1} and {2}?
रोग (Disease) | के पदाथार् ः {0} इित रोगं कु वर् ित। | Which entity causes the disease {0}?
रोग (Disease) | के पदाथार् ः {0} इित रोगं हरित। | Which entity cures the disease {0}?
पभाव (Effect) | के पदाथार् ः {0} एतं िवकु वर् ित। | Which entities affect {0}?
पभाव (Effect) | के पदाथार् ः {0} एतम ै लाभपदाः। | Which entities benefit {0}?
पभाव (Effect) | के पदाथार् ः {0} एतम ै क्षितपदाः। | Which entities harm {0}?
पभाव (Effect) | के पदाथार् ः {0} इयय वधर् नं कु वर् ित। | Which entities increase {0}?
पभाव (Effect) | के पदाथार् ः {0} इयय हासं कु वर् ित। | Which entities decrease {0}?
अिधकरण (Space-Time) | {0} इित पदाथ र्ः कदा जायते । | When does {0} grow?
अिधकरण (Space-Time) | {0} इित पदाथ र्ः कु त लयते । | Where is {0} found?
साधारण (Generic) | के पदाथार् ः {0} इित पदाथे न सह {1} इित सबधे न सबिधताः। | Which entity is related to {0} by relation {1}?
साधारण (Generic) | {0} इित पदाथ र्ः {1} इित पदाथे न सह कथं सबिधतः। | How is {0} related to {1}?
साधारण (Generic) | {0} इित पकारय पदाथै ः सह {1} इित सबधे न बधाः {2} इित पकारय पदाथार् न ् दशर् य। | Show all matches where an entity of type {0} has relation {1} with an entity of type {2}.
Please create an account and contact authors requesting access to annotation or querying interface.
Analysis: mahat (n. sg. acc. | n. sg. nom.)
3 We keep a timeout of 60 seconds, within which if the analysis is not found, we report the analysis as missing, i.e., 0 solutions for that line.
4 There is, however, no indication in the original text that a section/substance has ended and a new one has started. It must be inferred by reading the text.
5 https://en.wikipedia.org/wiki/Devanagari_transliteration
6 https://neo4j.com/
7 https://neo4j.com/developer/cypher/
8 We also have their English translated versions in the system.
Acknowledgements
We thank Anil Kumar Gourishetty from IIT Roorkee for his help in connecting us with Sanskrit volunteers.
9 Colors are not fixed. Thus, the color yellow is not indicative of a specific entity type. It only ensures that for a particular query answer, all yellow nodes will have the same entity type.
A Dhānyavarga Sample
A.1 Sample of Text
We present here an extract from Dhānyavarga used in §2. Table 3 contains the first 10 verses from Dhānyavarga and a version with sandhi resolved (columns: Original Text, Sandhi Split). The sentence boundaries are denoted using '⟨' and '⟩' markers.
We next list the prose version of the verses listed in Table 3 above.
A.2 Poetry-to-Prose Conversion of Verses from
References
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web, pages 722-735.
Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In KDD, pages 601-610.
Pawan Goyal, Gérard Huet, Amba Kulkarni, Peter Scharf, and Ralph Bunker. 2012. A distributed platform for Sanskrit processing. In COLING, pages 1011-1028.
Oliver Hellwig and Sebastian Nehrdich. 2018. Sanskrit word segmentation using character-level recurrent and convolutional neural networks. In EMNLP, pages 2754-2763.
Lynette Hirschman and Robert Gaizauskas. 2001. Natural language question answering: the view from here. Natural Language Engineering, 7(4):275.
Gérard Huet. 2009. Sanskrit segmentation. South Asian Languages Analysis Roundtable XXVIII, Denton, Ohio (October 2009).
Yoji Kiyota, Sadao Kurohashi, and Fuyuko Kido. 2002. "Dialog Navigator": A question answering system based on large text knowledge base. In COLING.
Amrith Krishna, Bishal Santra, Pavankumar Satuluri, Sasi Prasanth Bandaru, Bhumi Faldu, Yajuvendra Singh, and Pawan Goyal. 2016. Word segmentation in Sanskrit using path constrained random walks. In COLING, pages 494-504.
Amrith Krishna, Bishal Santra, Ashim Gupta, Pavankumar Satuluri, and Pawan Goyal. 2021. A graph-based framework for structured prediction tasks in Sanskrit. Computational Linguistics, 46(4):785-845.
Malhar Kulkarni, Chaitali Dangarikar, Irawati Kulkarni, Abhishek Nanda, and Pushpak Bhattacharyya. 2010. Introducing Sanskrit WordNet. In Global Wordnet Conference (GWC), pages 287-294.
Amba Kulkarni. 2016. Samsaadhanii: A Sanskrit computational toolkit.
Alberto O. Mendelzon and Peter T. Wood. 1995. Finding regular simple paths in graph databases. SIAM Journal on Computing, 24(6):1235-1258.
Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Bishan Yang, Justin Betteridge, Andrew Carlson, Bhanava Dalvi, Matt Gardner, Bryan Kisiel, et al. 2018. Never-ending learning. Communications of the ACM, 61(5):103-115.
Sivaja S. Nair and Amba Kulkarni. 2010. The knowledge structure in Amarakośa. In International Sanskrit Computational Linguistics Symposium (ISCLS), pages 173-189.
Jay Pujara and Sameer Singh. 2018. Mining knowledge graphs from text. In WSDM, pages 789-790.
Sanskrit programmers. 2021. Indic-transliteration, jul 23. [Online; accessed 2021-11-18].
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge. In WWW, pages 697-706.
Hrishikesh Terdalkar and Arnab Bhattacharya. 2019. Framework for question-answering in Sanskrit through automated construction of knowledge graphs. In International Sanskrit Computational Linguistics Symposium (ISCLS), pages 97-116.
Hrishikesh Terdalkar and Arnab Bhattacharya. 2021. Sangrahaka: A tool for annotating and querying knowledge graphs. In ESEC/FSE 2021, pages 1520-1524.
Ellen M. Voorhees. 1999. The TREC-8 question answering track report. In TREC, volume 99, pages 77-82.
Xindong Wu, Jia Wu, Xiaoyi Fu, Jiachen Li, Peng Zhou, and Xu Jiang. 2019. Automatic knowledge graph construction: A report on the 2019 ICDM/ICBK contest. In ICDM, pages 1540-1545.
Scott Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In IJCNLP, pages 1321-1331.