paper_id (string, length 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, nullable) | evidence (string, length 0-37.7k) | question (string, length 4-11.7k) | answer (string, length 1-26k) |
---|---|---|---|---|---|
null | false | null | What are the three official languages in Belgium ? | Dutch, French, German |
|
null | false | 113 | In this section we discuss the state of the art on conversational systems in three perspectives: types of interactions, types of architecture, and types of context reasoning. Then we present a table that consolidates and compares all of them.
ELIZA BIBREF11 was one of the first programs created to process natural language. Joseph Weizenbaum created it at MIT in 1966, and it is well known for acting like a psychotherapist, needing only to reflect patients' statements back at them. ELIZA was created to tackle five "fundamental technical problems": the identification of critical words, the discovery of a minimal context, the choice of appropriate transformations, the generation of appropriate responses to the transformation or in the absence of critical words, and the provision of an ending capacity for ELIZA scripts.
Right after ELIZA came PARRY, developed in the early 1970s by Kenneth Colby, a psychiatrist at Stanford University. The program was written in the MLISP language (meta-lisp) on the WAITS operating system running on a DEC PDP-10, and the code is non-portable. Parts of it were written in PDP-10 assembly code and others in MLISP; there may be other parts that require other language translators. PARRY was the first system to pass the Turing test: psychiatrists were able to make the correct identification only 48 percent of the time, which is no better than random guessing.
A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) BIBREF12 appeared in 1995, but its current version utilizes AIML, an XML language designed for creating stimulus-response chat robots BIBREF13. The A.L.I.C.E. bot has, at present, more than 40,000 categories of knowledge, whereas the original ELIZA had only about 200. The program is unable to pass the Turing test, as even a casual user will often expose its mechanistic aspects in short conversations.
Cleverbot (1997-2014) is a chatbot developed by the British AI scientist Rollo Carpenter. It passed a 2011 Turing test at the Techniche techno-management festival held by the Indian Institute of Technology Guwahati. Volunteers participated in four-minute typed conversations with either Cleverbot or humans, with Cleverbot voted 59.3 per cent human, while the humans themselves were rated just 63.3 per cent human BIBREF14.
A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) [13] appeared in 1995, but its current version utilizes AIML, an XML language designed for creating stimulus-response chat robots [14]. | What is the A.L.I.C.E. in this paper? | A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) appeared in 1995, but its current version utilizes AIML, an XML language designed for creating stimulus-response chat robots. |
null | false | null | Give me a list of artists and bands associated with City Pop genre that gained popularity in 70s and 80s Japan | 1. Anri
2. Mariya Takeuchi
3. Miki Matsubara
4. Hiroshi Sato
5. Masayoshi Takanaka
6. Yamashita Tatsuro
7. Happy End
8. Casiopea |
|
null | false | null | What continent is Peru a part of? | Peru is in South America |
|
null | false | null | Which is a species of fish? Velvetfish or Cotton | Velvetfish |
|
null | false | 295 | We used the Mboshi5K corpus BIBREF13 as a test set for all the experiments reported here. Mboshi (Bantu C25) is a typical Bantu language spoken in Congo-Brazzaville. It is one of the languages documented by the BULB (Breaking the Unwritten Language Barrier) project BIBREF14 . This speech dataset was collected following a real language documentation scenario, using Lig_Aikuma, a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets BIBREF15 . The corpus is multilingual (5130 Mboshi speech utterances aligned to French text) and contains linguists' transcriptions in Mboshi (in the form of a non-standard graphemic form close to the language phonology). It is also enriched with automatic forced-alignment between speech and transcriptions. The dataset is made available to the research community. More details on this corpus can be found in BIBREF13 .
TIMIT is also used as an extra speech corpus to train the informative prior. We used two different sets of features: the mean-normalized MFCC + INLINEFORM0 + INLINEFORM1 features generated by HTK, and the Multilingual BottleNeck (MBN) features BIBREF16 trained on the Czech, German, Portuguese, Russian, Spanish, Turkish and Vietnamese data of the Global Phone database.
This speech dataset was collected following a real language documentation scenario, using Lig Aikuma2 , a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets [16]. | What is the Lig Aikuma? | It is a mobile app specifically dedicated to fieldwork language documentation, which works both on android powered smartphones and tablets. |
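The feature setup mentioned in the entry above (mean-normalized MFCCs with delta features produced by HTK, plus multilingual bottleneck features) is only described at a high level. Purely as an illustration, and not the authors' HTK/MBN pipeline, the sketch below computes comparable MFCC + delta + delta-delta features with per-utterance mean normalization using librosa; the file name and all parameter values are assumptions.

```python
# Illustrative sketch only: the paper used HTK-generated features; librosa and
# the parameter choices here are assumptions for demonstration purposes.
import librosa
import numpy as np

def mfcc_delta_features(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Return a (frames, 3 * n_mfcc) array of mean-normalized MFCC + delta + delta-delta."""
    audio, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, T)
    mfcc -= mfcc.mean(axis=1, keepdims=True)                     # cepstral mean normalization
    delta1 = librosa.feature.delta(mfcc, order=1)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta1, delta2]).T                   # (T, 3 * n_mfcc)

# Usage (hypothetical file name):
# feats = mfcc_delta_features("mboshi_utterance_0001.wav")
```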
null | false | 267 | The power of neural networks comes from their ability to find data representations that are useful for classification. Recurrent Neural Networks (RNN) are a special type of neural network, which can be thought of as the addition of loops to the architecture. RNNs use back propagation in the training process to update the network weights in every layer. In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). Inspired by the work by BIBREF15 , we experiment with combining various LSTM models enhanced with a number of novel features in an ensemble. More specifically we introduce:
In our experimentation we used a powerful type of RNN known as Long Short-Term Memory Network (LSTM). | What kind of RNN do they use in their experimentation? | Long Short-Term Memory Network |
1707.06939 | false | null | We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017.
We recruited 176 AMT workers to participate in our conceptualization task. | What crowdsourcing platform was used? | The answers are shown as follows:
* AMT
|
null | false | null | Why is nitrogen gas used during the packaging of potato chips? | The Brits call them “crisps” for a reason. There are at least three reasons for using 100% nitrogen:
Nitrogen contains no oxygen, unlike air; oxygen oxidizes the potatoes and promotes bacterial growth.
Nitrogen contains no water vapor; moisture removes the “crispness” from the chips and promotes bacterial growth.
Nitrogen inflates the bag which protects the chips, but, honestly, that could be done with air which costs less than pure nitrogen. But we know the downsides of air, don’t we? |
|
null | false | 335 | While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.
Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCT) to decouple each variable is expensive and not always feasible. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOC), or ted.com. However, the uncontrolled variables in an observational dataset always carry the possibility of incorporating the effects of “data bias” into the prediction model. Recently, the problems of using biased datasets have become apparent. BIBREF3 showed that the error rates of commercial face detectors for dark-skinned females are 43 times higher than for light-skinned males due to bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue.
We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.
For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk by 14 different labels—Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.
We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
In the first architecture, we use LSTM (Hochreiter and Schmidhuber, 1997) for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM (Tai et al., 2015) to represent the input sentences in the form of a dependency tree. | What neural network architectures did the authors use in the prediction task? | In the first architecture, the authors use LSTM and in the second architecture, the authors use TreeLSTM. |
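The entry above describes predicting 14 viewer-rating labels from TED transcripts with a word-sequence LSTM and a TreeLSTM. The sketch below is a minimal, hedged PyTorch version of the word-sequence variant only (an LSTM encoder feeding a sigmoid output over 14 labels); the vocabulary size, dimensions, and multi-label head are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a word-sequence rating predictor; hyperparameters and the
# multi-label sigmoid head are assumptions, not the authors' exact setup.
import torch
import torch.nn as nn

class TranscriptRatingLSTM(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=300, hidden_dim=128, n_labels=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_labels)

    def forward(self, token_ids):                    # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)             # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.out(h_n[-1]))       # per-label probabilities

model = TranscriptRatingLSTM()
probs = model(torch.randint(1, 30000, (2, 50)))       # two dummy transcripts of 50 tokens
print(probs.shape)                                    # torch.Size([2, 14])
```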
null | false | null | In the 19th century, Ljubija developed into one of the most important locations for the area's economy due to its iron ore deposits. After World War II Ljubija became one of the largest iron mines in the world which employed more than 5,000 people at any one time. Until 1963 it was governed as a separate municipality, before it was merged with Prijedor. | List some important aspects about the location Ljubija from the given text | 1. Because of its iron ore resources, Ljubija became one of the most significant destinations for the region's economy in the 19th century.
2. Ljubija grew to be one of the world's largest iron mines after World War II, employing more than 5,000 people at any given moment.
3. Prior to being amalgamated with Prijedor, it was governed as an independent municipality until 1963. |
null | false | null | What is a Haiku? | A haiku is a short poem composed of three phrases and 17 syllables, with the first and last phrases being 5 syllables and the second phrase being 7 syllables. Originating from Japan, haikus are typically centered around nature-related topics, such as Spring. |
|
null | false | null | What are Marge's sisters named on the Simpsons | Patty & Selma |
|
null | false | null | The Actors' Equity Association (AEA), commonly called Actors' Equity or simply Equity, is an American labor union representing those who work in live theatrical performance. Performers appearing in live stage productions without a book or through-storyline (vaudeville, cabarets, circuses) may be represented by the American Guild of Variety Artists (AGVA). The AEA works to negotiate quality living conditions, livable wages, and benefits for performers and stage managers. A theater or production that is not produced and performed by AEA members may be called "non-Equity". | What is Equity theater? | A theater or production that is performed by Actors' Equity Association (AEA) members is often called "Equity Theater". Actors' Equity Association commonly called Actor's Equity or more simply Equity, is an American Labor union representing those who work in live theatrical performance. A theater or production not produced by Actors' Equity members is often called "non-Equity". |
null | false | null | Why does it help to focus on the breath when practicing mindfulness? | The art of meditation, which is another name for the practice of mindfulness, is all about being in the present. In other words, cultivating an awareness of what is happening in the present moment. This is done by quietening the mind of all the random thoughts that are being continuously generated, and having the senses alerted to the signals that are being passed in the moment. The signals are the sensations or emotions or sounds that one is feeling and hearing in the present moment. To accomplish this, the breath serves as an anchor to the present moment. Being aware of one's breath keeps us in the present moment. When the mind eventually wanders, the breath is a signal to tune back into the present moment. The breath is always with us and can be found in an instant. |
|
null | false | null | What is the difference between walking and running? | Walking and running both involve taking steps to gain velocity, typically on land. The difference comes down to speed: walking happens at a slower pace than running, and one traverses longer distances at a faster rate when running than when walking. |
|
null | false | 73 | Building effective machine learning models for text requires data and different resources such as pre-trained word embeddings and reusable lexicons. Unfortunately, most of these resources are not entirely transferable to different domains, tasks or languages. Sentiment analysis is one such task that requires additional effort to transfer knowledge between languages.
In this paper, we studied the research question: Can we build reusable sentiment analysis models that can be utilized for making inferences in different languages without requiring separate models and resources for each language? To that end, we built a recurrent neural network model in the language that had largest data available. We took a general-to-specific model building strategy where the larger corpus that had reviews from different domains was first used to train the RNN model and a smaller single-domain corpus of sentiment reviews was used to specialize the model on the given domain. During scoring time, we used corpora for the given domain in different languages and translated them to English to be able to classify sentiments with the trained model. Experimental results showed that the proposed multilingual approach outperforms both the majority baseline and the lexicon-based baseline.
In this paper we made the sentiment analysis model specific to a single domain. For future work, we would like to investigate the effectiveness of our model on different review domains including hotel reviews and on different problems such as detecting stance.
Unfortunately, most of these resources are not entirely transferable to different domains, tasks or languages. Sentiment analysis is one such task that requires additional effort to transfer knowledge between languages. | What are the limitations of this article? | Unfortunately, most of these resources are not entirely transferable to different domains, tasks or languages. Sentiment analysis is one such task that requires additional effort to transfer knowledge between languages. |
null | false | null | What is a geodetic system? | A geodetic datum or geodetic system (also: geodetic reference datum, geodetic reference system, or geodetic reference frame) is a global datum reference or reference frame for precisely representing the position of locations on Earth or other planetary bodies by means of geodetic coordinates. Datums are crucial to any technology or technique based on spatial location, including geodesy, navigation, surveying, geographic information systems, remote sensing, and cartography. A horizontal datum is used to measure a location across the Earth's surface, in latitude and longitude or another coordinate system; a vertical datum is used to measure the elevation or depth relative to a standard origin, such as mean sea level (MSL). Since the rise of the global positioning system (GPS), the ellipsoid and datum WGS 84 it uses has supplanted most others in many applications. The WGS 84 is intended for global use, unlike most earlier datums. |
|
1808.03986 | false | null | Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model.
In our method, we take an image as input and generate a natural question as output. | What is the input to the differential network? | The answers are shown as follows:
* image
|
null | false | null | What is a synonym for something which is eco friendly? | Eco friendly items can also be described as being "green" |
|
null | false | null | Simón José Antonio de la Santísima Trinidad Bolívar y Palacios (24 July 1783 – 17 December 1830) was a Venezuelan military and political leader who led what are currently the countries of Colombia, Venezuela, Ecuador, Peru, Panama and Bolivia to independence from the Spanish Empire. He is known colloquially as El Libertador, or the Liberator of America. | Who is Simon Bolivar? | Simón José Antonio de la Santísima Trinidad Bolívar y Palacios (24 July 1783 – 17 December 1830) was a Venezuelan military and political leader who led what are currently the countries of Colombia, Venezuela, Ecuador, Peru, Panama and Bolivia to independence from the Spanish Empire. He is known colloquially as El Libertador, or the Liberator of America.
Simón Bolívar was born in Caracas in the Captaincy General of Venezuela into a wealthy family of American-born Spaniards (criollo), but lost both parents as a child. Bolívar was educated abroad and lived in Spain, as was common for men of upper-class families in his day. While living in Madrid from 1800 to 1802, he was introduced to Enlightenment philosophy and married María Teresa Rodríguez del Toro y Alaysa, who died in Venezuela from yellow fever in 1803. From 1803 to 1805, Bolívar embarked on a Grand Tour that ended in Rome, where he swore to end the Spanish rule in the Americas. In 1807, Bolívar returned to Venezuela and proposed gaining Venezuelan independence to other wealthy creoles. When the Spanish authority in the Americas weakened due to Napoleon's Peninsular War, Bolívar became a zealous combatant and politician in the Spanish American wars of independence.
Bolívar began his military career in 1810 as a militia officer in the Venezuelan War of Independence, fighting Royalist forces for the first and second Venezuelan republics and the United Provinces of New Granada. After Spanish forces subdued New Granada in 1815, Bolívar was forced into exile on Jamaica. In Haiti, Bolívar met and befriended Haitian revolutionary leader Alexandre Pétion. After promising to abolish slavery in Spanish America, Bolívar received military support from Pétion and returned to Venezuela. He established a third republic in 1817 and then crossed the Andes to liberate New Granada in 1819. Bolívar and his allies defeated the Spanish in New Granada in 1819, Venezuela and Panama in 1821, Ecuador in 1822, Peru in 1824, and Bolivia in 1825. Venezuela, New Granada, Ecuador, and Panama were merged into the Republic of Colombia (Gran Colombia), with Bolívar as president there and in Peru and Bolivia.
In his final years, Bolívar became increasingly disillusioned with the South American republics, and distanced from them because of his centralist ideology. He was successively removed from his offices until he resigned the presidency of Colombia and died of tuberculosis in 1830. His legacy is diverse and far-reaching within Latin America and beyond. He is regarded as a national and cultural icon throughout Latin America; the nations of Bolivia and Venezuela (as the Boliviarian Republic of Venezuela) are named after him, and he has been memorialized all over the world in the form of public art or street names and in popular culture. |
1910.14254 | false | null | We used the annotated dataset reported by degen2015investigating, a dataset of the utterances from the Switchboard corpus of telephone dialogues BIBREF21 that contain the word some. The dataset consists of 1,362 unique utterances with a noun phrase containing some (some-NP). For each example with a some-NP, degen2015investigating collected inference strength ratings from at least 10 participants recruited on Amazon's Mechanical Turk. Participants saw both the target utterance and ten utterances from the preceding discourse context. They then rated the similarity between the original utterance like (UNKREF8) and an utterance in which some was replaced with some, but not all like (UNKREF9), on a 7-point Likert scale with endpoints labeled “very different meaning” (1) and “same meaning” (7). Low similarity ratings thus indicate low inference strength, and high similarity ratings indicate high inference strength.
We used the annotated dataset reported by degen2015investigating, a dataset of the utterances from the Switchboard corpus of telephone dialogues BIBREF21 that contain the word some. The dataset consists of 1,362 unique utterances with a noun phrase containing some (some-NP). For each example with a some-NP, degen2015investigating collected inference strength ratings from at least 10 participants recruited on Amazon's Mechanical Turk. Participants saw both the target utterance and ten utterances from the preceding discourse context. They then rated the similarity between the original utterance like (UNKREF8) and an utterance in which some was replaced with some, but not all like (UNKREF9), on a 7-point Likert scale with endpoints labeled “very different meaning” (1) and “same meaning” (7). Low similarity ratings thus indicate low inference strength, and high similarity ratings indicate high inference strength. | Which dataset do they use? | The answers are shown as follows:
* the annotated dataset reported by degen2015investigating, a dataset of the utterances from the Switchboard corpus of telephone dialogues BIBREF21 that contain the word some
|
null | false | 395 | Natural language generation and document classification have been widely conducted using neural sequence models based on the encoder–decoder architecture. The underlying technique relies on the production of a context vector as the document representation, to estimate both tokens in natural language generation and labels in classification tasks. By combining recurrent neural networks with attention BIBREF0, the model is able to learn contextualized representations of words at the sentence level. However, higher-level concepts, such as discourse structure beyond the sentence, are hard for an RNN to learn, especially for longer documents. We hypothesize that NLP tasks such as summarization and document classification can be improved through the incorporation of discourse information.
In this paper, we propose to incorporate latent representations of discourse units into neural training. A discourse parser can provide information about the document structure as well as the relationships between discourse units. In a summarization scenario, for example, this information may help to remove redundant information or discourse disfluencies. In the case of document classification, the structure of the text can provide valuable hints about the document category. For instance, a scientific paper follows a particular discourse narrative pattern, different from a short story. Similarly, we may be able to predict the societal influence of a document such as a petition document, in part, from its discourse structure and coherence.
Specifically, discourse analysis aims to identify the organization of a text by segmenting sentences into units with relations. One popular representation is Rhetorical Structure Theory (RST), proposed by mann1988rhet, in which the document is parsed into a hierarchical tree whose leaf nodes are the segmented units, known as Elementary Discourse Units (EDUs), and whose non-terminal nodes define the relations.
As an example, in Figure FIGREF1 the two-sentence text has been annotated with discourse structure based on RST, in the form of 4 EDUs connected with discourse labels attr and elab. Arrows in the tree capture the nuclearity of relations, wherein a “satellite” points to its “nucleus”. The Nucleus unit is considered more prominent than the Satellite, indicating that the Satellite is a supporting sentence for the Nucleus. Nuclearity relationships between two EDUs can take the following three forms: Nucleus–Satellite, Satellite–Nucleus, and Nucleus–Nucleus. In this work, we use our reimplementation of the state of the art neural RST parser of BIBREF1, which is based on eighteen relations: purp, cont, attr, evid, comp, list, back, same, topic, mann, summ, cond, temp, eval, text, cause, prob, elab.
This research investigates the impact of discourse representations obtained from an RST parser on natural language generation and document classification. We primarily experiment with an abstractive summarization model in the form of a pointer–generator network BIBREF2, focusing on two factors: (1) whether summarization benefits from discourse parsing; and (2) how a pointer–generator network guides the summarization model when discourse information is provided. For document classification, we investigate the content-based popularity prediction of online petitions with a deep regression model BIBREF3. We argue that document structure is a key predictor of the societal influence (as measured by signatures to the petition) of a document such as a petition.
Our primary contributions are as follows: (1) we are the first to incorporate a neural discourse parser in sequence training; (2) we empirically demonstrate that a latent representation of discourse structure enhances the summaries generated by an abstractive summarizer; and (3) we show that discourse structure is an essential factor in modelling the popularity of online petitions.
(3) we show that discourse structure is an essential factor in modelling the popularity of online petitions. | What is an essential factor in modeling the popularity of online petitions shown by them? | Discourse structure. |
null | false | null | What is the JRE in Java? | The Java Runtime Environment (JRE) is software that Java programs require to run correctly. Java is a computer language that powers many current web and mobile applications. The JRE is the underlying technology that communicates between the Java program and the operating system. |
|
null | false | null | What happens if the sky falls down? | The question does not make sense. Sky is a term used to describe the upward view from the surface of the Earth. It can also be described as the space between Earth’s surface and outer space. Therefore the idea of the sky falling down is absurd. However, the famous idiom “The Sky is falling” refers to panic and mass hysteria. |
|
null | false | null | Piombino's first published poems appeared in 1965 in American Weave Literary Journal. Throughout the 1970s, he continued to publish in small literary journals, such as The World, Dodgems, Telephone, and Roof.
His first volume of poetry, titled simply Poems, was published by the Sun & Moon Press in 1988 and won an Author's Recognition Award from the Postgraduate Center for Mental Health in 1992. His second chapbook, Light Street, did not appear until 1996. It was followed by Theoretical Objects (1999), a collection of manifestos, aphorisms, essays, and seemingly autobiographical prose poems.
Piombino's poetry took a different turn with Hegelian Honeymoon (2004), in which he moved away from the conventions of Language poetry to explore more traditional forms. Inspired by the poetry that accompanied an exhibition of Japanese calligraphy, the poems are a cross between haiku and aphorism. The poems were originally published on the SUNY/Buffalo poetics list-serve before being published in chapbook form by Chax Press. | Extract a list of poems and the year they were published in the following form: {Poem name}: {Year published}
And sort the list chronologically | Sure, here is a chronologically-sorted list of poems and the year they were published:
Poems: 1988
Light Street: 1996
Theoretical Objects: 1999
Hegelian Honeymoon: 2004 |
null | false | 2 | Adversarial learning has been an effective mechanism to resolve the problem of the input features between the training and test examples having large divergences BIBREF27 , BIBREF13 . It has been successfully applied on domain adaption BIBREF7 , cross-lingual learning BIBREF15 and multi-task learning BIBREF17 . All settings involve feature shifting between the training and testing.
In this paper, our setting is different. We are using annotations from non-experts, which are noisy and can influence the final performance if they are not properly processed. Directly learning from the resulting corpus may adapt the neural feature extraction to the biased annotations. In this work, we assume that individual workers have their own guidelines in mind after short training. For example, a perfect worker can annotate highly consistently with an expert, while common crowdsourcing workers may be confused and have different understandings of certain contexts. Based on this assumption, we adapt the original adversarial neural network to our setting.
Our adaption is very simple. Briefly speaking, the original adversarial learning adds an additional discriminator to classify the type of source inputs, for example, the domain category in the domain adaption setting, while we add a discriminator to classify the annotation workers. The features from the input sentence alone are not enough for worker classification; the annotation result of the worker is also required. Thus the inputs of our discriminator are different. Here we exploit both the source sentences and the crowd-annotated NE labels as basic inputs for the worker discrimination.
In the following, we describe the proposed adversarial learning module, including both the submodels and the training method. As shown by the left part of Figure 1 , the submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction.
Our adaption is very simple. Briefly speaking, the original adversarial learning adds an additional discriminator to classify the type of source inputs, for example, the domain category in the domain adaption setting, while we add a discriminator to classify the annotation workers. | How do the authors make the adaption for the original adversarial neural network to the setting in this work? | Briefly speaking, the original adversarial learning adds an additional discriminator to classify the type of source inputs. |
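The entries above describe training a worker discriminator adversarially against the shared feature extractor. The paper's code is not given here; a common way to implement this kind of adversarial objective is a gradient-reversal layer between the shared features and the discriminator, sketched below in PyTorch. The layer is generic, and its use as a stand-in for the paper's exact training scheme is an assumption.

```python
# Generic gradient-reversal layer, a standard trick for adversarial feature
# learning; treating it as equivalent to the paper's training scheme is an assumption.
import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity in the forward pass, negated (scaled) gradient in the backward pass:
        # the shared encoder is pushed to confuse the worker discriminator.
        return -ctx.lambd * grad_output, None

def grad_reverse(features, lambd=1.0):
    return GradReverse.apply(features, lambd)

# Usage sketch: shared_features come from the common Bi-LSTM/CNN encoder, and
# worker_logits feed a cross-entropy loss over annotator identities, e.g.
# worker_logits = discriminator(grad_reverse(shared_features))
```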
null | false | null | What are the main ingredients for making bread? | Flour, water, yeast, and salt |
|
2003.02639 | false | null | Three clear domains can be noticed in the behavior of $\langle cc \rangle $ versus $\wp $, at $t_f$, as shown in Fig. FIGREF15 (blue squares). Phase I: $\langle cc \rangle $ increases smoothly for $\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\wp < 0.4$; Phase II: a drastic transition appears at the critical domain $\wp ^* \in (0.4,0.6)$, in which $\langle cc \rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\wp $ is also found (Fig. FIGREF16) for $\wp ^*$; Phase III: single-word languages dominate for $\wp > 0.6$. The maximum value of $\langle cc \rangle $ indicate that word neighborhoods are completely correlated.
Three clear domains can be noticed in the behavior of $\langle cc \rangle $ versus $\wp $, at $t_f$, as shown in Fig. FIGREF15 (blue squares). Phase I: $\langle cc \rangle $ increases smoothly for $\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\wp < 0.4$; Phase II: a drastic transition appears at the critical domain $\wp ^* \in (0.4,0.6)$, in which $\langle cc \rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\wp $ is also found (Fig. FIGREF16) for $\wp ^*$; Phase III: single-word languages dominate for $\wp > 0.6$. The maximum value of $\langle cc \rangle $ indicate that word neighborhoods are completely correlated. | What are three possible phases for language formation? | The answers are shown as follows:
* Phase I: $\langle cc \rangle $ increases smoothly for $\wp < 0.4$, indicating that for this domain there is a small correlation between word neighborhoods. Full vocabularies are attained also for $\wp < 0.4$
* Phase II: a drastic transition appears at the critical domain $\wp ^* \in (0.4,0.6)$, in which $\langle cc \rangle $ shifts abruptly towards 1. An abrupt change in $V(t_f)$ versus $\wp $ is also found (Fig. FIGREF16) for $\wp ^*$
* Phase III: single-word languages dominate for $\wp > 0.6$. The maximum value of $\langle cc \rangle $ indicate that word neighborhoods are completely correlated
|
null | false | null | What is an alcoholic? | An alcoholic is someone who is unable to control the amount of alcohol they consume. Loss of control with any substance is how addiction starts. Alcoholism is a form of uncontrolled addiction. The most dangerous kind of addiction is one where a person is in denial that they have an addiction. Alcoholism, like other addictions, can lead to kidney failure and ultimately death if not treated. If you know someone with an alcohol addiction, it's best to try to intervene and get them help through a 12-step or Alcoholics Anonymous program. |
|
null | false | 74 | Figure FIGREF31 shows the evaluation of our approach when discovering new informative keywords for model training (see Section SECREF2: Keyword Discovery). We compare our human-AI collaborative way of discovering new keywords against a query expansion (QE) approach BIBREF23, BIBREF24 that leverages word embeddings to find similar words in the latent semantic space. Specifically, we use pre-trained word embeddings based on a large Google News dataset for query expansion. For instance, the top keywords resulting from QE for `politician' are, `deputy',`ministry',`secretary', and `minister'. For each of these keywords, we use the crowd to label a set of tweets and obtain a corresponding expectation.
We observe that our approach consistently outperforms QE by an average of $4.62\%$ and $52.58\%$ AUC on CyberAttack and PoliticianDeath, respectively. The large gap between the performance improvements for the two datasets is mainly due to the fact that microposts that are relevant for PoliticianDeath are semantically more complex than those for CyberAttack, as they encode noun-verb relationship (e.g., “the king of ... died ...”) rather than a simple verb (e.g., “... hacked.”) for the CyberAttack microposts. QE only finds synonyms of existing keywords related to either `politician' or `death', however cannot find a meaningful keyword that fully characterizes the death of a politician. For instance, QE finds the keywords `kill' and `murder', which are semantically close to `death' but are not specifically relevant to the death of a politician. Unlike QE, our approach identifies keywords that go beyond mere synonyms and that are more directly related to the end task, i.e., discriminating event-related microposts from non related ones. Examples are `demise' and `condolence'. As a remark, we note that in Figure FIGREF31(b), the increase in QE performance on PoliticianDeath is due to the keywords `deputy' and `minister', which happen to be highly indicative of the death of a politician in our dataset; these keywords are also identified by our approach.
Specifically, we use pre-trained word embeddings based on a large Google News dataset for query expansion. | What are used for query expansion? | Pre-trained word embeddings based on a large Google News dataset. |
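The query-expansion baseline described above retrieves the nearest neighbours of a seed keyword in a word-embedding space. As a hedged illustration of that baseline (not the authors' code), the snippet below uses gensim with pre-trained Google News word2vec vectors; the local path to the vectors file is a placeholder.

```python
# Sketch of embedding-based query expansion; the file path is hypothetical and
# gensim's KeyedVectors API is used in place of whatever tooling the authors used.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

def expand_query(seed_word, topn=5):
    # Nearest neighbours of the seed keyword in the embedding space,
    # e.g. 'politician' -> 'deputy', 'ministry', 'secretary', 'minister', ...
    return [word for word, _score in vectors.most_similar(seed_word, topn=topn)]

print(expand_query("politician"))
```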
null | false | null | The story begins only days after the conclusion of Taran Wanderer. With winter approaching, Taran and his companion Gurgi return from their wanderings to Caer Dallben after getting news from Kaw the crow that Princess Eilonwy has returned from the Isle of Mona. Indeed, they find her at home, along with her escort King Rhun of Mona and the former giant Glew, who had been magically restored to human size by a potion from Dallben.
Before Taran can propose to Eilonwy, the bard-king Fflewddur Fflam and his mount Llyan arrive with a gravely injured Gwydion, Prince of Don. Servants of Arawn had assaulted them and seized the magical black sword Dyrnwyn. Fflewddur also states that Taran was involved in the ambush, baffling everyone. With Achren's help, the truth is determined: Arawn himself has come from Annuvin to the verge of Caer Dallben in the guise of Taran, in order to lure Gwydion into the ambush.
Because Dyrnwyn may be pivotal as a threat to Arawn, Dallben consults the oracular pig Hen Wen to determine how it may be regained. During the reading, the ash rods used to communicate shatter and the two thirds of Hen Wen's answer are discouraging and vague. When Gwydion heals sufficiently, he sets out with Taran and others to meet with King Smoit. Gwydion insists that he alone should enter Annuvin to seek the sword, but Smoit's Cantrev Cadiffor is on the way. The small party divides, as Rhun and Eilonwy intend to visit the ships of Mona en route.
When Gwydion, Taran, and others reach Caer Cadarn, they are imprisoned by Magg, the treacherous former Chief Steward of Mona, who has entered service with Arawn and taken over the fortress. When Eilonwy approaches with the other party, she detects something amiss and they cautiously send Fflewddur Fflam to the fortress as a bard. After entertaining the soldiers for a night, he returns with the bad news. Then the companions encounter Gwystyl of the Fair Folk outside the stronghold, en route home after closing the waypost near Annuvin, personally bearing final observations to King Eiddileg about preparations for war by Arawn's forces. With Gwystyl's assistance and store of magical smokes, fires, and concealments, the companions break in and free the prisoners. The plan goes awry, however; King Smoit and his men are finally able to regain control only by Rhun's intervention, which costs his life.
Learning from Gwystyl of the activities in Annuvin, Gwydion turns from the quest for Dyrnwyn to planning for battle at Caer Dathyl. Gwystyl, Fflewddur, and Taran leave to gather support, respectively from the Fair Folk, the northern realms, and the Free Commots. Kaw, sent out by Taran to reconnoiter the enemy, is attacked by Gwythaints while spying near Annuvin, but manages to reach Medwyn, who asks all the creatures of air and land to oppose the forces of Arawn. Taran, Coll, Eilonwy, and Gurgi muster the Commots, who rally to their friendship with Taran, and sends them marching in groups to Caer Dathyl while the smiths and weavers rallied by Hevydd and Dwyvach work day and night to equip them.
Soon after Taran and the last Commots reach Caer Dathyl, King Pryderi arrives from the western realms. In council he announces his new allegiance to Arawn, for the good of all, because "Arawn will do what the Sons of Don have failed to do: Make an end of endless wars among the cantrevs, and bring peace where there was none before." He is rejected utterly but permitted to return unharmed to his army, and at the next day the battle begins. Although the Sons of Don and allies initially have the best of it, the Cauldron-Born arrive en masse before evening, overwhelming the allies and razing Caer Dathyl to the ground.
With High King Math killed, Gwydion is proclaimed the new High King. With the bulk of the Cauldron-Born deployed outside of Annuvin, Gwydion determines that the best chance is to attack while it is guarded by mortal men alone. He will lead the Sons of Don to waiting ships on the north coast and attack by sea, while Taran leads the Commots to delay the Cauldron-Born's return march, as their power wanes with time and distance from Annuvin.
Taran and his army are able to hold the tired Cauldron-Born warriors beyond arm's length by brute force, and turn the march from a straight and easy route into the rugged hills, although Coll dies in battle. Thanks to a company of Fair Folk, and to the animals sent by Medwyn, they destroy most of the Huntsmen who accompany and lead the undead. At last the Cauldron-Born break free of the hills and return to the lowland route. Regaining strength as they near Annuvin, it would be futile for the exhausted allies to meet them head-on again, so inevitably they take the long, easy route to Arawn's stronghold.
Taran and the remainder of his army finally reach Annuvin by a combination of the direct route, a mountain path of Doli's, and a secret pass over Mount Dragon shown to them by Achren. Taran sees that victory is nearly in Gwydion's hands, but also that the Cauldron-Born are about to reach Annuvin. In his alarm, Taran nearly falls off Mount Dragon, but is saved by the now-grown Gwythaint he had rescued so many years ago (The Book of Three). In a desperate attempt to fight off a group of Cauldron-Born who have discovered him on the mountain, he rolls a rock at them, and discovers Dyrnwyn in the hollow the stone occupied. Wielding Dyrnwyn, Taran slays the undead warrior who approaches to slay him, and at that instant all of the Cauldron-Born die as one.
Taran's group enters the fray, and the battle continues through the halls of Annuvin. Taran is almost deceived by Arawn - who has taken the guise of Gwydion - into giving up the sword. After the chaotic defeat of Arawn's forces, the companions gather before the Great Hall. Achren identifies Arawn in the form of a nearby serpent preparing to strike Taran and grabs him. He strikes her fatally, but Taran kills him with Dyrnwyn. With the death of Arawn, the stronghold of Annuvin bursts in flame and falls in ruins, destroying all of the magical implements inside; only Gurgi manages to save several scrolls containing knowledge of farming, smithing, and other crafts. The sword Dyrnwyn begins to fade, losing its magic.
The allies travel to Caer Dallben, where Gwydion tells them that in victory the Sons of Don, with all kinsmen and kinswomen, must return to the Summer Country. Indeed, all those who still have magic will depart, and the Fair Folk and Medwyn have closed their realms to outsiders. Dallben and Eilonwy must also go, and others who have served well, Taran among them, are given the chance to accompany them. Taran proposes to Eilonwy at last, and she accepts.
The Sons of Don plan to leave the next day. However, Taran becomes uncomfortable about his decision overnight. The witches Orddu, Orwen and Orgoch appear before him and reveal that they too are departing, and leave him with an unfinished tapestry depicting his life. He realizes there is much work to be done to rebuild Prydain, and he has made many promises; so he determines to remain behind. Eilonwy is able to willingly give up her magical nature in order to remain with him, and the two are married.
Dallben reveals that with this last quest, Taran has completed a path prophesied in the Book of Three whereby an orphan of "no station in life" would succeed the Sons of Don as High King. Dallben had traveled to seek such a one and try to hasten the day of Arawn's defeat; on this journey, he found a baby, hidden in the trees beside a battlefield and without any token of parentage, and took it in under the name Taran. Taran receives many gifts, including The Book of Three itself, although its powers, like all magic in Prydain, have also faded away with Arawn's demise, leaving it only as a mere chronicle of Taran's life. With Eilonwy by his side, Taran accepts his new responsibility and is hailed by his friends and battle companions as the new High King. | In this reference text summarizing plot of the book The High King, how did the sword Dyrnwyn lose its power? | When Taran killed Arawn with Dyrnwyn in the stronghold of Annuvin, all magical implements inside were destroyed - Dyrnwyn also lost its magic. |
null | false | null | How large is a regulation size basketball in the NBA? | The official size of the basketball used by the NBA is 29.5 inches in circumference. |
|
null | false | null | Next Generation reviewed the game, rating it three stars out of five, and stated that "Meat Puppet is the sort of game that keeps you playing just to see what happens next, so it's probably best not to give away anything else. With better control, it would have rated a star higher, no question. As it is, the gameplay is just solid enough to let you keep coming back."
Siew-Ching Goh for Australian newspaper The Age said "why set your heart on a showdown when the more likely outcome is being late for the appointment? You'll still have a lot of fun, anyway, running around with Lotos." | Show me general reviews received for game Meat Puppet using given text as a reference | 1. The game received a three out of five star review from Next Generation, who also noted that "Meat Puppet is the kind of game that keeps you playing just to see what happens next, so it's probably best not to give away anything else."
2. Without a doubt, it would have received a higher rating with better control. The gameplay is currently just reliable enough to keep you coming back.
3. "Why set your heart on a showdown when the more likely outcome is being late for the appointment?," wrote Siew-Ching Goh for the Australian publication The Age. You'll still have a great time playing with Lotos, though. |
1901.02262 | true | null | We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\subset $ ANS $\subset $ ALL.
The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. | Is there exactly one "answer style" per dataset? | Yes. |
1911.02821 | false | null | To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. We run the same experiment for five times and report the average score to ensure the reliability of results. For detailed hyper-parameter settings, please see Appendix. Besides, three popular CWS tools thulac BIBREF14, ictclas BIBREF15 and hanlp BIBREF16 are employed to segment the Chinese sentences into words.
To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm. In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc) as suggested in BERT-wwm BIBREF13 for both baselines and our method on each dataset. | What pre-trained models did they compare to? | The answers are shown as follows:
* BERT, ERNIE, and BERT-wwm
|
null | false | null | Identify which instrument is string or percussion: Damaru, Laouto | Damaru is percussion, Laouto is string. |
|
null | false | null | Housman received his BA in 1966 and MA in 1971 from Brandeis University. As one of the first postdocs in the lab of Harvey Lodish at MIT, Housman showed that all mammalian proteins begin with a methionine residue transferred from a specific met-initiator tRNA. Between 1973 and 1975 he taught at the University of Toronto and was on the staff of the Ontario Cancer Institute. He joined the MIT faculty in 1975. In his lab at MIT, he mentored Jim Gusella and Daniel Haber. | Where did David Houseman go to college? | David Housman received his BA in 1966 and his MA in 1971 from Brandeis University. |
null | false | 134 | We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.
In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, the suggested architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. | What's the difference between the conventional stacked LSTMs and the suggested architecture? | In the conventional stacked LSTMs, only hidden states are fed as input to the next layer, whereas the suggested architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. |
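The entry above summarizes the Cell-aware Stacked LSTM: an upper layer receives the lower layer's memory cell in addition to its hidden state, gated like the usual left-to-right recurrence. The sketch below is one plausible reading of a single CAS-LSTM cell step in PyTorch; the exact gating equations in the paper may differ, and all dimension choices are assumptions.

```python
# One plausible cell-aware LSTM step: the lower layer's memory cell gets its own
# forget-style gate, so vertical information flow is modulated like the horizontal one.
# This is an illustrative reconstruction, not the paper's verified equations.
import torch
import torch.nn as nn

class CellAwareLSTMCell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        # i, f_left, f_below, o, g -> 5 * hidden_dim pre-activations
        self.linear = nn.Linear(input_dim + hidden_dim, 5 * hidden_dim)

    def forward(self, x_t, h_prev, c_prev, c_below):
        z = self.linear(torch.cat([x_t, h_prev], dim=-1))
        i, f_left, f_below, o, g = z.chunk(5, dim=-1)
        i, f_left, f_below, o = map(torch.sigmoid, (i, f_left, f_below, o))
        g = torch.tanh(g)
        c_t = i * g + f_left * c_prev + f_below * c_below   # fuse left and lower memory
        h_t = o * torch.tanh(c_t)
        return h_t, c_t

cell = CellAwareLSTMCell(input_dim=128, hidden_dim=128)
h, c = cell(torch.randn(2, 128), torch.randn(2, 128),
            torch.randn(2, 128), torch.randn(2, 128))
```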
null | false | 461 | The problem of end-to-end learning of a communication system using an autoencoder has recently been shown to be a promising approach. We focus on the problem of test-time domain adaptation for such an autoencoder system whose channel is generatively-modeled using a mixture density network (MDN). Different from the setting of conventional training-time (unsupervised or semi-supervised) domain adaptation, here we have a fully-trained channel model and autoencoder from a source domain, that we would like to adapt to a target domain using only a small labeled dataset (and no unlabeled data). Moreover, since the distribution of the channel is expected to change frequently (e.g., a wireless link), the error rate of the autoencoder can degrade quickly, making it challenging to collect sufficient data for frequent retraining of the autoencoder. To address this, we propose a fast and sample-efficient method for adapting the autoencoder without modifying the encoder and decoder neural networks, and adapting only the MDN channel model. The method utilizes feature transformations at the decoder to compensate for changes in the channel distribution, and effectively present to the decoder samples close to the source distribution. Experimental evaluation on simulated datasets and real mmWave wireless channels demonstrate that the proposed method can adapt the MDN channel using very limited number of samples, and improve or maintain the error rate of the autoencoder under changing channel conditions.
The problem of end-to-end learning of a communication system using an autoencoder has recently been shown to be a promising approach. We focus on the problem of test-time domain adaptation for such an autoencoder system whose channel is generatively-modeled using a mixture density network (MDN). Different from the setting of conventional training-time (unsupervised or semi-supervised) domain adaptation, here we have a fully-trained channel model and autoencoder from a source domain, that we would like to adapt to a target domain using only a small labeled dataset (and no unlabeled data). | What is the focus of the paper? Is it the domain adaptation approach or the specific problem of domain adaptation (DA) of autoencoders using an MDN as the channel model? | The main focus of our paper is on the domain adaptation of an end-to-end autoencoder-based communication system, under a frequently-changing channel distribution. We also focus on a generative channel model based on mixture density networks. We agree with the reviewer that our earlier abstract seemed to focus more on the general domain adaptation problem. We have now revised the abstract to focus more on the specific problem of autoencoder and channel adaptation that we address in the paper. |
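The abstract above centers on a mixture density network (MDN) as the generative channel model. Purely as an illustrative sketch, and not the paper's model, the code below shows a minimal Gaussian MDN head in PyTorch that maps an encoded symbol to mixture weights, means, and scales of the channel output distribution; layer sizes and the number of components are arbitrary assumptions.

```python
# Minimal Gaussian mixture density network head; sizes and component count are
# illustrative assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNChannel(nn.Module):
    def __init__(self, in_dim=2, out_dim=2, n_components=5, hidden=64):
        super().__init__()
        self.out_dim, self.k = out_dim, n_components
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_components)                    # mixture weights
        self.mu = nn.Linear(hidden, n_components * out_dim)          # component means
        self.log_sigma = nn.Linear(hidden, n_components * out_dim)   # component scales (log)

    def forward(self, x):
        h = self.body(x)
        pi = F.softmax(self.pi(h), dim=-1)                           # (batch, k)
        mu = self.mu(h).view(-1, self.k, self.out_dim)               # (batch, k, out_dim)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.k, self.out_dim)
        return pi, mu, sigma                                         # parameters of p(y | x)

pi, mu, sigma = MDNChannel()(torch.randn(8, 2))   # 8 transmitted symbols (I/Q pairs)
```

The decoder-side adaptation described in the answer would then learn a transformation of received samples so that they again look like draws from the source-domain mixture.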
null | false | null | Identify which instrument is string or percussion: Samphor, Viola toeria | Viola toeria is string, Samphor is percussion. |
|
2001.09332 | false | null | The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences. | How big is dataset used to train Word2Vec for the Italian Language? | The answers are shown as follows:
* $421\,829\,960$ words divided into $17\,305\,401$ sentences
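For the Word2Vec record above, a minimal training sketch with gensim (4.x API) on a corpus of pre-tokenized sentences might look as follows; the file name and hyperparameters are illustrative assumptions, not the paper's settings.

```python
from gensim.models import Word2Vec

# One whitespace-tokenized sentence per line; streamed to avoid loading 2.6 GB into memory.
class Corpus:
    def __init__(self, path):
        self.path = path
    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                yield line.strip().split()

model = Word2Vec(
    sentences=Corpus("italian_corpus.txt"),  # hypothetical file
    vector_size=300, window=5, min_count=5, sg=1, workers=8, epochs=5,
)
model.save("w2v_italian.model")
print(model.wv.most_similar("pizza", topn=5))
```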
|
null | false | null | What is Grace Hopper known for? | Grace Hopper was the first person to create a compiler for a programming language and one of the first programmers of the Harvard Mark I computer, an electro-mechanical computer based on the Analytical Engine. Hopper's work with computers started in 1943, when she began working at the Bureau of Ordnance's Computation Project at Harvard, where she programmed the Harvard Mark I. Hopper not only programmed the computer but also created a comprehensive 500-page manual for it. Even though Hopper created the manual, which was widely cited and published, she was not specifically credited in it. Hopper is often credited with coining the terms "bug" and "debugging" after a moth caused the Mark II to malfunction. While a moth was indeed found and the process of removing it was called "debugging," the terms were already part of the language of programmers.
|
1909.03544 | false | null | pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word.
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096.
pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word.
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096. | What data is used to build the embeddings? | The answers are shown as follows:
* large raw Czech corpora available from the LINDAT/CLARIN repository
* Czech Wikipedia
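The BERT setup in the record above (averaging the last four layers, then averaging subword vectors belonging to the same word) can be sketched with the HuggingFace transformers library roughly as follows; the model name, the example sentence, and the exact pooling details are assumptions for illustration, not the authors' code.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True)

words = ["Kočka", "sedí", "na", "rohožce", "."]
enc = tok(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    hidden = bert(**enc).hidden_states           # tuple: embedding layer + 12 transformer layers
layer_avg = torch.stack(hidden[-4:]).mean(0)[0]  # average of last four layers, shape (seq_len, 768)

# Average subword vectors that belong to the same input word (special tokens have word id None).
word_vecs = []
ids = enc.word_ids(0)
for w in range(len(words)):
    idx = [i for i, wid in enumerate(ids) if wid == w]
    word_vecs.append(layer_avg[idx].mean(0))
```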
|
null | false | 110 | We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:
1.5-entmax: a Transformer with sparse entmax attention with fixed $\alpha =1.5$ for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models BIBREF14, but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components.
$\alpha $-entmax: an adaptive Transformer with sparse entmax attention with a different, learned $\alpha _{i,j}^t$ for each head.
The adaptive model has an additional scalar parameter per attention head per layer for each of the three attention mechanisms (encoder self-attention, context attention, and decoder self-attention), i.e., a scalar $a_{i,j}^t \in \mathbb {R}$ for each layer $i$, head $j$, and mechanism $t \in \lbrace \text{enc}, \text{ctx}, \text{dec}\rbrace $, and we set $\alpha _{i,j}^t = 1 + \operatorname{sigmoid}(a_{i,j}^t) \in ]1, 2[$. All or some of the $\alpha $ values can be tied if desired, but we keep them independent for analysis purposes.
We use a mini-batch size of 8192 tokens and warm up the learning rate linearly until 20k steps, after which it decays according to an inverse square root schedule. | What is the mini-batch size? | 8192 tokens. |
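For the adaptive $\alpha$-entmax variant described above, the per-head parameterization is simple to sketch: each head keeps a learnable scalar $a$ and uses $\alpha = 1 + \mathrm{sigmoid}(a)$, which stays in the open interval between 1 and 2. The PyTorch fragment below is illustrative only; the entmax transform itself is assumed to come from an external implementation (for example the authors' entmax package), passed in here as a callable.

```python
import torch
import torch.nn as nn

class AdaptiveAlphaHead(nn.Module):
    def __init__(self, entmax_fn):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(1))  # learned jointly with the rest of the model
        self.entmax_fn = entmax_fn             # assumed alpha-entmax implementation

    def attention_weights(self, scores):
        alpha = 1.0 + torch.sigmoid(self.a)    # alpha always lies in ]1, 2[
        return self.entmax_fn(scores, alpha=alpha, dim=-1)
```

With alpha held fixed at 1.5, the same interface reduces to the 1.5-entmax baseline mentioned in the record.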
null | false | null | What are pajamas? | Pajamas are comfortable, loose-fitting clothing that people wear to bed at night. They are also referred to as “PJs.” |
|
null | false | 46 | In SQA, each sample is a triple $(Q, D, A)$, where $Q$ is a question in either spoken or text form, $D$ is a multi-sentence spoken-form document, and $A$ is the answer in text form. The task of this work is extractive SQA; that means $A$ is a word span from the reference transcription of $D$. An overview framework of SQA is shown in Figure FIGREF1 . In this paper, we frame the source domain as reference transcriptions and the target domain as ASR hypotheses. Hence, we can collect source domain data more easily, and adapt the model to the target domain.
In this task, when the machine is given a spoken document, it needs to find the answer to a question from the spoken document. SQA can be solved by the concatenation of an ASR module and a question answering module. Given the ASR hypotheses of a spoken document and a question, the question answering module can output a text answer.
The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance.
In this paper, we frame the source domain as reference transcriptions and the target domain as ASR hypotheses. Hence, we can collect source domain data more easily, and adapt the model to the target domain. | What do authors frame source domain and target domain as respectively? | Reference transcriptions and ASR hypotheses. |
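The EM and F1 evaluation mentioned in the record above follows the standard SQuAD recipe: normalize both strings, then compare exact equality (EM) and token-overlap F1. The sketch below is a compact approximation of that recipe; the official SQuAD script remains the reference implementation.

```python
import re
import string
from collections import Counter

def normalize(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(p_toks) & Counter(g_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)
```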
null | false | null | How do you make an electronic music song? | There are quite a few genres into which electronic music songs are classified. Some of the most popular ones are techno, house, progressive house, trance, breakbeats, drum and bass, dubstep, and many more. Across all of these styles, the process of making an electronic music song is the same. You need to make a beat using a combination of drums, kicks, snares, claps, hi-hats, toms, and samples. These are organized into a rhythm that loops, and the rhythms are then layered to create the foundation for a song. Popular tools include digital audio workstations, also known as DAWs, as well as drum machines, samplers, and more. Besides the beats and samples, other instruments such as synthesizers and vocoders are often used. All of this is arranged in the DAW to create a song.
|
null | false | 121 | Until recently, efforts to bring out semantics from structured-data relied heavily on expert knowledge BIBREF22, BIBREF8. For example, in order to better transcribe numerical time series of weather data to a textual forecast, Reiter et al. BIBREF8 devise complex template schemes in collaboration with weather experts to build a consistent set of data-to-word rules.
Modern approaches to the wide range of tasks based on structured-data (e.g. table retrieval BIBREF2, BIBREF23, table classification BIBREF24, question answering BIBREF25) now propose to leverage progress in deep learning to represent these data into a semantic vector space (also called embedding space). In parallel, an emerging task, called “data-to-text", aims at describing structured data into a natural language description. This task stems from the neural machine translation (NMT) domain, and early work BIBREF26, BIBREF27, BIBREF10 represent the data records as a single sequence of facts to be entirely translated into natural language. Wiseman et al. BIBREF10 show the limits of traditional NMT systems on larger structured-data, where NMT systems fail to accurately extract salient elements.
To improve these models, a number of works BIBREF28, BIBREF12, BIBREF29 proposed innovative decoding modules based on planning and templates, to ensure factual and coherent mentions of records in generated descriptions. For example, Puduppully et al. BIBREF12 propose a two-step decoder which first targets specific records and then uses them as a plan for the actual text generation. Similarly, Li et al. BIBREF28 proposed a delayed copy mechanism where their decoder also acts in two steps: 1) using a classical LSTM decoder to generate delexicalized text and 2) using a pointer network BIBREF30 to replace placeholders by records from the input data.
Closer to our work, very recent work BIBREF11, BIBREF19, BIBREF13 have proposed to take into account the data structure. More particularly, Puduppully et al. BIBREF13 follow entity-centric theories BIBREF31, BIBREF32 and propose a model based on dynamic entity representation at decoding time. It consists in conditioning the decoder on entity representations that are updated during inference at each decoding step. On the other hand, Liu et al. BIBREF11, BIBREF19 rather focus on introducing structure into the encoder. For instance, they propose a dual encoder BIBREF19 which encodes separately the sequence of element names and the sequence of element values. These approaches are however designed for single-entity data structures and do not account for delimitation between entities.
Our contribution differs from previous work in several aspects. First, instead of flatly concatenating elements from the data-structure and encoding them as a sequence BIBREF11, BIBREF12, BIBREF10, we constrain the encoding to the underlying structure of the input data, so that the delimitation between entities remains clear throughout the process. Second, unlike all works in the domain, we exploit the Transformer architecture BIBREF21 and leverage its particularity to directly compare elements with each others in order to avoid arbitrary assumptions on their ordering. Finally, in contrast to BIBREF33, BIBREF13 that use a complex updating mechanism to obtain a dynamic representation of the input data and its entities, we argue that explicit hierarchical encoding naturally guides the decoding process via hierarchical attention.
More particularly, Puduppully et al. [29] follow entity-centric theories [10,20] and propose a model based on dynamic entity representation at decoding time. | What model has been proposed by Puduppully et al.? | A model based on dynamic entity representation at decoding time. |
null | false | null | On a cross country roadtrip across the United States where we expect to drive for many hours per day, categorize each of the following as either 'necessary', 'somewhat necessary' or 'nice to have'. A bathroom break, a music playlist, snacks, a map. | On a cross country roadtrip across the United States a bathroom break is necessary.
On a cross country roadtrip across the United States a music playlist is nice to have.
On a cross country roadtrip across the United States snacks are nice to have.
On a cross country roadtrip across the United States a map is somewhat necessary.
|
null | false | null | Identify which instrument is string or percussion: Malimba, Langspil | Langspil is string, Malimba is percussion. |
|
null | false | 490 | Figure 11: Layer-wise sparsity ratio results of LIP, SNIP and randomly pruned tickets. Note that we summarize the sparsity ratio of each layer: the ratio of the number of parameters whose values are equal to zero to the number of total parameters of the layer. The x-axis of these figures is composed of the serial numbers of the model layers. We sampled subnetworks with four different sparsities (sparsity = 36%, 59%, 89%, 95%) for inspection.****Figure 13: Learning curves using four different training targets: a clean image (Baby.png), the same image with added noise, the same image randomly scrambled, and white noise. Note that we use four different models: the LIP subnetwork (S = 89%), the randomly pruned subnetwork (S = 89%), the SNIP subnetwork (S = 89%) and the dense model (S = 0%). We trained them in isolation in the same experimental settings for 10000 iterations.****As shown in Fig. 11, the structure of the LIP subnetwork is drastically different from those found by SNIP and random pruning, in particular the distribution of layer-wise sparsity ratios. LIP tends to preserve weights of the earlier layers (closer to the input), while pruning the latter layers more aggressively (e.g., Fig. 11(a)). In contrast, SNIP tends to prune much more of the earlier layers compared to the latter ones. Random pruning by default prunes each layer at approximately the same ratio. Comparing the three methods seems to suggest that for finding effective and transferable LIP subnetworks, specifically keeping more weights at the earlier layers is important. That is an explainable finding, since for image restoration tasks the low-level features (color, texture, shape, etc.) presumably matter more and are more transferable than the high-level features (object categories, etc.). The earlier layers are known to capture more of the low-level image features, hence contributing more to retraining the image restoration performance with DIP. | What is the success of transferability of LIP (studying the sparsity pattern per network layer will help) and how to transfer DIP models to image classifications? | We have added the results of the layer-wise sparsity ratio of different subnetworks (LIP, SNIP, randomly pruned) in Figure 11 and also plotted the convergence curves of these subnetworks in Figure 13 to better explore the reasons for the powerful transferability of LIP subnetworks. We have found that LIP tends to preserve weights of the earlier layers (closer to the input), while pruning the latter layers more aggressively (e.g., Figure 11(a)). In contrast, SNIP tends to prune much more of the earlier layers compared to the latter ones. Random pruning by default prunes each layer at approximately the same ratio. Comparing the three methods seems to suggest that to find effective and transferable LIP subnetworks, specifically keeping more weights at the earlier layers is important. That is an explainable finding, since for image restoration tasks the low-level features (color, texture, shape, etc.) presumably matter more and are more transferable than the high-level features (object categories, etc.). The earlier layers are known to capture more low-level image features, hence contributing more to retraining the image restoration performance with DIP. The detailed discussions are included in [Cons1.] Reviewer wMsj.
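The layer-wise sparsity ratios discussed in the record above (fraction of zero-valued parameters per layer) are straightforward to compute for any pruned PyTorch model; the following helper is illustrative, not the authors' code, and the choice to skip one-dimensional parameters (biases, norm scales) is an assumption.

```python
import torch

def layerwise_sparsity(model):
    """Return {parameter_name: fraction of weights equal to zero}."""
    ratios = {}
    for name, param in model.named_parameters():
        if param.dim() > 1:  # skip biases and normalization scales
            ratios[name] = (param == 0).float().mean().item()
    return ratios

def overall_sparsity(model):
    zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
    total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
    return zeros / total
```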
null | false | null | How can one lose weight | There are several ways to shed some pounds, which include having a balanced diet that is low in fats and carbohydrates, and regular workouts or sports activities like running, brisk walking, swimming, and hitting the gym. Yoga is also a good alternative to attain a healthy lifestyle and keep your weight in check. Some people use mobile applications to record their daily calorie intake and try to eat a little less than the daily recommended calories.
|
null | false | 134 | In this paper, we proposed a method of stacking multiple LSTM layers for modeling sentences, dubbed CAS-LSTM. It uses not only hidden states but also cell states from the previous layer, for the purpose of controlling the vertical information flow in a more elaborate way. We evaluated the proposed method on various benchmark tasks: natural language inference, paraphrase identification, and sentiment classification. Our models achieve the new state-of-the-art accuracy on SNLI and Quora Question Pairs datasets and obtain comparable results on MultiNLI and SST datasets. The proposed architecture can replace any stacked LSTM under one weak restriction—the size of states should be identical across all layers.
For future work we plan to apply the CAS-LSTM architecture beyond sentence modeling tasks. Various problems e.g. sequence labeling, sequence generation, and language modeling might benefit from sophisticated modulation on context integration. Aggregating diverse contexts from sequential data, e.g. those from forward and backward reading of text, could also be an intriguing research direction.
For future work we plan to apply the CAS-LSTM architecture beyond sentence modeling tasks. Various problems such as sequence labeling and language modeling might benefit from sophisticated modulation on context integration. Aggregating diverse contexts from sequential data, e.g. those from forward and backward reading of text, could also be an intriguing research direction. | What do the authors plan to do in their future work? | For future work they plan to apply the CAS-LSTM architecture beyond sentence modeling tasks.
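The record above only summarizes CAS-LSTM at a high level, so the fragment below is a rough sketch of the general idea, letting an upper stacked-LSTM layer see both the hidden state and the cell state of the layer below, rather than the authors' exact equations; the gate names, the single projection of the lower cell state, and the update rule are all assumptions.

```python
import torch

def cell_aware_step(x, c_lower, h_prev, c_prev, W, U, V, b):
    """One step of an upper layer that also reads the lower layer's cell state.

    x: hidden state from the layer below; c_lower: cell state from the layer below.
    W, U, V project the input, previous hidden state, and lower cell state into
    five gates (input, forget, vertical gate, output, candidate). All state sizes
    are assumed identical across layers, matching the restriction stated above.
    """
    gates = x @ W + h_prev @ U + c_lower @ V + b
    i, f, g, o, c_hat = gates.chunk(5, dim=-1)
    i, f, g, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(g), torch.sigmoid(o)
    c_hat = torch.tanh(c_hat)
    c = f * c_prev + g * c_lower + i * c_hat  # lower cell state enters through its own gate
    h = o * torch.tanh(c)
    return h, c
```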
1709.07916 | false | null | The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics. The words with italic and underline styles in Table 2 demonstrate the relation among the four DDEO areas. Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ). The strongest correlation among the topics was determined to be between exercise and obesity ( INLINEFORM0 ). Other notable correlations were: diabetes and obesity ( INLINEFORM1 ), and diet and obesity ( INLINEFORM2 ).
Figure 2: DDEO Correlation P-Value
The main DDEO topics showed some level of interrelationship by appearing as subtopics of other DDEO topics.
Our results show users' interest about posting their opinions, sharing information, and conversing about exercise & diabetes, exercise & diet, diabetes & diet, diabetes & obesity, and diet & obesity (Figure FIGREF9 ).
Figure 2: DDEO Correlation P-Value | How strong was the correlation between exercise and diabetes? | weak correlation with p-value of 0.08
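For the topic-correlation record above, pairwise correlations between topics can be computed directly from per-document topic proportions of an LDA model. The sketch below assumes a matrix of document-topic weights and uses SciPy's Pearson correlation; both the random stand-in data and the choice of Pearson's r are assumptions, since the record does not spell out the exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr

topics = ["diabetes", "diet", "exercise", "obesity"]
rng = np.random.default_rng(0)
doc_topics = rng.dirichlet(np.ones(4), size=1000)  # stand-in for LDA document-topic weights

for a in range(len(topics)):
    for b in range(a + 1, len(topics)):
        r, p = pearsonr(doc_topics[:, a], doc_topics[:, b])
        print(f"{topics[a]} vs {topics[b]}: r={r:.2f}, p={p:.3f}")
```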
null | false | 6 | Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides BIBREF0, BIBREF1, BIBREF2. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) BIBREF3, BIBREF4 and New York Times (NYT) datasets are in the magnitude of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only in the magnitude of tens or hundreds of documents (e.g., BIBREF5). We hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models, and we will indeed later show that this is the case for student reflections.
To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization methods across four student reflection corpora show the utility of all three methods.
Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. | What is the baseline of this paper? | A synthesis baseline. |
null | false | null | Federer played his first junior match in 1996 at the age of 14 at a grade 2 tournament in Switzerland. His main accomplishments as a junior player came at Wimbledon in 1998 when he won both the boys' singles final over Irakli Labadze, and in doubles teamed with Olivier Rochus defeating the team of Michaël Llodra and Andy Ram. In addition he reached the US Open Junior final in 1998, losing to David Nalbandian. Federer won four ITF junior singles tournaments in his career, including the prestigious Orange Bowl, where he defeated Guillermo Coria in the final. By the end of 1998 he attained the No. 1 junior world ranking and was awarded ITF junior World Champion. He ended his junior career at the end of 1998 with a high-ranking of No. 1 in singles and No. 7 in doubles (both attained on December 31, 1998) and a win–loss record of 78–20 in singles and 36–21 in doubles.
Junior Grand Slam results – Singles:
Australian Open: SF (1998)
French Open: 1R (1998)
Wimbledon: W (1998)
US Open: F (1998)
Junior Grand Slam results – Doubles:
Australian Open: SF (1998)
French Open: 1R (1998)
Wimbledon: W (1998)
US Open: 1R (1998)
1998–2002: Early professional career
Main article: Roger Federer's early career
Federer made his ATP debut at the 1998 Swiss Open Gstaad in his home country of Switzerland losing to Lucas Arnold Ker in the first round. Later that year, he won his first ATP match in Toulouse against Guillaume Raoux. He got a wildcard into the 1998 Swiss Indoors and lost in the first round to 4th seed and former world number 1 Andre Agassi. Federer finished his career as a 10-time champion of the tournament.
Federer entered the top 100 ranking for the first time on 20 September 1999 and started at the 1999 Marseille Open defeating the reigning champion of the 1998 French Open, Spaniard Carlos Moyá. His first final came at the Marseille Open in 2000, where he lost to fellow Swiss Marc Rosset. Federer won the 2001 Hopman Cup representing Switzerland, along with world No. 1 Martina Hingis. The duo defeated the American pair of Monica Seles and Jan-Michael Gambill in the finals. Federer later said that his experience with Hingis "definitely helped me to become the player I am today."
Federer's first singles win was at the 2001 Milan Indoor tournament, where he defeated Julien Boutter in the final. Although he won his first title already in 1999 on the Challenger tour, winning the doubles event in Segovia, Spain with Dutchman Sander Groen, the final was played on Federer's 18th birthday. In 2001, Federer made his first Grand Slam quarterfinal at the French Open, losing to former world No. 2 and eventual finalist Àlex Corretja. His run to the French quarterfinals launched him into the top 15 for the first time in his career.
His international breakthrough came at the 2001 Wimbledon Championships, when the 19-year-old Federer faced the four-time defending champion and all-time Grand Slam leader Pete Sampras. Federer beat the No. 1 seed in a five-set match to reach the quarterfinals. In the quarters he faced Englishman Tim Henman, eventually losing in a fourth-set tiebreaker.
The first final he reached at the Masters level came at the 2002 Miami Masters event, where he lost to former and future No. 1 Andre Agassi on hard court. Federer won his first Master Series event at the 2002 Hamburg Masters on clay, over Marat Safin; the victory put him in the top 10 for the first time. Federer made 10 singles finals between 1998 and 2002, of which he won four and lost six. He also made six finals in doubles. He finished 2001 with an ATP ranking of No. 13, and 2002 was the first year he was ranked within the top 10, finishing at No. 6.
2003: Grand Slam breakthrough at Wimbledon
Main article: 2003 Roger Federer tennis season
In 2003, Federer won his first Grand Slam singles title at Wimbledon, beating Andy Roddick in the semifinals and Mark Philippoussis in the final. In August he had a chance to take over the No. 1 ranking for the first time from Andre Agassi if he made it to the Montreal final. However, he fell in the semifinals to Roddick, in a final-set tiebreaker, leaving him 120 points behind Agassi. This, coupled with early losses to David Nalbandian at Cincinnati and the US Open, denied Federer the chance to become No. 1 for the duration of the season.
Federer won his first and only doubles Masters event in Miami with Max Mirnyi and made it to one singles Masters event in Rome on clay, which he lost. Federer made it to nine finals on the ATP Tour and won seven of them, including the 500 series events at Dubai and Vienna. Lastly, Federer won the year-end championships over Andre Agassi, finishing the year as world No. 2, narrowly behind Andy Roddick by only 160 points.
2004: Imposing dominance
Main article: 2004 Roger Federer tennis season
During 2004 Federer won three Grand Slam singles titles and became the first person to do so in one season since Mats Wilander in 1988. His first major hard-court title came at the Australian Open over Marat Safin, making him the world No. 1 for the first time. He then won his second Wimbledon crown over Andy Roddick. At the US Open, Federer defeated the 2001 champion, Lleyton Hewitt, for his first title there.
Federer won three ATP Masters events, one on clay in Hamburg, and the other two on hard surfaces at Indian Wells and in Canada. Federer took the ATP 500 series event at Dubai and wrapped up the year by winning the year-end championships for the second time. He also won his first tournament on home soil by capturing the Swiss Open in Gstaad. His 11 singles titles were the most of any player in two decades, and his record of 74–6 was the best since Ivan Lendl in 1986. He reached the year-end No. 1 ranking for the first time.
2005: Consolidating dominance
Main article: 2005 Roger Federer tennis season
In 2005, Federer failed to reach the finals of the first two Grand Slam tournaments, losing the Australian Open semifinal to eventual champion Safin after holding match points, and the French Open semifinal to eventual champion Rafael Nadal. However, Federer reestablished his dominance on grass, winning Wimbledon for a third time by defeating Andy Roddick. At the US Open, Federer defeated Andre Agassi in the latter's last major final.
Federer also took four Masters wins: Indian Wells, Miami and Cincinnati on hard court, and Hamburg on clay. The win in Miami was particularly noteworthy as it was the first final contested between Federer and Nadal. Federer recovered from two sets and a break down to take the final in five sets. Furthermore, Federer won two ATP 500 series events at Rotterdam and Dubai. Federer lost the year-end championships to David Nalbandian in five sets while playing through a foot injury that sidelined him for almost the rest of the season after September. He maintained his position as No. 1 for the entire season.
Federer won 11 singles titles, which ties his 2004 season. Federer's 81 match victories were the most since Pete Sampras in 1993, and his record of 81–4 (95.2%) remains the third-best winning percentage in the Open Era behind John McEnroe's 1984 and Jimmy Connors's 1974.
2006: Career-best season
Main article: 2006 Roger Federer tennis season
The 2006 season was statistically the best season of Federer's career. In November 2011, Stephen Tignor, chief editorial writer for Tennis.com, ranked Federer's 2006 season as statistically the second-best season of all time during the Open Era, behind Rod Laver's Grand Slam year of 1969.
Federer hits a forehand at the 2006 US Open, where he became the first man in history to achieve the Wimbledon-US Open double for three consecutive seasons.
Federer won 12 singles titles (the most of any player since Thomas Muster in 1995 and John McEnroe in 1984) and had a match record of 92–5 (the most wins since Ivan Lendl in 1982). Federer reached the finals in an astounding 16 of the 17 tournaments he entered during the season.
In 2006, Federer won three Grand Slam singles titles and reached the final of the other, with the only loss coming against Nadal in the French Open. This was Federer and Nadal's first meeting in a Grand Slam final. He was the first man to reach all four finals in a calendar year since Rod Laver in 1969. Federer defeated Nadal in the Wimbledon Championships final. In the Australian Open, Federer defeated Marcos Baghdatis, and at the US Open, Federer defeated 2003 champion Roddick. In addition, Federer reached six Masters finals, winning four on hard surfaces and losing two on clay to Nadal. Federer, however, consistently pushed Nadal to the limit on clay throughout the season taking him to fourth-set tiebreakers in Monte-Carlo and Paris, and a thrilling match in Rome that went to a deciding fifth-set tiebreaker.
Federer won one ATP 500 series event in Tokyo and captured the year-end championships for the third time in his career, again finishing the year as world No. 1. Federer only lost to two players during 2006, to Nadal four times in finals, and to 19-year-old Andy Murray in the second round of the 2006 Cincinnati Masters, in what was Federer's only defeat before the final of a tournament that year. Federer finished the season on a 29-match winning streak, as well as winning 48 of his last 49 matches after the French Open.
Near the end of the season he won his hometown tournament, the Swiss Indoors in Basel, Switzerland for the first time, having finished runner up in 2000 and 2001, and missing the tournament in 2004 and 2005 due to injuries.
2007: Holding off young rivals
Main article: 2007 Roger Federer tennis season
In 2007, Federer reached all four Grand Slam singles finals, winning three of them again. He won the Australian Open without dropping a set, beating Fernando González in the final. This made him the first man in the 21st century to accomplish the feat, as Björn Borg at the 1980 French Open was the last to win a Grand Slam tournament without the loss of a set. Federer had entered the year on a huge winning streak and after capturing his fourth Dubai crown Federer's winning streak stood at 41 matches, the longest of his career and only five shy of the record. Federer entered Indian Wells as the three-time defending champion, but his streak ended in controversy. He was defeated by an Argentine, Guillermo Cañas, who had failed a drug test for illegal doping.
Federer was called "Darth Federer" by fans and commentators at the 2007 US Open.
This surprising first-round loss marked the first time he was defeated since August 2006, a period spanning over 7 months.
During the clay season Federer's victory in the Hamburg Masters final was particularly impressive, as it snapped Nadal's 81-match winning streak on clay, an Open-era record. Federer turned the match around from a set down to sweep 12 of the final 14 games, including a final set bagel. At the French Open, some anticipated that Federer could become the first man in almost 40 years to hold all four majors simultaneously, having just resoundingly defeated young rival Nadal on clay entering the tournament. However, in a repeat of the previous year Federer played a tough four-set final against Nadal, but was undone by going 1/17 on break-point chances.
At Wimbledon Federer entered the tournament not only as the four-time defending champion, but also riding a 48-match winning streak on grass. Once again, he defeated Rafael Nadal for a second consecutive year in the final, this time in a thrilling five-set encounter that many analysts hailed as the greatest Wimbledon final since 1980. Victory at Wimbledon equaled him with Björn Borg for the record of five consecutive championships at the All England Club.
Federer reached the final in Montreal before playing a young and relatively unknown Serbian named Novak Djokovic. Djokovic proved his potential by beating the world No. 1 in a final-set tiebreaker upset. Federer rebounded in Cincinnati to capture his fifth title of the year. Federer entered the US Open as the three-time defending champion and faced Djokovic in the final. This time, Federer prevailed in a close straight-set match. Victory in New York moved him ahead of Laver and Borg for third on the all-time list of major championship victories. Throughout the tournament the American press nicknamed him Darth Federer for his all-black attire (which included tuxedo-striped shorts) and the tournament played "The Imperial March" from Star Wars when he was announced onto the court for each of his matches. He closed out the year with victories in Basel and the year-end championships in Shanghai.
He finished the season as the year-end No. 1 for the fourth year in a row, demonstrating his dominance, and during these four years he won 11 Grand Slam singles titles. After his phenomenal triple Grand Slam season yet again, Federer became the only player in history to win three majors in a year for three years (2004, 2006, 2007). It was the third consecutive season that Federer held the No. 1 ranking for all 52 weeks of the year.
2008: Illness, Olympic Gold, and fifth US Open
Main article: 2008 Roger Federer tennis season
Federer's success in 2008 was severely hampered by a lingering bout of mononucleosis, which he suffered during the first half of the year. At the end of the year he suffered a back injury.
In 2008, Federer captured one Grand Slam, a singles title at the US Open over Andy Murray. Federer was defeated by Nadal in two Grand Slam finals, the French Open and Wimbledon, which was regarded as the best match of tennis history by many, when he was going for six straight wins to break Björn Borg's record. He came back from two sets down to force a fifth set, where he fell just two points from the title. At the Australian Open Federer lost in the semifinals to eventual winner Djokovic, which ended his record of 10 consecutive finals. He lost twice in Masters finals on clay to Nadal, at Monte Carlo and Hamburg. Federer captured three titles playing in 250-level events at Estoril, Halle, and Basel.
At the Olympic Games Federer and Stan Wawrinka won the gold medal in doubles, after beating the Bryan brothers American team in the semifinals and the Swedish duo of Simon Aspelin and Thomas Johansson in the final. However, Federer could reach only the quarterfinals in the singles draw, bowing out to then No. 8 James Blake, ceding his No. 1 ranking to Nadal after being at the top for a record 237 consecutive weeks. He ended the year ranked No. 2.
Federer entered the 2009 season with 13 Grand Slams, only one behind Pete Sampras' all-time record. The season began with a loss to Nadal in the final of the Australian Open in a hotly contested five-set match. Federer struggled following the defeat in Melbourne and entered the clay season without a title.
Federer winning the 2009 French Open, and completing the career Grand Slam
Federer's season turned around in the final Masters event of the clay season when he defeated Nadal on clay for only the second time to capture the Madrid Masters. Federer entered the French Open with few predicting him to win the elusive Parisian title having lost to Nadal in the final weekend for the past four seasons. After Nadal's unexpected defeat to Robin Söderling, Federer became the overwhelming favorite. In his next match, he came from two sets and break point down in the third set to defeat Tommy Haas in five sets. He also fought back from a two-sets-to-one deficit against a young Juan Martín del Potro to win a five setter in the semifinals. In the final, he faced Söderling, and with straight sets victory, he finally captured the Coupe des Mousquetaires and career Grand Slam. This victory also tied him with Pete Sampras for the most Grand Slam singles titles.
Federer turned his sights to the grass courts of Wimbledon, where he breezed his way up to the final. In the championship match he faced long-time rival Andy Roddick in what was their eighth and final meeting at a Grand Slam. Roddick pushed Federer into a record-setting fifth set, which Federer claimed 16–14 to win his 15th Grand Slam singles title, breaking the all-time record of Pete Sampras.
Federer continued his summer run by winning his third title on the lightning-fast courts of the Cincinnati Masters, defeating Novak Djokovic in the final. At the US Open he defeated Söderling in the quarters and Djokovic, for the third consecutive year, in the semifinals. On the penultimate point of the Djokovic match he hit what many consider to be the greatest shot of his career, a tweener winner, to set up match points. Federer was defeated by del Potro in the final despite leading two sets to one and falling just two points from the title in the fourth set.
The 2009 season was perhaps the most historically relevant of Federer's career as he completed a career Grand Slam by winning his first French Open title and won a men's record fifteenth Grand Slam singles title at Wimbledon, surpassing Pete Sampras's mark of fourteen. The Wimbledon final was also historic for being the longest Grand Slam final in terms of games played with Federer prevailing 16–14 in the fifth set.
Federer finished the season as the year-end No. 1 for the fifth time in his career.
2010: Fourth Australian Open
Main article: 2010 Roger Federer tennis season
Federer won a record 16th major at the 2010 Australian Open.
Federer started the year with a win at the Australian Open, where he defeated Andy Murray in the final, extending the Grand Slam singles record to sixteen titles and matching Andre Agassi's record of four Australian Open titles. Since Wimbledon 2005 Federer had made 18 out of 19 finals in Grand Slam tournaments, a period of sustained excellence unparalleled in the Open Era. This tournament, however, marked the end of his dominance at the majors.
At the French Open, Federer won his 700th tour match and 150th tour match on clay. However, he failed to reach a Grand Slam semifinal for the first time since the 2004 French Open, losing to Söderling in the last 8 and relinquishing his No. 1 ranking, having been just one week away from equaling Pete Sampras's record of 286 weeks as world No. 1. In a huge upset at Wimbledon, Federer lost in the last 8 again to Tomáš Berdych and fell to No. 3 in the rankings for the first time in 6 years and 8 months.
Towards the middle of July, Federer hired Pete Sampras' old coach Paul Annacone on a trial basis to put his tennis game and career back on track. At the 2010 US Open Federer reached the semifinals, where he lost a five-set match to Novak Djokovic after holding two match points. Federer made it to four Masters finals prevailing against Mardy Fish at the Cincinnati Masters.
Federer finished the year in strong form, winning indoor titles at the Stockholm Open, Swiss Indoors, and the ATP Finals in London bringing his tally to 66 career titles. Federer won the year-end championships in London by beating rival Rafael Nadal for his fifth title at the event. He beat all contenders except Nadal in straight sets. It remains the only tournament in his career where Federer defeated all fellow members of the Big Four. In 2010 Federer finished in the top two for the eighth consecutive season.
2011: Tour Finals title record
Main article: 2011 Roger Federer tennis season
The year 2011 was a lean year for Federer, although great by most player's standards. He was defeated in straight sets in the semifinals of the 2011 Australian Open by eventual champion Novak Djokovic, marking the first time since July 2003 that he did not hold any of the four major titles. In the French Open semifinals, Federer ended Djokovic's undefeated streak of 43 consecutive wins with a four-set victory. Federer then lost in the final to Rafael Nadal. At Wimbledon Federer advanced to his 29th consecutive Grand Slam quarterfinal, losing to Jo-Wilfried Tsonga. It marked the first time in his career that he had lost a Grand Slam tournament match after winning the first two sets.
At the US Open Federer lost in the semifinals to Novak Djokovic in five sets. In a repeat of the previous year's semifinal, Federer again squandered two match points on his own serve before losing, despite having won the first two sets, for the second consecutive year. The loss meant that it was the first time since 2002 that Federer had not won any of the four Grand Slam titles.
In September 2011, in a South African poll, Federer was voted the second most trusted and respected person in the world next to Nelson Mandela.
Federer finished the season successfully in the indoor season, winning his last three tournaments of the year at the Swiss Indoors, Paris Masters, and ATP Finals, forming a 16 match winning streak. Federer finished the year ranked No. 3. | Which season was Roger Federer's most important in his career? | 2009 was arguably the most important of his career since he won his first and only French Open title, completing the career grand slam. He was also able to surpass Pete Sampras' record of fourteen men's grand slam titles.
Until then, Federer had won the three other grand slams - Wimbledon, Australian Open and US Open - but the French Open was elusive, as he had to repeatedly compete on the clay courts of Roland Garros against Rafael Nadal, who was a clay-court expert, and he lost to him on multiple occasions.
2004 was also an important year in Federer's career because he established dominance in the sport by winning three out of the four grand slams of the year and ending the year as the world number one for the first time in his long career. |
null | false | null | Twitter is an online social media and social networking service owned and operated by American company Twitter, Inc., on which users post or reply to texts, images and videos known as "tweets". Registered users can tweet, like, "retweet" tweets and direct message (DM), while unregistered users only have the ability to view public tweets. Users interact with Twitter through browser or mobile frontend software, or programmatically via its APIs.
Twitter was created by Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams in March 2006 and launched in July of that year. Twitter, Inc. is based in San Francisco, California and has more than 25 offices around the world. By 2012, more than 100 million users tweeted 340 million tweets a day, and the service handled an average of 1.6 billion search queries per day. In 2013, it was one of the ten most-visited websites and has been described as "the SMS of the Internet". By the start of 2019, Twitter had more than 330 million monthly active users. In practice, the vast majority of tweets are tweeted by a minority of users. In 2020, it was estimated that approximately 48 million accounts (15% of all accounts) were fake.
On October 27, 2022, business magnate Elon Musk acquired Twitter, Inc. for US$44 billion, gaining control of the platform. On December 20, 2022, Musk announced he would step down as CEO once a replacement had been found. As of April 2023, Musk has yet to announce a successor and remains CEO. | List the founders of Twitter from the above passage in a comma separated format. | Jack Dorsey, Noah Glass, Biz Stone, Evan Williams |
null | false | null | What's a good time of year to visit Seattle? | The best time of year to visit Seattle is in the summer. The weather is nicest from June to September, and there are many outdoor activities to enjoy. However, there is lots to do all year. For example, in spring many visitors come to the University of Washington campus to see cherry blossoms. The nearby tulip festival in the Skagit Valley is another popular spring attraction. |
|
null | false | null | Among the most respected and well-known awards for science fiction are the Hugo Award for literature, presented by the World Science Fiction Society at Worldcon, and voted on by fans; the Nebula Award for literature, presented by the Science Fiction and Fantasy Writers of America, and voted on by the community of authors; the John W. Campbell Memorial Award for Best Science Fiction Novel, presented by a jury of writers; and the Theodore Sturgeon Memorial Award for short fiction, presented by a jury. One notable award for science fiction films and TV programs is the Saturn Award, which is presented annually by The Academy of Science Fiction, Fantasy, and Horror Films. | Extract the list of science fiction awards from the paragraph below in the form of a bulleted list; each item should include the award name and whom it is presented by. | - Hugo Award for literature, presented by the World Science Fiction Society.
- Nebula Award for literature, presented by the Science Fiction and Fantasy Writers of America.
- John W. Campbell Memorial Award for Best Science Fiction Novel, presented by a jury of writers.
- Theodore Sturgeon Memorial Award for short fiction, presented by a jury.
- Saturn Award, presented by The Academy of Science Fiction, Fantasy, and Horror Films.
1811.01001 | true | null | In the present work, we address these limitations by providing a more nuanced evaluation of the learning capabilities of RNNs. In particular, we investigate the effects of three different aspects of a network's generalization: data distribution, length-window, and network capacity. We define an informative protocol for assessing the performance of RNNs: Instead of training a single network until it has learned its training set and then evaluating it on its test set, as BIBREF9 do in their study, we monitor and test the network's performance at each epoch during the entire course of training. This approach allows us to study the stability of the solutions reached by the network. Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities.
Previous studies have examined various length distribution models to generate appropriate training sets for each formal language: BIBREF16 , BIBREF11 , BIBREF12 , for instance, used length distributions that were skewed towards having more short sequences than long sequences given a training length-window, whereas BIBREF9 used a uniform distribution scheme to generate their training sets. The latter briefly comment that the distribution of lengths of sequences in the training set does influence the generalization ability and convergence speed of neural networks, and mention that training sets containing abundant numbers of both short and long sequences are learned by networks much more quickly than uniformly distributed regimes. Nevertheless, they do not systematically compare or explicitly report their findings. To study the effect of various length distributions on the learning capability and speed of LSTM models, we experimented with four discrete probability distributions supported on bounded intervals (Figure 2 ) to sample the lengths of sequences for the languages. We briefly recall the probability distribution functions for discrete uniform and Beta-Binomial distributions used in our data generation procedure.
Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities.
To study the effect of various length distributions on the learning capability and speed of LSTM models, we experimented with four discrete probability distributions supported on bounded intervals (Figure 2 ) to sample the lengths of sequences for the languages. | Are the unobserved samples from the same distribution as the training data? | No. |
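The length-sampling comparison in the record above can be reproduced in outline: draw sequence lengths from either a discrete uniform or a Beta-Binomial distribution over a training length-window, then materialize strings such as a^n b^n. The sketch below uses NumPy/SciPy; the window bounds and the Beta-Binomial shape parameters are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.stats import betabinom, randint

LOW, HIGH = 1, 50  # illustrative training length-window for n in a^n b^n

def sample_lengths(dist, size, rng):
    if dist == "uniform":
        return randint.rvs(LOW, HIGH + 1, size=size, random_state=rng)
    if dist == "beta-binomial":  # skewed toward shorter sequences when a < b
        return LOW + betabinom.rvs(HIGH - LOW, a=1, b=5, size=size, random_state=rng)
    raise ValueError(dist)

def make_anbn(n):
    return "a" * n + "b" * n

rng = np.random.default_rng(0)
train = [make_anbn(n) for n in sample_lengths("beta-binomial", 1000, rng)]
```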
null | false | null | Given the context below, please answer the provided question:
Context:
In fact, the first highlight moment of the Chat Assistant occurred twelve years ago at the launch event of the fifth-generation iPhone. Scott Forstall, then Apple's software chief, pressed a button on the iPhone to summon Siri and asked it questions. Inquiring about the weather, searching for restaurants, checking stock prices. Keep in mind that this was a time when smartphones were far from widespread, and being able to check stock prices on a mobile phone was already a very novel thing.
Question:
Who was the software chief during the first release of Apple Siri? | Scott Forstall |
|
null | false | 202 | Natural language generation (NLG) has been one of the key topics of research in natural language processing, as highlighted by the huge body of work on NLG surveyed in BIBREF0, BIBREF1. With the advances of several devices capable of understanding spoken language and conducting conversations with humans (e.g., Google Home, Amazon Echo) and the shrinking gap created by digital devices, it is not difficult to foresee that the market and application areas of NLG systems will continue to grow, especially in applications whose users are non-experts. In such applications, a user often asks for certain information and waits for the answer, and an NLG module would return the answer in spoken language instead of text, such as in question-answering systems or recommendation systems. The NLG system in these two applications uses templates to generate the answers in natural language for the users. A more advanced NLG system in this direction is described in BIBREF2, which works with ontologies annotated using the Attempto language and can generate a natural language description for workflows created by the systems built in the Phylotastic project. The applications targeted by these systems are significantly different from NLG systems, whose main purpose is to generate high-quality natural language description of objects or reports, such as those reported in the recent AAAI conference BIBREF3, BIBREF4, BIBREF5.
The present paper is motivated by the need to generate natural language descriptions of computational results for non-expert users, such as those developed in the Phylotastic project. In this project, the users are experts in evolutionary biology but are not experts in ontologies and web services. When a user places a request, he/she will receive a workflow consisting of web services, whose inputs and outputs are specified by instances of classes in the ontologies working with web services, as well as the ordering and relationships between the services. To assist the user in understanding the workflow, a natural language description of the workflow is generated. In order to accomplish the task, the NLG system in the Phylotastic project proposes to annotate elements of the ontologies using Attempto, a simple subset of English with precisely defined syntax and semantics.
In this paper, we propose a system that addresses the limitation of the system discussed in the Phylotastic project BIBREF2. Specifically, we assume that the annotations given in an ontology are natural language sentences. This is a reasonable assumption given that the developers of an ontology are usually those who have intimate knowledge about entities described in the ontology and often have some sort of comments about classes, objects, and instances of the ontology. We then show that the system is very flexible and can be used for the same purpose with new ontologies.
The rest of the paper is organized as follows. Section SECREF2 briefly reviews the basics of Grammatical Framework (GF) BIBREF6. Section SECREF3 describes the main modules of the system. Section SECREF4 includes two use cases of the system using available ontologies in the context of reasoning about ontologies. Specifically, it compares with the system used in the Phylotastic project and an ontology about people. This section also contains a use case that highlights the versatility of the proposed system by addressing a challenge to create an abstract Wikipedia BIBREF7. Related works are discussed in Section SECREF5. Section SECREF6 concludes the paper.
This is a reasonable assumption given that the developers of an ontology are usually those who have intimate knowledge about entities described in the ontology and often have some sort of comments about classes, objects, and instances of the ontology. | Why do they think their assumption is reasonable? | The developers of an ontology are usually those who have intimate knowledge about entities described in the ontology and often have some sort of comments about classes, objects, and instances of the ontology. |
null | false | 72 | Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most focus on content information and use models such as convolutional neural networks (CNN) BIBREF3 or recursive neural networks BIBREF4 . However, for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post.
In this paper, we classify posts taking into account post authorship, likes, topics, and comments. In particular, users and their “likes” hold strong potential for text mining. For example, given a set of posts that are related to a specific topic, a user's likes and dislikes provide clues for stance labeling. From a user point of view, users with positive attitudes toward the issue leave positive comments on the posts with praise or even just the post's content; from a post point of view, positive posts attract users who hold positive stances. We also investigate the influence of topics: different topics are associated with different stance labeling tendencies and word usage. For example we discuss women's rights and unwanted babies on the topic of abortion, but we criticize medicine usage or crime when on the topic of marijuana BIBREF5 . Even for posts on a specific topic like nuclear power, a variety of arguments are raised: green energy, radiation, air pollution, and so on. As for comments, we treat them as additional text information. The arguments in the comments and the commenters (the users who leave the comments) provide hints on the post's content and further facilitate stance classification.
In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. We attempt to learn user and topic representations which encode user interactions and topic influences to further enhance text classification, and we also incorporate comment information. We evaluate this model on a post stance classification task on forum-style social media platforms. The contributions of this paper are as follows: 1. We propose UTCNN, a neural network for text in modern social media channels as well as legacy social media, forums, and message boards — anywhere that reveals users, their tastes, as well as their replies to posts. 2. When classifying social media post stances, we leverage users, including authors and likers. User embeddings can be generated even for users who have never posted anything. 3. We incorporate a topic model to automatically assign topics to each post in a single topic dataset. 4. We show that overall, the proposed method achieves the highest performance in all instances, and that all of the information extracted, whether users, topics, or comments, still has its contributions.
In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. | What does the UTCNN utilize in deep learning? | User, topic, and comment information. |
1811.02906 | false | null | As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. This results in 3,599 training examples (519 offense, 3080 other) from the `One Million Post' corpus. We conduct pre-training of the neural model as a binary classification task (similar to Task 1 of GermEval 2018).
As introduced above, the `One Million Post' corpus provides annotation labels for more than 11,000 user comments. Although there is no directly comparable category capturing `offensive language' as defined in the shared task, there are two closely related categories. From the resource, we extract all those comments in which a majority of the annotators agree that they contain either `inappropriate' or `discriminating' content, or none of the aforementioned. We treat the first two cases as examples of `offense' and the latter case as examples of `other'. | What are the near-offensive language categories? | The answers are shown as follows:
* inappropriate
* discriminating
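A small sketch of the label mapping described in this row: comments that a majority of annotators flagged as `inappropriate' or `discriminating' become `offense', and everything else becomes `other'. The record layout and the majority helper are assumptions; the real `One Million Post' corpus schema differs.

```python
# Sketch only: map majority-vote annotations to a binary offense/other label.
# Field names follow the categories quoted above; the record layout is assumed.
def majority(flags):
    """True if more than half of the annotators set the flag."""
    return sum(flags) > len(flags) / 2

def to_binary_label(comment):
    offensive = majority(comment["inappropriate"]) or majority(comment["discriminating"])
    return "offense" if offensive else "other"

comments = [
    {"text": "first comment",  "inappropriate": [1, 1, 0], "discriminating": [0, 0, 0]},
    {"text": "second comment", "inappropriate": [0, 0, 0], "discriminating": [0, 0, 0]},
]
pretraining_set = [(c["text"], to_binary_label(c)) for c in comments]
print(pretraining_set)  # [('first comment', 'offense'), ('second comment', 'other')]
```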
|
null | false | null | Where is the best place to work in my house? | If you are working from home full time, you will want to make sure you pick the best area in your house to work: a quiet space where you can focus. Pick an area that is not busy, like your master bedroom or a spare bedroom that no one uses and that can serve as an office. Make sure this space has a door so you can keep outside noise from being a distraction. The space will need a desk you can work at, and you might want to consider a sit-stand desk so you can keep good blood flow throughout the day. Sitting all day restricts blood flow and inhibits your ability to produce your best work, so get something that prevents this.
|
null | false | null | Which things are typically hot and which are cold: an icicle, a snowball, lava, a cat, tea, a meat locker, summer on the east coast of the United States, a sauna and Bikram yoga. | The hot items in this list are lava, summer, a sauna and Bikram yoga. The cold items are an icicle, a snowball, and a meat locker. A cat is typically not categorized as either hot or cold, and tea can be either hot or cold.
|
null | false | 340 | Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma. More concretely, consider the English sentence: The bulls are running in Pamplona. A lemmatizer will seek to map each word to a form you may find in a dictionary—for instance, mapping running to run. This linguistic normalization is important in several downstream NLP applications, especially for highly inflected languages. Lemmatization has previously been shown to improve recall for information retrieval BIBREF0 , BIBREF1 , to aid machine translation BIBREF2 , BIBREF3 and is a core part of modern parsing systems BIBREF4 , BIBREF5 .
However, the task is quite nuanced as the proper choice of the lemma is context dependent. For instance, in the sentence A running of the bulls took place in Pamplona, the word running is its own lemma, since, here, running is a noun rather than an inflected verb. Several counter-examples exist to this trend, as discussed in depth in haspelmath2013understanding. Thus, a good lemmatizer must make use of some representation of each word's sentential context. The research question in this work is, then, how do we design a lemmatization model that best extracts the morpho-syntax from the sentential context?
Recent work BIBREF7 has presented a system that directly summarizes the sentential context using a recurrent neural network to decide how to lemmatize. As N18-1126's system currently achieves state-of-the-art results, it must implicitly learn a contextual representation that encodes the necessary morpho-syntax, as such knowledge is requisite for the task. We contend, however, that rather than expecting the network to implicitly learn some notion of morpho-syntax, it is better to explicitly train a joint model to morphologically disambiguate and lemmatize. Indeed, to this end, we introduce a joint model for the introduction of morphology into a neural lemmatizer. A key feature of our model is its simplicity: Our contribution is to show how to stitch existing models together into a joint model, explaining how to train and decode the model. However, despite the model's simplicity, it still achieves a significant improvement over the state of the art on our target task: lemmatization.
Experimentally, our contributions are threefold. First, we show that our joint model achieves state-of-the-art results, outperforming (on average) all competing approaches on a 20-language subset of the Universal Dependencies (UD) corpora BIBREF8 . Second, by providing the joint model with gold morphological tags, we demonstrate that we are far from achieving the upper bound on performance—improvements on morphological tagging could lead to substantially better lemmatization. Finally, we provide a detailed error analysis indicating when and why morphological analysis helps lemmatization. We offer two tangible recommendations: one is better off using a joint model (i) for languages with less training data available and (ii) for languages that have richer morphology.
Our system and pre-trained models on all languages in the latest version of the UD corpora are released at https://sigmorphon.github.io/sharedtasks/2019/task2/.
Second, by providing the joint model with gold morphological tags, we demonstrate that we are far from achieving the upper bound on performance—improvements on morphological tagging could lead to substantially better lemmatization. | Could the improvements on morphological tagging lead to substantially better lemmatization? | Yes, it could. |
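A toy sketch of one way to stitch a morphological tagger and a lemmatizer into a joint decoder, in the spirit of the joint model discussed in this row: score each candidate tag with a contextual tagger and each candidate lemma with a tag-conditioned transducer, then pick the pair with the highest joint score. Both component models below are hard-coded placeholders, not the paper's networks, and the factorization itself is an illustrative assumption.

```python
# Sketch only: joint decoding over (morphological tag, lemma) pairs.
# tag_scores and lemmatize are hard-coded stand-ins for trained models.
def tag_scores(sentence, i):
    """Stand-in contextual tagger: P(tag | word i in its sentence)."""
    return {"V;PRS;PTCP": 0.7, "N;SG": 0.3}

def lemmatize(word, tag):
    """Stand-in transducer: best lemma and its score, given the word and a tag."""
    table = {("running", "V;PRS;PTCP"): ("run", 0.9), ("running", "N;SG"): ("running", 0.8)}
    return table.get((word, tag), (word, 0.5))

def joint_decode(sentence, i):
    """Pick the (tag, lemma) pair maximizing P(tag | context) * P(lemma | word, tag)."""
    word = sentence[i]
    scored = []
    for tag, p_tag in tag_scores(sentence, i).items():
        lemma, p_lemma = lemmatize(word, tag)
        scored.append((p_tag * p_lemma, tag, lemma))
    _, tag, lemma = max(scored)
    return tag, lemma

print(joint_decode("The bulls are running in Pamplona".split(), 3))  # verb reading -> 'run'
```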
null | false | null | What is the best lure color when fishing for blue marlin on the Big Island? | As of 2022, the best lure color was a yellow and black skirt with a black head.
|
null | false | null | Classify the following Stephen King works as either a 'book', a 'movie' or 'both': Shawshank Redemption, Carrie, Under the Dome, Sleeping Beauties, The Shining, Sleepwalkers, Dreamcatcher, Bag of Bones, Creepshow, Insomnia, The Green Mile, Cat's Eye | Shawshank Redemption - both
Carrie - both
Under the Dome - book
Sleeping Beauties - book
The Shining - both
Sleepwalkers - movie
Dreamcatcher - both
Bag of Bones - book
Creepshow - movie
Insomnia - book
The Green Mile - both
Cat's Eye - movie |
|
null | false | 113 | In this section we discuss the state of the art on conversational systems in three perspectives: types of interactions, types of architecture, and types of context reasoning. Then we present a table that consolidates and compares all of them.
ELIZA BIBREF11 was one of the first software programs created for natural language processing. Joseph Weizenbaum created it at MIT in 1966, and it is well known for acting like a psychotherapist, since it only had to reflect patients' statements back to them. ELIZA was created to tackle five "fundamental technical problems": the identification of critical words, the discovery of a minimal context, the choice of appropriate transformations, the generation of appropriate responses to the transformation or in the absence of critical words, and the provision of an ending capacity for ELIZA scripts.
Right after ELIZA came PARRY, developed in the early 1970s by Kenneth Colby, a psychiatrist at Stanford University. The program was written in the MLISP (meta-lisp) language on the WAITS operating system running on a DEC PDP-10, and the code is non-portable. Parts of it were written in PDP-10 assembly code and others in MLISP. There may be other parts that require other language translators. PARRY was the first system to pass the Turing test - the psychiatrists were able to make the correct identification only 48 percent of the time, which is the same as random guessing.
A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) BIBREF12 appeared in 1995, but the current version utilizes AIML, an XML language designed for creating stimulus-response chat robots BIBREF13 . The A.L.I.C.E. bot has, at present, more than 40,000 categories of knowledge, whereas the original ELIZA had only about 200. The program is unable to pass the Turing test, as even the casual user will often expose its mechanistic aspects in short conversations.
Cleverbot (1997-2014) is a chatbot developed by the British AI scientist Rollo Carpenter. It passed the 2011 Turing Test at the Technique Techno-Management Festival held by the Indian Institute of Technology Guwahati. Volunteers participated in four-minute typed conversations with either Cleverbot or humans, with Cleverbot voted 59.3 per cent human, while the humans themselves were rated just 63.3 per cent human BIBREF14 .
ELIZA [12] was one of the first softwares created to understand natural language processing. Joseph Weizenbaum created it at the MIT in 1966 and it is well known for acting like a psychotherapist and it had only to reflect back onto patient’s statements. | What is the ELIZA in this paper? | ELIZA was one of the first softwares created to understand natural language processing. Joseph Weizenbaum created it at the MIT in 1966 and it is well known for acting like a psychotherapist and it had only to reflect back onto patient’s statements. |
null | false | null | Tell me what continent these countries are located on: China, United States, India, France, Brazil. | China and India are in Asia, the United States is in North America, France is in Europe, and Brazil is in South America.
|
1910.05456 | true | null | Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be beneficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language. | Are agglutinative languages used in the prediction of both prefixing and suffixing languages? | Yes. |
null | false | null | What type of poetry is Sappho best known for? | Sappho is an ancient Greek poet best known for her lyric poetry. |
|
null | false | 510 | We use an automated script that first creates a list of parameters in each layer, sorts them in descending order according to the gradient values, and checks if zeroing out the weights of the first 5% of parameters from this list leads to near zero accuracy for that class. If not, then we select double the number of parameters chosen earlier and repeat this process. If the accuracy is near zero, we repeat the process with half the number of parameters chosen earlier. Please note that this process is just for identifying the parameters relevant to the restricted classes, and their weights are restored after this process. | Furthermore, it is not clear how the parameters with the highest gradients are selected. Is a fixed threshold used? | There are no fixed thresholds for selecting the parameters with the highest gradients. Instead, we use a process similar to binary search for automatically selecting the parameters. We use an automated script that first creates a list of parameters in each layer, sorts them in descending order according to the gradient values, and checks if zeroing out the weights of the first 5% of parameters from this list leads to near zero accuracy for that class. If not, then we select double the number of parameters chosen earlier and repeat this process. If the accuracy is near zero, we repeat the process with half the number of parameters chosen earlier. Please note that this process is just for identifying the parameters relevant to the restricted classes, and their weights are restored after this process. We have now added these details in Sec. 11.1 of the appendix of the revised paper for better understanding.
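A sketch of the doubling/halving search described in this row. Only the search logic follows the text (rank parameters by gradient magnitude, zero out a top fraction starting at 5%, double it while the class accuracy stays high, halve it once the accuracy collapses, and restore the weights after every probe); the weight containers and the eval_class_acc callback are placeholders.

```python
# Sketch only: find a small set of high-gradient parameters whose removal
# drives the restricted class's accuracy to ~0, via a doubling/halving search.
import numpy as np

def select_class_relevant_params(weights, grads, eval_class_acc,
                                 frac=0.05, tol=1e-2, steps=10):
    flat = weights.ravel()                          # view into the layer's weights
    order = np.argsort(-np.abs(grads).ravel())      # highest |gradient| first
    chosen = None
    for _ in range(steps):
        k = max(1, min(flat.size, int(frac * flat.size)))
        backup = flat[order[:k]].copy()
        flat[order[:k]] = 0.0                       # probe: zero the candidate set
        near_zero = eval_class_acc() <= tol
        flat[order[:k]] = backup                    # restore the original weights
        if near_zero:
            chosen = order[:k]                      # works: try a smaller set next
            frac /= 2
        else:
            frac *= 2                               # too small: try a larger set next
    return chosen

# toy usage with a random "layer" and a dummy per-class evaluation callback
w, g = np.random.randn(1000), np.random.randn(1000)
print(select_class_relevant_params(w, g, eval_class_acc=lambda: 0.0)[:5])
```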
null | false | null | Is 99% a good grade on an essay? | Yes, 99% is an excellent grade to receive on an essay! |
|
null | false | null | The Great Lakes, also called the Great Lakes of North America, are a series of large interconnected freshwater lakes in the mid-east region of North America that connect to the Atlantic Ocean via the Saint Lawrence River. There are five lakes, which are Superior, Michigan, Huron, Erie, and Ontario and are in general on or near the Canada–United States border. Hydrologically, lakes Michigan and Huron are a single body joined at the Straits of Mackinac. The Great Lakes Waterway enables modern travel and shipping by water among the lakes.
The Great Lakes are the largest group of freshwater lakes on Earth by total area and are second-largest by total volume, containing 21% of the world's surface fresh water by volume. The total surface is 94,250 square miles (244,106 km2), and the total volume (measured at the low water datum) is 5,439 cubic miles (22,671 km3), slightly less than the volume of Lake Baikal (5,666 cu mi or 23,615 km3, 22–23% of the world's surface fresh water). Because of their sea-like characteristics, such as rolling waves, sustained winds, strong currents, great depths, and distant horizons, the five Great Lakes have long been called inland seas. Depending on how it is measured, by surface area, either Lake Superior or Lake Michigan-Huron is the second-largest lake in the world and the largest freshwater lake. Lake Michigan is the largest lake that is entirely within one country.
The Great Lakes began to form at the end of the Last Glacial Period around 14,000 years ago, as retreating ice sheets exposed the basins they had carved into the land, which then filled with meltwater. The lakes have been a major source for transportation, migration, trade, and fishing, serving as a habitat to many aquatic species in a region with much biodiversity. The surrounding region is called the Great Lakes region, which includes the Great Lakes Megalopolis. | What are the Great Lakes? | The Great Lakes are comprised of five lakes: Superior, Michigan, Huron, Erie, and Ontario. |
null | false | null | What is the legality of owning an alligator as a pet in the United States? | 26 states outright forbid owning an alligator as a pet. 12 states allow ownership with proper permitting. 11 states allow ownership with no need for a permit; of those 11 states, 5 do not have any permit requirements to own any exotic animal. While Wyoming prohibits certain game animals from being owned as pets, the state does not include nor exclude alligators in specific listings, so presumably, one can own a pet alligator unpermitted in Wyoming. |
|
null | false | null | Write a friendly letter to my landlord, convincing them to let me get a cat. His name is George and my name is Steve. I live alone in Unit 2C and I work long hours. Include details regarding my rental contract that currently only permits dogs under 25 pounds and caged animals and requires pet deposit and rent. Give him examples as to why those animals are not ideal (include my dog allergy) and why a cat is better. Let him know I plan a robotic vacuum and mop to keep things clean. Tell him I've already found a potential pet at the local animal shelter, who is a two-year-old shelter cat who is housebroken and friendly. Remind him that Unit 2C does not have any carpeted floors and a tiled bathroom which is good for the future litter box. Convince him I'm open to a conversation to discuss further. | Hello, George,
This is Steve in Unit 2C. I hope your day is going well. I have a request I would like to ask of you.
As you know, I live alone. I have been considering getting a pet for companionship, but after reviewing my rental contract, I noticed the only permitted animals are small dogs and animals that can be kept in a tank, like hamsters and fish.
I am hoping you would reconsider these restrictions and allow me to have a cat. I am more than willing to resign a rental contract to include the requisite pet deposit and pet rent.
I thought about a dog, since those animals are already in the contract, but I don't want a dog. Since I work long hours, it's not ideal to have a dog in my unit. My unit is small and could be claustrophobic for a dog who has a lot of energy. I also don't want to leave a dog alone because it may bark and disturb the other tenants in the building. Dogs also urinate and defecate in the shared yard around the property, which is gross, considering not everyone picks up after their dogs. Also, I am allergic to dogs, so this rules out dogs.
I also considered a hamster or fish, but I am reluctant to have either of those, as well. Firstly, they do not offer the same cuddly companionship as a cat. If a hamster got loose, it could cause far more damage to a unit than a cat could -- the hamster could defecate in many areas, chew electrical wires, and chew through drywall. Fish pose a problem if the tank cracks or breaks, which would cause lots of water damage to the flooring.
I recognize there are drawbacks to any pet, including cats, and I understand that some landlords are reluctant to permit cats, because there is an impression that cats may be more destructive, such as peeing in inappropriate places. However, I believe a cat is the least-destructive option of all. As my unit is not carpeted, I don't believe persistent urination stains will be an issue. Cats are easy to litter box train and I plan to put the litter box in the tiled bathroom where there is no opportunity for urine to get into the subfloor. Cats are less destructive than dogs, because they don't chew on things like furniture and doorways. Cats also are not nearly as loud as barking dogs.
I have already looked into pets at the local animal shelter and found a possible new roommate, who is a friendly two-year-old cat and is housebroken and box-trained. This is an ideal pet, since I don't have to risk accidents with box-training a kitten, and it's unlikely the cat will be destructive if it's already housebroken.
I believe a cat would be the ideal companion for me, given my work schedule and my preference for my ideal pet. I am happy to discuss my desire in person and further elaborate my plan on how I will continue to maintain a clean and tidy unit with a cat, such as purchasing an automatic robot vacuum and mop to ensure an extra clean space.
Please consider my request. I look forward to your response.
Sincerely,
Steve |
|
null | false | 109 | NLP research is gradually evolving from lexical to compositional semantics BIBREF10 through the adoption of novel meaning-preserving and context-aware paradigms such as convolutional networks BIBREF11 , recurrent belief networks BIBREF12 , statistical learning theory BIBREF13 , convolutional multiple kernel learning BIBREF14 , and commonsense reasoning BIBREF15 . But while other NLP tasks have been extensively investigated, sarcasm detection is a relatively new research topic which has gained increasing interest only recently, partly thanks to the rise of social media analytics and sentiment analysis. Sentiment analysis BIBREF16 , with the use of multimodal information as a new trend BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF14 , is a popular branch of NLP research that aims to understand the sentiment of documents automatically using a combination of various machine learning approaches BIBREF21 , BIBREF22 , BIBREF20 , BIBREF23 .
An early work in this field was done by BIBREF6 on a dataset of 6,600 manually annotated Amazon reviews using a kNN-classifier over punctuation-based and pattern-based features, i.e., ordered sequence of high frequency words. BIBREF1 used support vector machine (SVM) and logistic regression over a feature set of unigrams, dictionary-based lexical features and pragmatic features (e.g., emoticons) and compared the performance of the classifier with that of humans. BIBREF24 described a set of textual features for recognizing irony at a linguistic level, especially in short texts created via Twitter, and constructed a new model that was assessed along two dimensions: representativeness and relevance. BIBREF5 used the presence of a positive sentiment in close proximity of a negative situation phrase as a feature for sarcasm detection. BIBREF25 used the Balanced Window algorithm for classifying Dutch tweets as sarcastic vs. non-sarcastic; n-grams (uni, bi and tri) and intensifiers were used as features for classification.
BIBREF26 compared the performance of different classifiers on the Amazon review dataset using the imbalance between the sentiment expressed by the review and the user-given star rating. Features based on frequency (gap between rare and common words), written-spoken gap (in terms of difference between usage), synonyms (based on the difference in frequency of synonyms) and ambiguity (number of words with many synonyms) were used by BIBREF3 for sarcasm detection in tweets. BIBREF9 proposed the use of implicit incongruity and explicit incongruity based features along with lexical and pragmatic features, such as emoticons and punctuation marks. Their method is very similar to the method proposed by BIBREF5 , except that BIBREF9 used explicit incongruity features. Their method outperforms the approach by BIBREF5 on two datasets.
BIBREF8 compared the performance with different language-independent features and pre-processing techniques for classifying text as sarcastic and non-sarcastic. The comparison was done over three Twitter datasets in two different languages: two in English, with a balanced and an imbalanced distribution, and the third in Czech. The feature set included n-grams, word-shape patterns, pointedness and punctuation-based features.
In this work, we use features extracted from a deep CNN for sarcasm detection. Some of the key differences between the proposed approach and existing methods include the use of a relatively smaller feature set, automatic feature extraction, the use of deep networks, and the adoption of pre-trained NLP models.
In this work, we use features extracted from a deep CNN for sarcasm detection. | Is CNN or RNN used in the work? | CNN. |
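A generic sketch of the idea this row points at: using an intermediate layer of a deep text CNN as a fixed feature extractor and training a light classifier on top for sarcasm detection. This is not the paper's exact pipeline; the network shape, the (omitted) pre-training step, and the downstream SVM are assumptions.

```python
# Sketch only: use an intermediate layer of a text CNN as a fixed feature
# extractor and train a light classifier on top for sarcasm detection.
# The CNN here is untrained and the data are random, for shapes only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.svm import LinearSVC

VOCAB, MAX_LEN = 20000, 60
cnn = keras.Sequential([
    keras.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(VOCAB, 100),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(name="features"),   # representation reused as features
    layers.Dense(2, activation="softmax"),
])
extractor = keras.Model(cnn.inputs, cnn.get_layer("features").output)

tokens = np.random.randint(0, VOCAB, size=(32, MAX_LEN))  # placeholder token ids
labels = np.random.randint(0, 2, size=32)                 # placeholder sarcasm labels
features = extractor.predict(tokens, verbose=0)
classifier = LinearSVC().fit(features, labels)
print(classifier.score(features, labels))
```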
null | false | null | Bityla sericea is a species of moth in the family Noctuidae. This species is endemic to New Zealand. It is classified as "At Risk, Naturally Uncommon" by the New Zealand Department of Conservation. This species was first described by Arthur Gardiner Butler in 1877 using a specimen obtained from John Enys and collected in the South Island. George Hudson discussed and illustrated this species both in his 1898 book, New Zealand moths and butterflies (Macro-lepidoptera) and his 1928 publication The Butterflies and Moths of New Zealand. In 1905 Hudson, thinking he was describing a new species, also gave the species the name Orthosia pallida. George Francis Hampson, also mistakenly believing this was a separate species placed O. pallida in the genus Bityla. Hudson discussed B. pallida in his 1928 publication. John S. Dugdale synonymised B. pallida with B. sericea in 1988. The holotype specimen of B. sericea is held at the Natural History Museum, London. | Given the reference text about moths, what family does Bityla sericea belong to? | Noctuidae |
null | false | null | What kind of jelly is best for a peanut butter and jelly sandwich? | Strawberry is best. Grape is acceptable. Mint is bizarre. |
|
null | false | 27 | Task-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.
Tools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents.
Figure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response.
Out-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems.
This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents.
We evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval.
To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. | What are the characteristics of a new dataset with 23,700 queries? | The dataset's 23,700 queries are short and unstructured, written in the same style as queries made by real users of task-oriented systems.
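The fallback behaviour illustrated in this row's figure description can be approximated at inference time with a confidence threshold over the intent classifier's probabilities. The sketch below is a generic illustration, not a method taken from the paper; the threshold value and the handler names are assumptions.

```python
# Sketch only (not the paper's method): treat a query as out-of-scope when the
# top intent probability is below a threshold, and return a fallback response.
import numpy as np

FALLBACK = "Sorry, I can't help with that yet."

def respond(probs, intent_names, handlers, threshold=0.7):
    """probs: softmax scores over the in-scope intents for one query."""
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return FALLBACK                        # out-of-scope: do not act on a guess
    return handlers[intent_names[top]]()       # in-scope: route to the intent handler

handlers = {"balance": lambda: "Your balance is $50.", "transfer": lambda: "Done."}
print(respond(np.array([0.92, 0.08]), ["balance", "transfer"], handlers))  # in-scope
print(respond(np.array([0.55, 0.45]), ["balance", "transfer"], handlers))  # fallback
```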
null | false | null | The World War II Soviet submarine L-21 belonged to the L-class or Leninets class of minelayer submarines. She was part of the last series (Group 4) of her class, having some improvements including more torpedo tubes. The commander during the war was Sergey S. Mogilevskiy. | Given this short paragraph about the L-21 submarine, what was an example of an improvement made compared to earlier versions? | The L-21 had more torpedo tubes. |
null | false | null | Identify which instrument is string or woodwind: Folk harp, Rothphone | Rothphone is woodwind, Folk harp is string. |
|
null | false | null | What states make up New England? | New England is comprised of the states Rhode Island, Connecticut, Massachusetts, Vermont, New Hampshire, and Maine |
|
null | false | null | Who wrote the second episode of season four of Game of Thrones? | George R.R. Martin wrote "The Lion and the Rose." The TV series is also based off of his series, A Song of Ice and Fire. |
|
null | false | null | Which is a species of fish? Lion or Lionfish | Lionfish |
|
null | false | null | Who proposed the use of Tau as the circle constant, instead of Pi | Tau was proposed as a better circle constant than Pi by Michael Hartl in The Tau Manifesto, published on June 28, 2010. Hartl's idea for Tau was inspired by Bob Palais, who had earlier published an article questioning whether Pi was the best constant to use. |
|
null | false | null | The population of Puerto Rico according to the 2020 census was 3,285,874, an 11.8% decrease since the 2010 United States Census. The commonwealth's population peaked in 2000, when it was 3,808,610, before declining (for the first time in census history) to 3,725,789 in 2010. Emigration due to economic difficulties and natural disasters, coupled with a low birth rate, have caused the population decline to continue in recent years. | How much has the population of Puerto Rico been growing? | The population of Puerto Rico decreased 11.8% between 2010 and 2020, from 3,725,789 in 2010 to 3,285,874 in 2020, according to the United States Census. Emigration due to economic difficulties and natural disasters, coupled with a low birth rate, have caused the population decline to continue in recent years. |
null | false | null | Give me a list of fruit that are yellow color | Lemon, banana, mango, pineapple, yellow dragonfruit, yellow watermelon, golden kiwi |
|
null | false | null | In 2019, Virginia men's basketball won the NCAA Championship in "March Madness", the single-elimination national college basketball tournament considered by YouGov polled American viewers (as of the same year) to be the most exciting collegiate sporting event. In 2015, when Virginia first won its first Capital One Cup its teams won the 2014 College Cup, the 2015 College World Series, and the 2015 NCAA Tennis Championships. When it repeated the feat in 2019, the program won both March Madness and the 2019 Men's Lacrosse Championship. | According to this paragraph, tell me what is referred to as the most exciting collegiate sporting event. | According to the paragraph, "March Madness", the single-elimination national college basketball tournament, is considered by polled American viewers to be the most exciting collegiate sporting event; the Virginia men's basketball team won it in 2019.
null | false | null | Why are peanuts not kosher for Passover. | Peanuts are a type of legume. Legumes are banned as they are often mixed with wheat (and Jews typically only eat unleavened bread during Passover). |
|
null | false | null | What is insurance? | Insurance is a means of protection from financial loss in which, in exchange for a fee, a party agrees to compensate another party in the event of a certain loss, damage, or injury. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss.
An entity which provides insurance is known as an insurer, insurance company, insurance carrier, or underwriter. A person or entity who buys insurance is known as a policyholder, while a person or entity covered under the policy is called an insured. The insurance transaction involves the policyholder assuming a guaranteed, known, and relatively small loss in the form of a payment to the insurer (a premium) in exchange for the insurer's promise to compensate the insured in the event of a covered loss. The loss may or may not be financial, but it must be reducible to financial terms. Furthermore, it usually involves something in which the insured has an insurable interest established by ownership, possession, or pre-existing relationship. |
|
null | false | null | What is there to do in Cleveland, Ohio? | Cleveland is located in Northeast Ohio and is home to the Rock & Roll Hall of Fame, among many other great places. Cleveland has a broad culture and is known for its sports pride and great local food spots. If you can withstand the winter months, Cleveland is a great place to visit year round!
|
null | false | null | Where did Bob Ross learn to paint? | Bob Ross took a painting class while he was stationed in Alaska. |
|
null | false | null | Why is a pomegranate known as a hand grenade? | When pomegranates become overripe, they will literally burst on the tree, like a grenade. In Spanish, they are known as granadas for this reason. |
|
null | false | null | Identify which animal species is alive or extinct: Microraptor, Whale | Whale is alive, Microraptor is extinct. |
|
null | false | 22 | We now evaluate LiLi in terms of its predictive performance and strategy formulation abilities.
Data: We use two standard datasets (see Table 4): (1) Freebase FB15k, and (2) WordNet INLINEFORM0 . Using each dataset, we build a fairly large graph and use it as the original KB ( INLINEFORM1 ) for evaluation. We also augment INLINEFORM2 with inverse triples ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ) for each ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ) following existing KBC methods.
Parameter Settings. Unless specified, the empirically set parameters (see Table 1) of LiLi are: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . For training RL-model with INLINEFORM11 -greedy strategy, we use INLINEFORM12 , INLINEFORM13 , pre-training steps=50000. We used Keras deep learning library to implement and train the prediction model. We set batch-size as 128, max. training epoch as 150, dropout as 0.2, hidden units and embedding size as 300 and learning rate as 5e-3 which is reduced gradually on plateau with factor 0.5 and patience 5. Adam optimizer and early stopping were used in training. We also shuffle INLINEFORM14 in each epoch and adjust class weights inversely proportional to class frequencies in INLINEFORM15 .
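The training hyperparameters listed in this paragraph map onto a Keras training call roughly as sketched below. The prediction model itself and the data shapes are placeholders, since the excerpt does not specify the architecture; the batch size, epoch budget, dropout, learning-rate schedule, optimizer, early stopping, per-epoch shuffling, and class weighting follow the text.

```python
# Sketch only: the training configuration quoted above, wired into Keras.
# Batch size 128, up to 150 epochs, dropout 0.2, Adam at 5e-3 reduced on
# plateau (factor 0.5, patience 5), early stopping, shuffling, and class
# weights inversely proportional to class frequencies follow the text.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.utils.class_weight import compute_class_weight

def build_prediction_model(input_dim, hidden=300):
    # placeholder body; LiLi's actual prediction network is not shown here
    return keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(hidden, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),
    ])

X = np.random.rand(1000, 300).astype("float32")    # placeholder features
y = (np.random.rand(1000) > 0.7).astype("int32")   # placeholder binary labels

model = build_prediction_model(X.shape[1])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=5e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

classes = np.unique(y)
weights = compute_class_weight("balanced", classes=classes, y=y)
class_weight = dict(zip(classes.tolist(), weights))

callbacks = [
    keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5),
    keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),  # assumed patience
]
model.fit(X, y, batch_size=128, epochs=150, validation_split=0.1,
          shuffle=True, class_weight=class_weight, callbacks=callbacks)
```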
Labeled Dataset Generation and Simulated User Creation. We create a simulated user for each KB to evaluate LiLi. We create the labeled datasets, the simulated user’s knowledge base ( INLINEFORM0 ), and the base KB ( INLINEFORM1 ) from INLINEFORM2 . INLINEFORM3 is used as the initial KB graph ( INLINEFORM4 ) of LiLi.
We followed BIBREF16 for labeled dataset generation. For Freebase, we found 86 relations with INLINEFORM0 triples and randomly selected 50 from various domains. We randomly shuffle the list of 50 relations, select 25% of them as unknown relations and consider the rest (75%) as known relations. For each known relation INLINEFORM1 , we randomly shuffle the list of distinct triples for INLINEFORM2 , choose 1000 triples and split them into 60% training, 10% validation and 20% test. Rest 10% along with the leftover (not included in the list of 1000) triples are added to INLINEFORM3 . For each unknown relation INLINEFORM4 , we remove all triples of INLINEFORM5 from INLINEFORM6 and add them to INLINEFORM7 . In this process, we also randomly choose 20% triples as test instances for unknown INLINEFORM8 which are excluded from INLINEFORM9 . Note that, now INLINEFORM10 has at least 10% of chosen triples for each INLINEFORM11 (known and unknown) and so, user is always able to provide clues for both cases. For each labeled dataset, we randomly choose 10% of the entities present in dataset triples, remove triples involving those entities from INLINEFORM12 and add to INLINEFORM13 . At this point, INLINEFORM14 gets reduced to INLINEFORM15 and is used as INLINEFORM16 for LiLi. The dataset stats in Table 4 shows that the base KB (60% triples of INLINEFORM17 ) is highly sparse (compared to original KB) which makes the inference task much harder. WordNet dataset being small, we select all 18 relations for evaluation and create labeled dataset, INLINEFORM18 and INLINEFORM19 following Freebase. Although the user may provide clues 100% of the time, it often cannot respond to MLQs and CLQs (due to lack of required triples/facts). Thus, we further enrich INLINEFORM20 with external KB triples.
Given a relation INLINEFORM0 and an observed triple ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ) in training or testing, the pair ( INLINEFORM4 , INLINEFORM5 ) is regarded as a +ve instance for INLINEFORM6 . Following BIBREF18 , for each +ve instance ( INLINEFORM7 , INLINEFORM8 ), we generate two negative ones, one by randomly corrupting the source INLINEFORM9 , and the other by corrupting the target INLINEFORM10 . Note that, the test triples are not in INLINEFORM11 or INLINEFORM12 and none of the -ve instances overlap with the +ve ones.
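A small sketch of the corruption-based negative sampling described in this paragraph: two negatives per observed triple, one with the source entity replaced and one with the target replaced, skipping corruptions that collide with known positives. The toy entities and triples are placeholders.

```python
# Sketch only: two negatives per positive triple, one corrupting the source
# and one corrupting the target, avoiding accidental positives. Toy data.
import random

def corrupt(triple, entities, known, position):
    """Replace the source (position=0) or target (position=2) with a random entity."""
    h, r, t = triple
    while True:
        e = random.choice(entities)
        candidate = (e, r, t) if position == 0 else (h, r, e)
        if candidate not in known:
            return candidate

def make_training_pairs(positives, entities):
    known = set(positives)
    data = []
    for triple in positives:
        data.append((triple, 1))                               # +ve instance
        data.append((corrupt(triple, entities, known, 0), 0))  # -ve: corrupted source
        data.append((corrupt(triple, entities, known, 2), 0))  # -ve: corrupted target
    return data

entities = ["Obama", "USA", "Paris", "France"]
positives = [("Obama", "nationality", "USA"), ("Paris", "capitalOf", "France")]
print(make_training_pairs(positives, entities))
```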
Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.
Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.
Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.
F-th: Here, we use a fixed prediction threshold of 0.5 instead of the relation-specific threshold INLINEFORM0 .
BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@" blindly, with no guessing mechanism.
w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.
Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score.
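The Coverage measure described here reduces to a simple fraction of winning episodes; a minimal sketch, with the episode outcomes assumed to be given as booleans:

```python
# Sketch only: Coverage as the fraction of query episodes that end in a win.
def coverage(episode_wins):
    wins = list(episode_wins)
    return sum(wins) / len(wins) if wins else 0.0

print(coverage([True, True, False, True]))  # 0.75
```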
We used Keras deep learning library to implement and train the prediction model. We set batch-size as 128, max. training epoch as 150, dropout as 0.2, hidden units and embedding size as 300 and learning rate as 5e-3 which is reduced gradually on plateau with factor 0.5 and patience 5. | What is the batch-size when training the prediction model? | 128. |
1909.02560 | false | null | After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. Particularly, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and also revealing the corresponding robustness issues. Moreover, we present some generated adversarial examples in the appendix.
After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. | How dramatically do results drop for models on generated adversarial examples? | The answers are shown as follows:
* BERT on Quora drops from 94.6% to 24.1%
|