paper_id (string, len 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars)
---|---|---|---|---|---
1706.07206 | false | null | In our experiments, we use as input the 2210 tokenized sentences of the Stanford Sentiment Treebank test set BIBREF2, preprocessing them by lowercasing as was done in BIBREF8. On five-class sentiment prediction of full sentences (very negative, negative, neutral, positive, very positive) the model achieves 46.3% accuracy, and for binary classification (positive vs. negative, ignoring neutral sentences) the test accuracy is 82.9%.
In our experiments, we use as input the 2210 tokenized sentences of the Stanford Sentiment Treebank test set BIBREF2, preprocessing them by lowercasing as was done in BIBREF8. | Which datasets are used for evaluation? | The answers are shown as follows:
* Stanford Sentiment Treebank
|
null | false | null | Identify which instrument is string or percussion: Babendil, Bowed guitar | Bowed guitar is string, Babendil is percussion. |
|
null | false | null | What's the total number of US senators? | 100 |
|
null | false | null | Sufism in Bangladesh is more or less similar to that in the whole Indian subcontinent. India, it is claimed, is one of the five great centers of Sufism, the other four being Persia (including central Asia), Baghdad, Syria, and North Africa. Sufi saints flourished in Hindustan (India) preaching the mystic teachings of Sufism that easily reached the common people, especially the spiritual truth seekers in India. Sufism in Bangladesh is also called pirism, after the pirs or teachers in the Sufi tradition (also called Fakir). | What are the five biggest centers of Sufism? | The five main centers of Sufism are India, Persia, Baghdad, Syria, and North Africa. |
null | false | null | Who were the Sumerians? | The Sumerian culture (3000–2350 BC) is regarded as humanity's first urban civilization. It originated in southern Mesopotamia, between the Tigris and Euphrates rivers, and is related to Neolithic cultures, particularly the Uruk culture. The Sumerians invented the wheel and writing, organized their society around a division of labor, and were remarkable builders. Their Sumerian language is unrelated to any other known language. Sumerian was written in cuneiform, with ideograms, symbols, and abstract forms representing ideas rather than specific words or sounds. |
|
null | false | null | What does it take for a leader to notice and appreciate you in the organization? | A leader is usually busy, spending most of their time thinking about the organization's highest priorities. Helping the leader deliver on those priorities is a smart way to get them to notice and appreciate you in the organization. |
|
null | false | null | Identify which instrument is string or percussion: Kenong, Zeze | Zeze is string, Kenong is percussion. |
|
null | false | null | Identify which instrument is string or percussion: Kepyak, Koto | Koto is string, Kepyak is percussion. |
|
1911.01188 | false | null | Notably, automatic translations of TED talks contain more words than the corresponding reference translations, which means that machine-translated texts of this type also have more potential tokens to enter into a coreference relation, potentially indicating a shining-through effect. The same does not happen with the news test set.
We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions by about $30\%$ with respect to human references, showing an even more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and an analysis of the purpose of the additional mentions added by the NMT systems. It would also be interesting to evaluate the quality of the automatically computed coreference chains used for S3.
Notably, automatic translations of TED talks contain more words than the corresponding reference translations, which means that machine-translated texts of this type also have more potential tokens to enter into a coreference relation, potentially indicating a shining-through effect. The same does not happen with the news test set.
We see how NMT translations increase the number of mentions by about $30\%$ with respect to human references, showing an even more marked explicitation effect than human translations do. | What translationese effects are seen in the analysis? | The answers are shown as follows:
* potentially indicating a shining-through effect
* explicitation effect
|
null | false | 400 | Understanding and applying qualitative knowledge is a fundamental facet of intelligence. For example, we may read that exercise improves health, and thus decide to spend more time at the gym; or that larger cars cause more pollution, and thus decide to buy a smaller car to be environmentally sensitive. These skills require understanding the underlying qualitative relationships, and being able to apply them in specific contexts.
To promote research in this direction, we present the first open-domain dataset of qualitative relationship questions, called QuaRTz (“Qualitative Relationship Test set”). Unlike earlier work in qualitative reasoning, e.g., BIBREF0, the dataset is not restricted to a small, fixed set of relationships. Each question $Q_i$ (2-way multiple choice) is grounded in a particular situation, and is paired with a sentence $K_i$ expressing the general qualitative knowledge needed to answer it. $Q_i$ and $K_i$ are also annotated with the properties being compared (Figure FIGREF1). The property annotations serve as supervision for a potential semantic parsing based approach. The overall task is to answer the $Q_i$ given the corpus $K = \lbrace K_i\rbrace $.
We test several state-of-the-art (BERT-based) models and find that they are still substantially (20%) below human performance. Our contributions are thus (1) the dataset, containing 3864 richly annotated questions plus a background corpus of 400 qualitative knowledge sentences; and (2) an analysis of the dataset, performance of BERT-based models, and a catalog of the challenges it poses, pointing the way towards solutions.
Our contributions are thus (1) the dataset, containing 3864 richly annotated questions plus a background corpus of 400 qualitative knowledge sentences; and (2) an analysis of the dataset, performance of BERT-based models, and a catalog of the challenges it poses, pointing the way towards solutions | How many richly annotated questions are contained in the dataset? | 3864 richly annotated questions. |
1808.02113 | false | null | Although attention in deep neural networks was initially introduced to help a model make predictions rather than to inform observers, it can also be used to inform. In the instances where a model thinks all historical samples should be considered equally important in a sequential analysis task, we must look elsewhere for a computationally inexpensive means to understand what happened at the stopping point. In this paper, we have introduced such a means by monitoring attention changes over the sequential analysis to inform observers. This method introduces negligible overhead, an important consideration in real-time systems, and is not tied to the implementation details or task of the model, other than the prerequisite of an attention layer.
In the instances where a model thinks all historical samples should be considered equally important in a sequential analysis task, we must look elsewhere for a computationally inexpensive means to understand what happened at the stopping point. | Can their method of creating more informative visuals be applied to tasks other than turn taking in conversations? | The answers are shown as follows:
* computationally inexpensive means to understand what happened at the stopping point
|
null | false | null | Why are in-person events important to business growth? | In-person events are incredibly important to business growth because they allow a company to connect with its customers and prospects directly. The business can create experiences and connections between its employees and customers, and between customers themselves, to help form a community of people who align based on self-identifiers like interests, technologies, industry and solutions. These connections build strong bonds between customer and brand by creating feelings of connectedness and inclusion through experiences at the in-person event. |
|
1704.02686 | false | null | As is common in the literature BIBREF4, BIBREF8, we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation.
As is common in the literature BIBREF4, BIBREF8, we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation. | What dimensions of word embeddings do they produce using factorization? | The answers are shown as follows:
* 300-dimensional vectors
|
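As a concrete illustration of the normalization step described in this row, here is a minimal NumPy sketch that rescales each row of a (vocab_size, 300) embedding matrix to unit L2 length; the matrix itself is a random stand-in.

```python
import numpy as np

E = np.random.default_rng(0).normal(size=(10000, 300))   # stand-in 300-d embeddings
E_unit = E / np.linalg.norm(E, axis=1, keepdims=True)    # each row now has unit L2 norm
print(np.allclose(np.linalg.norm(E_unit, axis=1), 1.0))  # True
```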
null | false | 110 | In NMT, the Transformer BIBREF0 is a sequence-to-sequence (seq2seq) model which maps an input sequence to an output sequence through hierarchical multi-head attention mechanisms, yielding a dynamic, context-dependent strategy for propagating information within and across sentences. It contrasts with previous seq2seq models, which usually rely either on costly gated recurrent operations BIBREF15, BIBREF16 or static convolutions BIBREF17.
Given $n$ query contexts and $m$ sequence items under consideration, attention mechanisms compute, for each query, a weighted representation of the items. The particular attention mechanism used in BIBREF0 is called scaled dot-product attention, and it is computed in the following way:
$$\operatorname{\mathsf {Att}}(\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \mathbf {\pi }\left(\frac{\mathbf {Q}\mathbf {K}^\top }{\sqrt{d}}\right)\mathbf {V},$$
where $\mathbf {Q} \in \mathbb {R}^{n \times d}$ contains representations of the queries, $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{m \times d}$ are the keys and values of the items attended over, and $d$ is the dimensionality of these representations. The $\mathbf {\pi }$ mapping normalizes row-wise using softmax, $\mathbf {\pi }(\mathbf {Z})_{ij} = \operatornamewithlimits{\mathsf {softmax}}(\mathbf {z}_i)_j$, where
$$\operatornamewithlimits{\mathsf {softmax}}(\mathbf {z})_j = \frac{\exp (z_j)}{\sum _{j^{\prime }} \exp (z_{j^{\prime }})}.$$
In words, the keys are used to compute a relevance score between each item and query. Then, normalized attention weights are computed using softmax, and these are used to weight the values of each item at each query context.
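As an illustration, here is a minimal NumPy sketch of scaled dot-product attention exactly as defined above; shapes follow the notation in the text ($n$ queries, $m$ items, dimensionality $d$), and the sample inputs are random stand-ins.

```python
import numpy as np

def softmax(z, axis=-1):
    # Row-wise softmax; subtracting the max improves numerical stability.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n, d) queries; K, V: (m, d) keys and values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # (n, m) relevance of each item to each query
    weights = softmax(scores, axis=-1)  # the pi mapping: normalized attention weights
    return weights @ V                  # (n, d) weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 4)
```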
However, for complex tasks, different parts of a sequence may be relevant in different ways, motivating multi-head attention in Transformers. This is simply the application of the attention equation above in parallel $H$ times, each with a different, learned linear transformation that allows specialization:
$$\operatorname{\mathsf {Head}}_h(\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \operatorname{\mathsf {Att}}(\mathbf {Q}\mathbf {W}_h^Q, \mathbf {K}\mathbf {W}_h^K, \mathbf {V}\mathbf {W}_h^V).$$
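Continuing the sketch above (reusing `scaled_dot_product_attention`, `rng`, and the sample `Q`, `K`, `V`), multi-head attention amounts to $H$ independent learned projections followed by the same attention operation; the random projection matrices below stand in for the learned parameters.

```python
def multi_head_attention(Q, K, V, heads):
    # heads: one (Wq, Wk, Wv) triple of learned projection matrices per head.
    outputs = [scaled_dot_product_attention(Q @ Wq, K @ Wk, V @ Wv)
               for (Wq, Wk, Wv) in heads]
    return np.concatenate(outputs, axis=-1)  # head outputs are concatenated downstream

H, d, dh = 2, 4, 4
heads = [tuple(rng.normal(size=(d, dh)) for _ in range(3)) for _ in range(H)]
print(multi_head_attention(Q, K, V, heads).shape)  # (2, 8)
```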
In the Transformer, there are three separate multi-head attention mechanisms for distinct purposes:
Encoder self-attention: builds rich, layered representations of each input word, by attending on the entire input sentence.
Context attention: selects a representative weighted average of the encodings of the input words, at each time step of the decoder.
Decoder self-attention: attends over the partial output sentence fragment produced so far.
Together, these mechanisms enable the contextualized flow of information between the input sentence and the sequential decoder.
In the Transformer, there are three separate multi-head attention mechanisms for distinct purposes: • Encoder self-attention: builds rich, layered representations of each input word, by attending on the entire input sentence. • Context attention: selects a representative weighted average of the encodings of the input words, at each time step of the decoder. • Decoder self-attention: attends over the partial output sentence fragment produced so far. | What are the three separate multi-head attention mechanisms in the Transformer? | Encoder self-attention, Context attention, and Decoder self-attention. |
null | false | null | What is the difference between a goose and a geese? | A goose and geese refer to the same type of bird, from the Anatidae biological family of water birds. Anatidae includes ducks, geese, seabirds, storks, and waterfowl. Goose refers to a single bird, while geese refers to multiple birds. |
|
null | false | null | Which of the following two-dimensional shapes are made using only curved lines? And which use only straight lines? oval, rectangle, square, pentagon, circle and rhombus. | The oval and circle are made using only curved lines.
The square, pentagon, rectangle and rhombus are made using only straight lines. |
|
null | false | null | K2, at 8,611 metres (28,251 ft) above sea level, is the second-highest mountain on Earth, after Mount Everest (at 8,849 metres (29,032 ft)). It lies in the Karakoram range, partially in the Gilgit-Baltistan region of Pakistan-administered Kashmir and partially in a China-administered territory Trans-Karakoram Tract included in the Taxkorgan Tajik Autonomous County of Xinjiang.
K2 also became popularly known as the Savage Mountain after George Bell—a climber on the 1953 American expedition—told reporters, "It's a savage mountain that tries to kill you." Of the five highest mountains in the world, K2 is the deadliest; approximately one person dies on the mountain for every four who reach the summit. Also occasionally known as Mount Godwin-Austen, other nicknames for K2 are The King of Mountains and The Mountaineers' Mountain, as well as The Mountain of Mountains after prominent Italian climber Reinhold Messner titled his book about K2 the same.
The summit was reached for the first time by the Italian climbers Lino Lacedelli and Achille Compagnoni, on the 1954 Italian expedition led by Ardito Desio. In January 2021, K2 became the final eight-thousander to be summited in the winter; the mountaineering feat was accomplished by a team of Nepalese climbers, led by Nirmal Purja and Mingma Gyalje Sherpa.
K2 is the only 8,000+ metres (26,000+ ft) peak that has never been climbed from its eastern face. Ascents have almost always been made in July and August, which are typically the warmest times of the year; K2's more northern location makes it more susceptible to inclement and colder weather. The peak has now been climbed by almost all of its ridges. Although the summit of Everest is at a higher altitude, K2 is a more difficult and dangerous climb, due in part to its more inclement weather. As of February 2021, only 377 people have completed the ascent to its summit. There have been 91 deaths during attempted climbs.
There are a number of routes on K2, of somewhat different character, but they all share some key difficulties, the first being the extremely high altitude and resulting lack of oxygen: there is only one-third as much oxygen available to a climber on the summit of K2 as there is at sea level. The second is the propensity of the mountain to experience extreme storms of several days duration, which have resulted in many of the deaths on the peak. The third is the steep, exposed, and committing nature of all routes on the mountain, which makes retreat more difficult, especially during a storm. Despite many attempts the first successful winter ascents occurred only in 2021. All major climbing routes lie on the Pakistani side.[citation needed] The base camp is also located on the Pakistani side. | Without quoting directly from the text, give me a rundown of the greatest risks from attempting to climb K2. | Although not as well-known as Mount Everest, K2 is the second highest mountain on earth and is arguably the most dangerous to climb. In addition to the lack of oxygen experienced at such high altitudes, all routes to summit K2 are steep with significant exposure. This exposure, combined with the mountain's propensity to experience extreme storms lasting for days, make for a deadly combination. The extreme terrain and unpredictable weather leave little to no margin for error. |
null | false | 99 | We performed classification experiments using a basic off-the-shelf classifier, namely Logistic Regression (LR) with L2 penalty; this also allows us to compare results with our baseline. We applied a standardization of the features and we used the default configuration for parameters as described in the scikit-learn package BIBREF33. We also tested other classifiers (such as K-Nearest Neighbors, Support Vector Machines and Random Forest) but we omit results as they give comparable performance. We remark that our goal is to show that a very simple machine learning framework, with no parameter tuning and optimization, allows for accurate results with our network-based approach.
We used the following evaluation metrics to assess the performances of different classifiers (TP=true positives, FP=false positives, FN=false negatives):
Precision = $\frac{TP}{TP+FP}$, the ability of a classifier not to label as positive a negative sample.
Recall = $\frac{TP}{TP+FN}$, the ability of a classifier to retrieve all positive samples.
F1-score = $2 \frac{\mbox{Precision} \cdot \mbox{Recall}}{\mbox{Precision} + \mbox{Recall}}$, the harmonic average of Precision and Recall.
Area Under the Receiver Operating Characteristic curve (AUROC); the Receiver Operating Characteristic (ROC) curve BIBREF34, which plots the TP rate versus the FP rate, shows the ability of a classifier to discriminate positive samples from negative ones as its threshold is varied; the AUROC value is in the range $[0, 1]$, with the random baseline classifier holding AUROC$=0.5$ and the ideal perfect classifier AUROC$=1$; thus larger AUROC values (and steeper ROCs) correspond to better classifiers.
In particular, we computed the so-called macro average (the simple unweighted mean) of these metrics, evaluated over both labels (disinformation and mainstream). We employed stratified shuffle split cross-validation (with 10 folds) to evaluate performance.
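A minimal scikit-learn sketch of this setup: feature standardization, LR with default parameters, and macro-averaged metrics under 10-fold stratified shuffle split. The synthetic `X`, `y` are stand-ins for the network features and domain labels.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # stand-in data

clf = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2"))  # defaults, as in the text
cv = StratifiedShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = cross_validate(clf, X, y, cv=cv,
                        scoring=["precision_macro", "recall_macro", "f1_macro", "roc_auc"])
for name, vals in scores.items():
    if name.startswith("test_"):
        print(name, round(vals.mean(), 3))
```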
Finally, we partitioned networks according to the total number of unique users involved in the sharing, i.e., the number of nodes in the aggregated single-layer network obtained by considering all layers together along with pure tweets. A breakdown of both datasets according to size class (and political bias for the US scenario) is provided in Table 1 and Table 2.
In Table 3 we first provide classification performances on the US dataset for the LR classifier evaluated on the size classes described in Table 1. We can observe that in all instances our methodology performs better than a random classifier (50% AUROC), with AUROC values above 85% in all cases.
As for political biases, since the classes of mainstream and disinformation networks are not balanced (e.g., 1,292 mainstream and 4,149 disinformation networks with right bias), we employed a Balanced Random Forest with default parameters (as provided in the imblearn Python package BIBREF35). In order to test the robustness of our methodology, we trained only on left-biased networks or right-biased networks and tested on the entire set of sources (relative to the US dataset); we provide a comparison of AUROC values for both biases in Figure 4. We can notice that our multi-layer approach still yields significant results, showing that it can accurately distinguish mainstream news from disinformation regardless of the political bias. We further corroborated this result with additional classification experiments, which show similar performance, in which we excluded from the training/test set two specific sources (one at a time and both at the same time) that outweigh the others in terms of data samples: "breitbart.com" for right-biased sources and "politicususa.com" for left-biased ones.
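For the unbalanced political-bias experiments, the corresponding imblearn call looks roughly as follows; the bias-based training/evaluation split is schematic, reusing the synthetic `X`, `y` from the previous sketch.

```python
from imblearn.ensemble import BalancedRandomForestClassifier
from sklearn.metrics import roc_auc_score

# X_left/y_left: left-biased training networks; X_all/y_all: the entire set of sources.
X_left, y_left, X_all, y_all = X[:150], y[:150], X, y

brf = BalancedRandomForestClassifier(random_state=0)  # default parameters, as in the text
brf.fit(X_left, y_left)
print(roc_auc_score(y_all, brf.predict_proba(X_all)[:, 1]))
```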
We performed classification experiments on the Italian dataset using the LR classifier and different size classes (we excluded $[1000, +\infty )$, which is empty); we show results for different evaluation metrics in Table 3. We can see that despite the limited number of samples (one order of magnitude smaller than the US dataset) the performances are overall in accordance with the US scenario. As shown in Table 4, we obtain results which are much better than our baseline in all size classes:
In the US dataset our multi-layer methodology performs much better in all size classes except for large networks ($[1000, +\infty )$ size class), reaching up to 13% improvement on smaller networks ($[0, 100)$ size class);
In the IT dataset our multi-layer methodology outperforms the baseline in all size classes, with the maximum performance gain (20%) on medium networks ($[100, 1000)$ size class); the baseline generally performs poorly compared to the US scenario.
Overall, our performances are comparable with those achieved by two state-of-the-art deep learning models for "fake news" detection BIBREF9, BIBREF36.
In order to understand the impact of each layer on classifier performance, we performed additional experiments considering each layer separately (we ignored T and U features relative to pure tweets). In Table 5 we show metrics for each layer and all size classes, computed with 10-fold stratified shuffle split cross-validation, evaluated on the US dataset; in Figure 5 we show AUROC values for each layer compared with the general multi-layer approach. We can notice that both the Q and M layers alone adequately capture the discrepancies between the two news domains in the United States, as they obtain good results with AUROC values in the range 75%-86%; these are comparable with those of the multi-layer approach which, nevertheless, outperforms them across all size classes.
We obtained similar performances for the Italian dataset, as the M layer obtains comparable performance w.r.t. the multi-layer approach, with AUROC values in the range 72%-82%. We do not show these results for the sake of conciseness.
We further investigated the importance of each feature by performing a $\chi ^2$ test, with 10-fold stratified shuffle split cross validation, considering the entire range of network sizes $[0, +\infty )$. We show the Top-5 most discriminative features for each country in Table 6.
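The feature-ranking test can be sketched with scikit-learn's chi-squared scorer. Note that `chi2` requires non-negative inputs; the network counts and sizes used here satisfy that, but the synthetic stand-in data is rescaled first.

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

X_nonneg = MinMaxScaler().fit_transform(X)  # chi2 needs non-negative features
top5 = SelectKBest(chi2, k=5).fit(X_nonneg, y).get_support(indices=True)
print("Top-5 most discriminative feature indices:", top5)
```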
We can notice the exact same set of features (with different relative orderings in the Top-3) in both countries; these correspond to two global network properties, LWCC (which indicates the size of the largest cascade in the layer) and SCC (which correlates with the size of the network), associated with the same set of layers (Quotes, Retweets and Mentions).
We further performed a $\chi ^2$ test to highlight the most discriminative features in the M layer of both countries, which performed equally well in the classification task as previously highlighted; also in this case we focused on the entire range of network sizes $[0, +\infty )$. Interestingly, we discovered exactly the same set of Top-3 features in both countries, namely LWCC, SCC and DWCC (which indicates the depth of the largest cascade in the layer).
An inspection of the distributions of all aforementioned features revealed that disinformation news exhibit on average larger values than mainstream news.
We can qualitatively sum up these results as follows:
Sharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared.
Interactions in disinformation sharing cascades tend to be broader and deeper than in mainstream news, as widely reported in the literature BIBREF8, BIBREF2, BIBREF7.
Users likely make different use of mentions when sharing news belonging to the two domains, consequently shaping different sharing patterns.
Similar to BIBREF9, we carried out additional experiments to answer the following question: how long do we need to observe a news item spreading on Twitter in order to accurately classify it as disinformation or mainstream?
With this goal, we built several versions of our original dataset of multi-layer networks by considering in turn the following lifetimes: 1 hour, 6 hours, 12 hours, 1 day, 2 days, 3 days and 7 days; for each case, we computed the global network properties of the corresponding network and evaluated the LR classifier with 10-fold cross validation, separately for each lifetime (and considering always the entire set of networks). We show corresponding AUROC values for both US and IT datasets in Figure 6.
We can see that in both countries news diffusion networks can be accurately classified after just a few hours of spreading, with AUROC values larger than 80% after only 6 hours of diffusion. These results are very promising and suggest that articles from the two news domains exhibit discrepancies in their sharing patterns that can be exploited in a timely fashion to rapidly distinguish misleading items from factual information.
We performed classification experiments on the Italian dataset using the LR classifier and different size classes (we excluded [1000, +∞) which is empty); | How to perform classification experiments on the Italian dataset? | The authors performed classification experiments on the Italian dataset using the LR classifier and different size classes. |
1810.06743 | false | null | FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
For the extrinsic task, the performance is reasonably similar whether using UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs
We apply this conversion to the 31 languages
FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.
Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance. | Which languages do they validate on? | The answers are shown as follows:
* We apply this conversion to the 31 languages
* Arabic, Hindi, Lithuanian, Persian, and Russian.
* Dutch
* Spanish
|
null | false | null | Identify which instrument is string or percussion: Thappu, Tati | Tati is string, Thappu is percussion. |
|
null | false | null | What are the largest hotel chains in the United States? | Marriott
Radisson
Best Western
Hilton
Hyatt |
|
null | false | null | Give me a bulleted list of major soccer leagues in Europe. | * English Premier League (EPL): This league is the home of some of the most famous football clubs, like Manchester United, Liverpool FC, Chelsea, Arsenal and many more.
* German Football League (Bundesliga): Some of the major teams in this league are Bayern Munich and Borussia Dortmund.
* Spanish Football League (La Liga): It hosts the world's two most famous football clubs, FC Barcelona and Real Madrid.
* French Football League (Ligue 1): In recent years, this league has also gained popularity thanks to the club Paris Saint-Germain, which has some world-class footballers.
* Italian League (Serie A): Another premier league that is home to some of the oldest clubs, like Juventus Turin, Napoli, AC Milan, Inter Milan, and many more. |
|
null | false | 6 | Student reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students' feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a student reflection document. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts BIBREF16: “Point of Interest (POI): Describe what you found most interesting in today's class” and “Muddiest Point (MP): Describe what was confusing or needed more detail.”
For each reflection document, at least one human (either a TA or a domain expert) created summaries. Table TABREF4 shows an example reference summary produced by one annotator for the CS course. Table TABREF5 summarizes the dataset in terms of number of lectures, number of prompts per lecture, average number of reflections per prompt, and number of abstractive reference summaries for each set of reflections.
The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. | What is the objective of this paper? | The objective of this work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. |
1910.05456 | false | null | To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology.
To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. | What are the tree target languages studied in the paper? | The answers are shown as follows:
* English, Spanish and Zulu
|
null | false | 277 | AliMe Bot is a retrieval-based online service for E-commerce that collects a large number of predefined question-answer pairs. Through data analysis, we find that many variants of a question exist, which means that a set of questions can correspond to the same answer. Based on this observation, we can naturally view these questions with the same answer as a bag. Obviously, the bag contains diverse expressions of a question, which may provide more matching evidence than a single question due to the rich information contained in the bag. Motivated by this fact, and different from existing query-question (Q-Q) matching methods, we propose a new query-bag matching approach for retrieval-based chatbots. Concretely, when a user raises a query, the query-bag matching model finds the most suitable bag and returns the bag's corresponding answer. To our knowledge, no query-bag matching study exists, and we focus on this new approach in this paper.
Recalling the text matching task BIBREF0, researchers have recently adopted deep neural networks to model the matching relationship. ESIM BIBREF1 judges the inference relationship between two sentences via enhanced LSTMs and an interaction space. SMN BIBREF2 performs context-response matching for open-domain dialog systems. BIBREF3 explores the usefulness of noisy pre-training in the paraphrase identification task. BIBREF4 surveys methods for query-document matching in web search, focusing on the topic model, the dependency model, etc. However, none of them addresses query-bag matching, which concentrates on matching a query against a bag containing multiple questions.
When a user poses a query to the bot, the bot searches for the most similar bag and uses the corresponding answer to reply to the user. The more of the query's information is covered by the bag, the more likely the bag's corresponding answer answers the query. Moreover, the bag should not contain too much information beyond the query. Thus, modelling bag-to-query and query-to-bag coverage is essential in this task.
In this paper, we propose a simple but effective mutual coverage component to model the above-mentioned problem. The coverage is based on the cross-attention matrix of the query-bag pair, which indicates the matching degree between elements of the query and the bag. The mutual coverage is performed by stacking the cross-attention matrix along the two directions, i.e., query and bag, at the word level. In addition to the mutual coverage, a word-level bag representation is introduced to help discover the main points of a bag. The bag representation then provides new matching evidence to the query-bag matching model.
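A minimal sketch of one plausible reading of the mutual coverage component: given the word-level cross-attention matrix $A \in \mathbb {R}^{|q| \times |b|}$ between a query of $|q|$ words and a bag of $|b|$ words, coverage of the query by the bag pools $A$ over the bag axis, and vice versa. The choice of max-pooling here is an assumption for illustration, not the paper's confirmed formulation.

```python
import numpy as np

def mutual_coverage(A):
    # A[i, j]: matching degree between query word i and bag word j.
    bag_to_query = A.max(axis=1)  # how well each query word is covered by the bag
    query_to_bag = A.max(axis=0)  # how well each bag word is covered by the query
    return bag_to_query, query_to_bag

A = np.random.default_rng(0).random((4, 9))  # 4 query words vs. 9 bag words (stand-in)
q_cov, b_cov = mutual_coverage(A)
print(q_cov.mean(), b_cov.mean())  # scalar coverage summaries fed to the matcher
```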
We conduct experiments on the AliMe and Quora datasets for query-bag matching based information-seeking conversation. Compared with baselines, we verify the effectiveness of our model: it obtains 0.05 and 0.03 $\text{R}_{10}@1$ gains over the strongest baseline on the two datasets. The ablation study shows the usefulness of the components. The contributions of this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in information-seeking conversation. 2) We propose the mutual coverage model to measure information coverage in query-bag matching. 3) We release the composite Quora dataset to facilitate research in this area.
Through data analysis, we find that many variants of a question exist, which means that a set of questions can correspond to the same answer. | What does "many variants of a question exist" mean? | A set of questions can correspond to the same answer. |
null | false | 34 | Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1, BIBREF2. A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\lambda (x).place\_of\_birth(Barack\_Obama, x)$
However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.
There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.
First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5, BIBREF6, which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.
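A sketch of the semantic relevance idea: rather than a softmax output layer over millions of KB entries, each candidate entity or predicate is scored by an interaction function between the question embedding and the candidate embeddings. Cosine similarity is used here purely for illustration; the paper's general interaction function is learned.

```python
import numpy as np

def relevance_scores(q_emb, cand_embs):
    # q_emb: (d,) question embedding; cand_embs: (num_candidates, d) KB entry embeddings.
    q = q_emb / np.linalg.norm(q_emb)
    C = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    return C @ q  # cosine similarity of every candidate to the question

rng = np.random.default_rng(1)
scores = relevance_scores(rng.normal(size=64), rng.normal(size=(1000, 64)))
print(int(scores.argmax()))  # index of the highest-scoring KB entry
```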
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and generalizes well to unseen entities at test time. In particular, without the use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state-of-the-art results of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0, BIBREF1.
In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data. | What novel model does this paper provide? | Character-level encoder-decoder framework. |
null | false | null | Why is pickleball so popular? | Pickleball combines elements of both tennis and ping pong. Tennis can take up to two years of learning the skills needed to play a proper match, while pickleball can be learned very quickly. The fast-play motor skills of ping pong carry over to pickleball, but pickleball involves more exercise. Combining pickleball's easy-to-learn nature with the fun exercise involved explains why it is becoming the fastest-growing sport in the United States. |
|
2001.08051 | false | null | The tables report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score from human experts, computed by summing up the scores of 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value of 0, 1, or 2, corresponding to bad, medium, and good, respectively (so with 6 indicators, totals range from 0 to 12).
The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table 4. Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on the same data sets, so this kind of evaluation will be possible.
FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.
Each written or spoken item received a total score from human experts, computed by summing up the scores of 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value of 0, 1, or 2, corresponding to bad, medium, and good, respectively.
The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table 4.
FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences. | How is the proficiency score calculated? | They used 6 indicators for proficiency (same for written and spoken) each marked by bad, medium or good by one expert. |
null | false | null | Which of these items are clothing brands and which are banks: Levi’s, Gap, Goldman Sachs, Banana Republic, JP Morgan Chase, Seven, and Guess. | The banks are JP Morgan Chase and Goldman Sachs, and the clothing brands are Guess, Gap, Banana Republic, Levi’s, and Seven. |
|
null | false | 104 | We use the Bayesian model of garg2012unsupervised as our base monolingual model. The semantic roles are predicate-specific. To model the role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows:
For example, the complete role sequence in a frame could be: $\langle$ S1, P1, P2, S2, S3, P3, P4 $\rangle$. The ordering is defined as the sequence of PRs, $\langle$ P1, P2, P3, P4 $\rangle$. Each pair of consecutive PRs in an ordering is called an interval. Thus, (P2, P3) is an interval that contains two SRs, S2 and S3. An interval could also be empty; for instance, (P1, P2) contains no SRs. When we evaluate, these roles get mapped to gold roles. For instance, the PR P1 could get mapped to a core role like A0, A1, etc., or to a modifier role like AM-TMP, AM-LOC, etc. garg2012unsupervised reported that, in practice, PRs mostly get mapped to core roles and SRs to modifier roles, which conforms to the linguistic motivations for this distinction.
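A tiny illustration of the ordering/interval bookkeeping described above, using the example role sequence (the labels themselves are illustrative):

```python
seq = ["S1", "P1", "P2", "S2", "S3", "P3", "P4"]  # complete role sequence of a frame

ordering = [r for r in seq if r.startswith("P")]  # the sequence of PRs
# Intervals: the SRs falling between each pair of consecutive PRs.
pr_pos = [i for i, r in enumerate(seq) if r.startswith("P")]
intervals = {(seq[a], seq[b]): seq[a + 1:b] for a, b in zip(pr_pos, pr_pos[1:])}

print(ordering)   # ['P1', 'P2', 'P3', 'P4']
print(intervals)  # {('P1','P2'): [], ('P2','P3'): ['S2','S3'], ('P3','P4'): []}
```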
Figure FIGREF16 illustrates two copies of the monolingual model, on either side of the crosslingual latent variables. The generative process is as follows:
All the multinomial and binomial distributions have symmetric Dirichlet and beta priors respectively. Figure FIGREF7 gives the probability equations for the monolingual model. This formulation models the global role ordering and repetition preferences using PRs, and limited context for SRs using intervals. Ordering and repetition information was found to be helpful in supervised SRL as well BIBREF9 , BIBREF8 , BIBREF10 . More details, including the motivations behind this model, are in BIBREF3 .
To model the role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows | What is the role inventory for each predicate divided into? | Primary and Secondary roles. |
null | false | 157 | The task of interpreting and following natural language (NL) navigation instructions involves interleaving different signals, at the very least the linguistic utterance and the representation of the world. For example, in turn right on the first intersection, the instruction needs to be interpreted, and a specific object in the world (the intersection) needs to be located in order to execute the instruction. In NL navigation studies, the representation of the world may be provided via visual sensors BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 or as a symbolic world representation. This work focuses on navigation based on a symbolic world representation (referred to as a map).
Previous datasets for NL navigation based on a symbolic world representation, HCRC BIBREF5, BIBREF6, BIBREF7 and SAIL BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 present relatively simple worlds, with a small fixed set of entities known to the navigator in advance. Such representations bypass the great complexity of real urban navigation, which consists of long paths and an abundance of previously unseen entities of different types.
In this work we introduce Realistic Urban Navigation (RUN), where we aim to interpret navigation instructions relative to a rich symbolic representation of the world, given by a real, dense urban map. To address RUN, we designed and collected a new dataset based on OpenStreetMap, in which we align NL instructions to their corresponding routes. Using Amazon Mechanical Turk, we collected 2515 instructions over 3 regions of Manhattan, all specified (and verified) by respective sets of human workers. This task raises several challenges. First of all, we assume a large world, providing long routes that are vulnerable to error propagation; secondly, we assume a rich environment, with entities of various types, most of which are unseen during training and are not known in advance; finally, we evaluate on the full intended route, rather than on the last position only.
We then propose a strong neural baseline for RUN where we augment a standard encoder-decoder architecture with an entity abstraction layer, attention over words and worlds, and a constantly updating world-state. Our experimental results and ablation study show that this architecture is indeed better-equipped to treat grounding in realistic urban settings than standard sequence-to-sequence architectures. Given this RUN benchmark, empirical results, and evaluation procedure, we hope to encourage further investigation into the topic of interpreting NL instructions in realistic and previously unseen urban domains.
[1]The task defined by BIBREF6 is of moving between entities only.
In this work we introduce Realistic Urban Navigation (RUN), where we aim to interpret navigation instructions relative to a rich symbolic representation of the world, given by a real dense urban map. | What's the aim of introducing RUN? | To interpret navigation instructions relative to a rich symbolic representation of the world, given by a real dense urban map. |
1910.03042 | false | null | From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets).
Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?"). | What is the sample size of people used to measure user satisfaction? | The answers are shown as follows:
* 34,432 user conversations
|
null | false | 90 | BioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to the semantic question answering task (7b, Phase B) of BioASQ 7 (except for the 'ideal answer' test, in which we did not participate this year). In this task, systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We have used a BioBERT BIBREF0 based system (see also Bidirectional Encoder Representations from Transformers (BERT) BIBREF1) and fine-tuned it for the biomedical question answering task. Our system scored near the top for factoid questions in all batches of the challenge. More specifically, in the third test batch set, our system achieved the highest 'MRR' score for the Factoid Question Answering task. Also, for the List-type question answering task, our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions and also highlight identified downsides of our current approach and ways to improve them in our future experiments. In the last test batch results we placed 4th for List-type questions and 3rd for Factoid-type questions.
The QA task is organized in two phases. Phase A deals with retrieval of the relevant document, snippets, concepts, and RDF triples, and phase B deals with exact and ideal answer generations (which is a paragraph size summary of snippets). Exact answer generation is required for factoid, list, and yes/no type question.
BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, and question types. The Phase B dataset consists of the questions, gold standard documents, snippets, unique ids and question types. Exact answers for factoid-type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank), which takes into account the ranks of returned answers. Answers for list-type questions are evaluated using precision, recall, and F-measure.
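The factoid metrics can be sketched directly from their definitions (strict accuracy = top answer, lenient accuracy = top 5 answers, MRR = mean reciprocal rank of the first correct answer):

```python
def factoid_metrics(ranked_answers, gold):
    # ranked_answers: per question, a ranked list of candidates; gold: the correct answers.
    strict = lenient = rr_sum = 0.0
    for preds, g in zip(ranked_answers, gold):
        strict += preds[0] == g
        lenient += g in preds[:5]
        rr_sum += next((1.0 / (i + 1) for i, p in enumerate(preds) if p == g), 0.0)
    n = len(gold)
    return strict / n, lenient / n, rr_sum / n  # strict acc., lenient acc., MRR

print(factoid_metrics([["a", "b"], ["x", "y", "z"]], ["a", "z"]))  # (0.5, 1.0, 0.666...)
```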
We have used BioBERT based system , see also Bidirectional Encoder Representations from Transformers(BERT), and we fine tuned it for the biomedical question answering task. | What based system does the author use? | BioBERT based system |
null | false | null | Seinfeld (/ˈsaɪnfɛld/ SYNE-feld) is an American television sitcom created by Larry David and Jerry Seinfeld. It aired on NBC from July 5, 1989, to May 14, 1998, over nine seasons and 180 episodes. It stars Seinfeld as a fictionalized version of himself and focuses on his personal life with three of his friends: best friend George Costanza (Jason Alexander), former girlfriend Elaine Benes (Julia Louis-Dreyfus) and his neighbor from across the hall, Cosmo Kramer (Michael Richards). It is set mostly in an apartment building in Manhattan's Upper West Side in New York City. It has been described as "a show about nothing", often focusing on the minutiae of daily life. Interspersed in earlier episodes are moments of stand-up comedy from the fictional Jerry Seinfeld, frequently using the episode's events for material.
As a rising comedian in the late 1980s, Jerry Seinfeld was presented with an opportunity to create a show with NBC. He asked Larry David, a fellow comedian and friend, to help create a premise for a sitcom. The series was produced by West-Shapiro Productions and Castle Rock Entertainment and distributed by Columbia Pictures Television.[nb 1] It was largely written by David and Seinfeld, with script writers who included Larry Charles, Peter Mehlman, Gregg Kavet, Carol Leifer, David Mandel, Jeff Schaffer, Steve Koren, Jennifer Crittenden, Tom Gammill, Max Pross, Dan O'Keefe, Charlie Rubin, Marjorie Gross, Alec Berg, Elaine Pope and Spike Feresten. A favorite among critics, the series led the Nielsen ratings in Seasons 6 and 9 and finished among the top two (with NBC's ER) every year from 1994 to 1998. Only two other shows – I Love Lucy and The Andy Griffith Show – have finished their runs at the top of the ratings.
Seinfeld is widely regarded as one of the greatest and most influential sitcoms of all time. It has been ranked among television's best shows in publications such as Entertainment Weekly, Rolling Stone and TV Guide. Its most renowned episodes include "The Chinese Restaurant", "The Soup Nazi", "The Parking Garage", "The Marine Biologist" and "The Contest". In 2013, the Writers Guild of America voted it the No. 2 Best-Written TV Series of All Time (second to The Sopranos). E! named it the "Number 1 reason the '90s ruled", and quotes from numerous episodes have become catchphrases in popular culture. | Summarize the following Wikipedia entry in three sentences. | Seinfeld was an American television sitcom created by Jerry Seinfeld and Larry David, which ran for 9 seasons on NBC from July 5, 1989 to May 14, 1998. Seinfeld is widely regarded as one of the greatest and most influential sitcoms of all time, and is, along with I Love Lucy and The Andy Griffith Show, one of only three shows to have finished their runs at the top of the ratings. It has often been described as "a show about nothing" that focuses on the minutiae of daily life. |
null | false | null | The 993 Turbo coupé was introduced in 1995. It featured a new twin-turbocharged engine displacing 3.6 liters and generating a maximum power output of 300 kW (408 PS; 402 hp). Air-to-air intercoolers, electronic engine management, redesigned cylinder heads, and other modified engine internals completed the new engine. The 993 Turbo was the first 911 Turbo with all-wheel drive, taken from the 959 flagship model. The Turbo's bodywork differs from the Carrera by widened rear wheel arches (about 6 cm), redesigned front and rear bumper moldings, and a fixed "whale tail" rear wing housing the intercoolers. New 18-inch (460 mm) alloy wheels with hollow spokes were standard. | What was the first Porsche 911 with AWD? | The 993 had a Turbo variant that started in 1995. It was the first 911 Turbo with AWD. This AWD system was taken from the 959 flagship model. |
null | false | 249 | While social media offer great communication opportunities, they also increase the vulnerability of young people to threatening situations online. Recent studies report that cyberbullying constitutes a growing problem among youngsters. Successful prevention depends on the adequate detection of potentially harmful messages and the information overload on the Web requires intelligent systems to identify potential risks automatically. The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. We describe the collection and fine-grained annotation of a training corpus for English and Dutch and perform a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection. We make use of linear support vector machines exploiting a rich feature set and investigate which information sources contribute the most for this particular task. Experiments on a holdout test set reveal promising results for the detection of cyberbullying-related posts. After optimisation of the hyperparameters, the classifier yields an F1-score of 64% and 61% for English and Dutch respectively, and considerably outperforms baseline systems based on keywords and word unigrams.
The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. | Which participant roles in online bullying are modelled? | Posts are modelled by the author's role: bully, victim, or bystander.
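Below is a minimal sketch of the kind of linear-SVM classification pipeline the abstract describes, assuming scikit-learn and plain word unigrams as a stand-in for the paper's much richer feature set; the toy posts and labels are invented for illustration.

```python
# Minimal sketch of a linear-SVM cyberbullying detector over word unigrams.
# The paper uses a far richer feature set; this only illustrates the setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy data: 1 = cyberbullying-related, 0 = not.
posts = ["you are worthless", "see you at practice tomorrow",
         "nobody likes you, just leave", "great game today!"]
labels = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 1), binary=True),
                    LinearSVC(C=1.0))
clf.fit(posts, labels)
print(clf.predict(["leave, nobody wants you here"]))  # expected: [1]
```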
null | false | null
Why is age 27 given so much importance, especially in the music industry? | Several renowned music artists died at age 27, a group sometimes referred to as the "27 Club". Often, the cause of death is attributed to drug and alcohol abuse or some form of violence such as homicide, self-harm or accidents. Jimi Hendrix, Janis Joplin, Jim Morrison, Kurt Cobain and Amy Winehouse are some of the notable personalities in the music industry who died at this age.
|
null | false | null | What is sleep training? | Sleep training typically refers to the method by which babies and children are taught to sleep on their own through the night without the need for feeding, comfort or support from parents. There are different methods to accomplish this, but a doctor should be consulted before deciding to sleep train a young baby. |
|
null | false | 127
There are a number of off-the-shelf neural models for sentiment classification BIBREF14, BIBREF15, most of which are based on long short-term memory networks (LSTM) BIBREF16 or convolutional neural networks (CNN) BIBREF14. In this paper, we pre-train three sentiment classifiers: BiLSTM, BiLSTM$+$A, and CNN. These classifiers are targeted by white-box attacking methods to generate adversarial examples (detailed in Section SECREF9). BiLSTM is composed of an embedding layer that maps individual words to pre-trained word embeddings; a number of bi-directional LSTMs that capture sequential contexts; and an output layer that maps the averaged LSTM hidden states to a binary output. BiLSTM$+$A is similar to BiLSTM except that it has an extra self-attention layer which learns to attend to salient words for sentiment classification, and we compute a weighted mean of the LSTM hidden states prior to the output layer. Manual inspection of the attention weights shows that polarity words such as awesome and disappointed are assigned higher weights. Finally, CNN has a number of convolutional filters of varying sizes, and their outputs are concatenated, pooled and fed to a fully-connected layer followed by a binary output layer.
Recent developments in transformer-based pre-trained models have produced state-of-the-art performance on a range of NLP tasks BIBREF17, BIBREF18. To validate the transferability of the attacking methods, we also fine-tune a BERT classifier for black-box tests. That is, we use the adversarial examples generated for attacking the three previous classifiers (BiLSTM, BiLSTM$+$A and CNN) as test data for BERT and measure its classification performance to understand whether these adversarial examples can fool BERT.
There are a number of off-the-shelf neural models for sentiment classification (Kim, 2014; Wang et al., 2016), most of which are based on long short-term memory networks (LSTM; Hochreiter and Schmidhuber (1997)) or convolutional neural networks (CNN; Kim (2014)). In this paper, we pre-train three sentiment classifiers: BiLSTM, BiLSTM+A, and CNN. | What sentiment classifiers are pre-trained in the text? | BiLSTM, BiLSTM+A, and CNN.
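To make the BiLSTM$+$A description above concrete, here is a minimal PyTorch sketch of such a classifier; the hidden sizes, vocabulary size and two-way output head are illustrative assumptions, and in practice the embedding layer would be initialized from pre-trained word embeddings such as GloVe.

```python
# Sketch of a BiLSTM classifier with a self-attention layer (PyTorch).
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # load GloVe in practice
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention score per position
        self.out = nn.Linear(2 * hidden, 2)    # binary sentiment output

    def forward(self, x):                       # x: (batch, seq_len) token ids
        h, _ = self.lstm(self.emb(x))           # (batch, seq_len, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)  # attention over positions
        ctx = (a * h).sum(dim=1)                # weighted mean of hidden states
        return self.out(ctx)

model = BiLSTMAttention(vocab_size=10000)
print(model(torch.randint(0, 10000, (4, 20))).shape)  # torch.Size([4, 2])
```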
null | false | 267
We first elaborate on the features derived to describe each user's tendency towards each class (Neutral, Racism or Sexism), as captured in their tweeting history. In total, we define three features $t_N$, $t_R$ and $t_S$, representing a user's tendency towards posting Neutral, Racist and Sexist content, respectively. We let $T_u$ denote the set of tweets by user $u$, and use $T_u^N$, $T_u^R$ and $T_u^S$ to denote the subsets of those tweets that have been labeled as Neutral, Racist and Sexist respectively. Now, the features are calculated as $t_N = |T_u^N|/|T_u|$, $t_R = |T_u^R|/|T_u|$, and $t_S = |T_u^S|/|T_u|$.
Furthermore, we choose to model the input tweets in the form of vectors using word-based frequency vectorization. That is, the words in the corpus are indexed based on their frequency of appearance in the corpus, and the index value of each word in a tweet is used as one of the vector elements to describe that tweet. We note that this modelling choice provides us with a big advantage, because the model is independent of the language used for posting the message.
Furthermore, we choose to model the input tweets in the form of vectors using word-based frequency vectorization. That is, the words in the corpus are indexed based on their frequency of appearance in the corpus, and the index value of each word in a tweet is used as one of the vector elements to describe that tweet. We note that this modelling choice provides us with a big advantage, because the model is independent of the language used for posting the message. | In what kind of form do they model the input tweets? | In the form of vectors using word-based frequency vectorization. That is, the words in the corpus are indexed based on their frequency of appearance in the corpus, and the index value of each word in a tweet is used as one of the vector elements to describe that tweet.
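A small sketch of both ideas in this row, computing the per-user tendency ratios and a frequency-rank vectorization; the toy tweets and labels are invented for illustration.

```python
# Sketch: per-user class-tendency features and word-frequency vectorization.
from collections import Counter

data = [("u1", "some neutral tweet", "Neutral"),
        ("u1", "a hateful tweet", "Racism"),
        ("u1", "another neutral tweet", "Neutral")]

def tendencies(user):
    labels = [lab for (u, _, lab) in data if u == user]
    n = len(labels) or 1
    return {c: labels.count(c) / n for c in ("Neutral", "Racism", "Sexism")}

print(tendencies("u1"))  # approx {'Neutral': 0.67, 'Racism': 0.33, 'Sexism': 0.0}

# Index words by corpus frequency rank, then represent a tweet as the
# sequence of its words' rank indices (language-independent by design).
freq = Counter(w for (_, text, _) in data for w in text.split())
rank = {w: i + 1 for i, (w, _) in enumerate(freq.most_common())}
print([rank[w] for w in "another neutral tweet".split()])
```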
1710.00341 | false | null | The system starts with a claim to verify. First, we automatically convert the claim into a query, which we execute against a search engine in order to obtain a list of potentially relevant documents. Then, we take both the snippets and the most relevant sentences in the full text of these Web documents, and we compare them to the claim. The features we use are dense representations of the claim, of the snippets and of related sentences from the Web pages, which we automatically train for the task using Long Short-Term Memory networks (LSTMs). We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. We feed all these representations as features, together with pairwise similarities, into a Support Vector Machine (SVM) classifier using an RBF kernel to classify the claim as True or False.
We also use the final hidden layer of the neural network as a task-specific embedding of the claim, together with the Web evidence. | What data is used to build the task-specific embeddings? | The answers are shown as follows:
* embedding of the claim
* Web evidence
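A compact sketch of the final classification step described in this row, assuming scikit-learn: a handful of claim-evidence similarity features (here random stand-ins for the LSTM-derived representations and pairwise similarities) are fed to an RBF-kernel SVM.

```python
# Sketch: pairwise claim/evidence similarity features -> RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-claim features, e.g. [cos(claim, snippet),
# cos(claim, page sentence), similarity of task-specific embeddings].
X = rng.normal(size=(40, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy True/False labels

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict(X[:5]), y[:5])
```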
|
null | false | null | Why did the Allies invade Normandy during world war 2? | The Allies invaded Normandy to open a 2nd front against the Axis and to liberate Europe from the Germans. The invasion spot was chosen because it was weakly defended and gave great access to the interior of France. A diversion called Operation Fortitude diverted German resources to Calais, as they believed that to be the primary landing zone for invasion, enabling the Allies to score a decisive victory. |
|
null | false | 98
Vietnamese, like many languages in continental East Asia, is an isolating language and a branch of the Mon-Khmer language group. The most basic linguistic unit in Vietnamese is the morpheme, similar to a syllable or token in English, and to “hình vị” (phoneme) or “tiếng” (syllable) in Vietnamese. According to its structural rules, Vietnamese can have about 20,000 different syllables (tokens); however, only about 8,000 syllables are used in Vietnamese dictionaries. There are three methods to identify morphemes in Vietnamese text BIBREF10 .
Morpheme is the smallest meaningful unit of Vietnamese.
Morpheme is the basic unit of Vietnamese.
Morpheme is the smallest meaningful unit and is not used independently at the syntactic level.
In computational linguistics, the morpheme is the basic unit of language, as Leonard Bloomfield noted for English BIBREF11 . In our research on Vietnamese, we consider the morpheme to be the syllable, called “tiếng” in Vietnamese (following Nguyen's definition BIBREF12 ).
The next concept in linguistics is the word, which has a full grammatical and semantic function in sentences. For Vietnamese, a word is a single morpheme or a fixed group of morphemes with full meaning BIBREF12 . According to Nguyen, Vietnamese words can be classified into two types: (1) 1-syllable words with full meaning and (2) n-syllable words, where these groups of tokens are fixed. A Vietnamese syllable is not necessarily fully meaningful on its own, but its meaning can often be explained through its semantic and structural characteristics. For example, consider the token “kỳ” in “quốc kỳ”, where “quốc” means national and “kỳ” means flag; therefore, “quốc kỳ” means national flag.
Concerning the dictionary used for evaluating the corpus, extracting features for the models, and evaluating the systems, there are many Vietnamese dictionaries; we recommend the Vietnamese dictionary of Hoang Phe, the so-called Hoang Phe Dictionary. This dictionary has been built by a group of linguists at the Linguistic Institute, Vietnam. It was first published in 1988, and reprinted and extended in 2000, 2005 and 2010. The dictionary currently has 45,757 word items, including 15,901 Sino-Vietnamese word items (accounting for 34.75%) BIBREF13 .
The most basic linguistic unit in Vietnamese is the morpheme, similar to a syllable or token in English, and to “hình vị” (phoneme) or “tiếng” (syllable) in Vietnamese. | Is morpheme the most basic linguistic unit in Vietnamese? | Yes.
null | false | 0 | In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based method, transfer learning, multilingual NMT, and unsupervised NMT.
Pivot-based Method is a common strategy to obtain a source$\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former first translates a source language into the pivot language, which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although pivot-based methods can achieve reasonable performance, they are computationally expensive, require a number of parameters that grows quadratically with the number of source languages, and suffer from the error propagation problem BIBREF15.
Transfer Learning was first introduced for NMT by BIBREF6, who leverage a high-resource parent model to initialize a low-resource child model. On this basis, BIBREF7 and BIBREF8 use shared vocabularies for the source/target languages to improve transfer learning, while BIBREF16 relieve the vocabulary mismatch mainly by using cross-lingual word embeddings. Although these methods are successful in low-resource scenarios, they have limited effect in zero-shot translation.
Multilingual NMT (MNMT) enables training a single model that supports translation from multiple source languages into multiple target languages, even those unseen language pairs BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Aside from simpler deployment, MNMT benefits from transfer learning where low-resource language pairs are trained together with high-resource ones. However, BIBREF22 point out that MNMT for zero-shot translation easily fails, and is sensitive to the hyper-parameter setting. Also, MNMT usually performs worse than the pivot-based method in zero-shot translation setting BIBREF23.
Unsupervised NMT (UNMT) considers a harder setting, in which only large-scale monolingual corpora are available for training. Recently, many methods have been proposed to improve the performance of UNMT, including the use of denoising auto-encoders, statistical machine translation (SMT) and unsupervised pre-training BIBREF24, BIBREF25, BIBREF26, BIBREF11. While UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages still falls far short of expectations.
Our proposed method belongs to transfer learning, but it differs from traditional transfer methods, which train a parent model as the starting point. Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space, and thus enables a smooth transition for zero-shot translation.
Recently, many methods have been proposed to improve the performance of UNMT, including the use of denoising auto-encoders, statistical machine translation (SMT) and unsupervised pre-training (Artetxe et al. 2017; Lample et al. 2018; Ren et al. 2019; Lample and Conneau 2019). While UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages still falls far short of expectations. | Why have many methods been proposed to improve the performance of UNMT in recent years? | Because although UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages still falls far short of expectations.
null | false | null
Make a bullet list of three different types of meat I could get in a taco | - Pollo asado
- Carne asada
- Al pastor
|
null | false | null
What is a crossbody bag | A crossbody bag is a purse or sport bag worn across the body, with the strap over one shoulder and crossing the torso, so the bag hangs near the hip.
|
null | false | 73 | For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts “positive” will be 60% accurate and will make mistakes 40% of the time.
In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review.
RNN outperforms both baselines in all four datasets (see Table TABREF9 ). Also, for Spanish restaurant reviews, the lexicon-based baseline is below the majority baseline, which shows that solely translating data and using lexicons is not sufficient to achieve good results in multilingual sentiment analysis.
Among the wrong classifications for each test set, we calculated the percentage of false positives and false negatives. Table TABREF10 shows the distribution of false positives and false negatives for each class. In all four classes, the number of false negatives is greater than the number of false positives. This can be explained by the unbalanced training dataset, where the number of positive reviews is larger than the number of negative reviews (59,577 vs. 17,132).
To be able to see the difference between the baselines and RNN, we took each method's results as a group (4 values: one for each language) and compared the means. Post hoc comparisons using the Tukey HSD test indicated that the mean accuracies of the baselines (majority and lexicon-based) are significantly different from the RNN accuracies, as can be seen in Table TABREF12 (family-wise error rate=0.06). When RNN is compared with the lexicon-based baseline and the majority baseline, the null hypothesis can be rejected, meaning that each test is significant. In addition to these comparisons, we also calculated the effect sizes (using Cohen's d) between the baselines and our method. The results align with the Tukey HSD results: while our method versus the baselines shows very large effect sizes, the lexicon-based baseline versus the majority baseline has a negligible effect size.
Figure FIGREF11 shows the differences in minimum and maximum values of all three approaches. As the figure shows, RNN significantly outperforms both baselines for the sentiment classification task.
We use SentiWordNet to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review. | In addition to most baselines, what comparisons did the authors make? | They use SentiWordNet to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, they accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review. |
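A minimal sketch of the lexicon-based baseline described in this row, using NLTK's SentiWordNet interface; it assumes the wordnet and sentiwordnet corpora have been downloaded, and, as a simplification not stated in the text, scores each token by its first sense only.

```python
# Sketch of the SentiWordNet baseline: sum per-token positive/negative
# scores and compare the sums. Requires nltk.download("wordnet") and
# nltk.download("sentiwordnet").
from nltk.corpus import sentiwordnet as swn

def classify(review):
    pos = neg = 0.0
    for token in review.lower().split():
        senses = list(swn.senti_synsets(token))
        if senses:                     # simplification: first sense only
            pos += senses[0].pos_score()
            neg += senses[0].neg_score()
    return "positive" if pos > neg else "negative"

print(classify("the food was great and the service friendly"))
```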
1907.08937 | false | null
Just as introduced in sec:introduction, it is intractable to compute the similarity exactly, as it involves a computation over all possible entity pairs. Hence, we consider a Monte Carlo approximation, which estimates the similarity by averaging over $\mathcal{S}$,
where $\mathcal{S}$ is a list of entity pairs sampled from the conditional distribution over entity pairs. We use sequential sampling to obtain $\mathcal{S}$, which means we first sample the head entity $h$ given the relation $r$ from $P(h|r)$, and then sample the tail entity $t$ given $h$ and $r$ from $P(t|h,r)$.
Just as introduced in sec:introduction, it is intractable to compute the similarity exactly, as it involves a computation over all possible entity pairs. Hence, we consider a Monte Carlo approximation, which estimates the similarity by averaging over $\mathcal{S}$,
where $\mathcal{S}$ is a list of entity pairs sampled from the conditional distribution over entity pairs. We use sequential sampling to obtain $\mathcal{S}$, which means we first sample the head entity $h$ given the relation $r$ from $P(h|r)$, and then sample the tail entity $t$ given $h$ and $r$ from $P(t|h,r)$. | Which sampling method do they use to approximate similarity between the conditional probability distributions over entity pairs? | The answers are shown as follows:
* Monte Carlo
* sequential sampling
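A small numeric sketch of the sequential sampling scheme described in this row, with toy categorical distributions standing in for $P(h|r)$ and $P(t|h,r)$ and an arbitrary score function averaged over the samples:

```python
# Sketch: Monte Carlo estimate over entity pairs via sequential sampling.
import numpy as np

rng = np.random.default_rng(0)
E = 5                                     # number of entities (toy)
P_h = rng.dirichlet(np.ones(E))           # stands in for P(h | r)
P_t = rng.dirichlet(np.ones(E), size=E)   # row h stands in for P(t | h, r)

def mc_estimate(f, n_samples=10000):
    total = 0.0
    for _ in range(n_samples):
        h = rng.choice(E, p=P_h)          # sample head given relation
        t = rng.choice(E, p=P_t[h])       # sample tail given head, relation
        total += f(h, t)
    return total / n_samples

print(mc_estimate(lambda h, t: float(h == t)))   # estimates P(h == t)
```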
|
null | false | null
Prove Pythagoras' theorem. | The theorem states that in a right triangle the sum of the areas of the two squares on the legs (a and b) equals the area of the square on the hypotenuse (c). A classic geometric proof arranges four copies of the right triangle inside a square of side (a+b), leaving a tilted inner square of side c in the middle; computing the area of the big square in two ways, once directly and once as the four triangles plus the inner square, yields the result.
Symbolic statement: $a^{2}+b^{2}=c^{2}$
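The algebra behind that rearrangement proof can be written out in a few lines:

```latex
% A square of side (a+b) tiled by four right triangles (legs a, b)
% around a tilted inner square of side c:
\begin{align*}
(a+b)^2 &= 4\cdot\tfrac{1}{2}ab + c^2\\
a^2 + 2ab + b^2 &= 2ab + c^2\\
a^2 + b^2 &= c^2
\end{align*}
```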
|
null | false | null
Was Dracula a real person? | No. Dracula is a fictitious character. The character first appeared in a novel written by Bram Stoker and was inspired by a real-life person, Vlad III Dracula (the son of Vlad Dracul). Vlad lived in the 15th century; he was a Wallachian prince known for the cruelty with which he punished enemies and wrongdoers.
|
null | false | null
We are getting a new puppy today. I am writing this message to you, our beloved children, because we need to keep the cats away from the new puppy for at least the first 2 weeks. | The reason is that the cats will feel intimidated and may attack the new puppy if we don't separate them and let them get used to each other over a period of time.
|
null | false | null | Why are there so many rules about table manners? | Well one big reason is that eating among other people is a social activity. In any social activity, there are guidelines for behavior. Mostly these guidelines are intended to make the social activity pleasant or fun for all the people involved.
For table manners, the rules are noticeably different among different cultures. Still, there are some common themes across cultures for what is considered polite while eating. One common theme is to ensure that every person at the meal gets to eat. You can see this in the American table-manner rule that nobody should start eating until every person has been served their meal. In Mexican culture, any child who is near an outdoor party should be offered some of the party food. Another common theme about manners is that you should enjoy the food and the company, and not appear to simply be gobbling down as much food as possible. There are a lot of negative words for behavior that looks like eating too much or too fast: being piggish, wolfing your food, gobbling, scarfing.
One rule that helps enforce slow eating, in American table manners, is that you shouldn't butter a whole piece of bread before eating it. Instead, you should tear the bread into smaller pieces, and butter only the piece you will eat next. Breaking the bread in half is considered a minimum level of politeness. Also, it's considered polite to leave one bite of each dish on your plate. If you eat everything, it could signal that you were just hungry, and eating everything available! But if you leave one bite, it signals "I liked it, but had enough". Americans also slow down eating by using the fork and the knife in the right hand, one at a time. This means that whenever you need to cut your food, you have to put down your fork, pick it up in your left hand, and pick up your knife in your right hand to cut your food. Before taking the bite, you have to do the opposite: put down your knife, pick up your fork in your right hand, and take your bite. This slows down eating a lot. (This rule is in contrast to table manners for the English, who keep their fork in their left hand and their knife in their right hand.) Also, when eating soup, it's polite to fill the spoon by pushing it away from you. Wipe the bottom of the spoon on the far side of your bowl before bringing the spoonful of soup to your mouth. If you eat soup this way, you avoid scooping spoonfuls directly into your mouth -- which again, could look like you are just wolfing down food as fast as possible, instead of enjoying each mouthful.
There are also some table manner rules that provide signals to the servers or the waitstaff, without requiring verbal communications. For example, if you put down your fork pointing at 4:00 (with the handle pointing to the lower right of your plate), then it communicates "I'm still eating". But if you put your fork down, pointing at 10:00 or 11:00, then it communicates "I'm finished with this food". Then your server can take away your plate without interrupting the group conversation, which again is an example of making the social activity more pleasant for the participants.
Overall, the rules of table manners evolved to make the social experience of eating a meal together a pleasant experience for everyone. |
|
null | false | null
How do I keep my plants alive? | To keep your plants alive you will first need to understand what type of nutrition and environment is best suited for them. For environment, some plants prefer full sun, whereas some prefer partial or full shade. There's also humidity and temperature to consider. For nutrition, determine what types of food and how much water to provide, and at what frequency. You can usually find this information by consulting plant expert resources online or at your local library. Once you understand the ideal climate and nutrition for your plant, adjust the climate to best suit it, while following the ideal schedule for feeding and watering. Also, once the plant outgrows the pot it is in, it's time to move it to a bigger pot. This should keep your plant happy and healthy.
|
null | false | null | Identify which car manufacturer is German or American: Lotec, Aurica Motors | Lotec is German, Aurica Motors is American |
|
null | false | null
What are some good techniques for cooking a steak? | Most would agree that the most important element of cooking a steak for optimal flavor is being able to achieve very high heat on the cooking surface, meaning 500 degrees Fahrenheit or more.
Where opinion varies is when to put the steak onto that high heat surface.
The more traditional technique calls for searing the steak on both sides first, and then cooking it until you hit your target temperature.
But more and more, people seem to prefer what many call a "reverse sear" technique, in which you slow-roast the steak until it is nearly done, and then place the steak on the high-heat surface to achieve a very dark, even sear all over the steak's surface.
|
null | false | 137
Pre-trained models BIBREF0, BIBREF1 have received much attention recently thanks to their impressive results in many downstream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-shot cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low-resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.
Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To the best of our knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) the cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations BIBREF7 that supports 93 languages. Among the three models, LASER is based on a neural machine translation approach and strictly requires parallel data for training.
Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy-efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target-language-specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both the English and target models to obtain the bilingual LM. We apply our approach to autoencoding language models with the masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:
We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU.
We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer performance that is competitive with, or even better than, mBERT.
We illustrate that our bilingual LMs can serve as an excellent feature extractor in the supervised dependency parsing task.
Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU. | What is the fast adaptation method for? | For obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU.
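A minimal PyTorch sketch of the adaptation step this row describes: swap in fresh target-language embeddings, freeze the pre-trained encoder layers, and train only the embeddings with the masked-LM objective before the joint fine-tuning stage. The model attribute names and hyper-parameters are illustrative assumptions, not the paper's exact interface.

```python
# Sketch: learn target-language embeddings while freezing the English encoder.
import torch.nn as nn
from torch.optim import Adam

def adapt_to_target(pretrained_lm, target_vocab_size, emb_dim):
    # Fresh word embeddings for the target language's vocabulary.
    pretrained_lm.embeddings = nn.Embedding(target_vocab_size, emb_dim)
    for name, p in pretrained_lm.named_parameters():
        # Train only the new embeddings; keep encoder layers fixed.
        p.requires_grad = name.startswith("embeddings")
    trainable = [p for p in pretrained_lm.parameters() if p.requires_grad]
    return Adam(trainable, lr=1e-4)  # then optimize the masked-LM loss
```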
null | false | 197
The articles from the newspaper were converted into the proper format for training with the spaCy library. Different parameters were tested in order to get the optimal result. The dataset was shuffled, using the same seed for all the experiments, and was split into a train set (70%), a test set (20%) and a validation set (10%). The data was passed through the training algorithm in batches, with a compounding batch size that increases from 4 to 32 by a factor of 1.001. Additionally, a dropout rate was applied to every batch, initialized to 0.6 and decaying to 0.4 during training. Most of the experiments were trained for 30 epochs.
The main area of study for the experiments focuses on three important components. First, we investigate the difference in results between part-of-speech taggers that classify morphological features and taggers that detect only the part of speech. Moreover, we explore the significance of the pretrained vectors used by a model and their effect on obtaining better results. Most importantly, the use of subwords of tokens as tagger embeddings is investigated. For the experiments, precision, recall and F1 score are used as evaluation metrics.
For the experiments, precision, recall and F1 score are used as evaluation metrics. | What metrics are used as evaluation metrics? | Precision, recall and F1 score.
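The schedule described above maps directly onto spaCy's v2 training utilities; the sketch below assumes a blank pipeline with a tagger component and uses toy training data (the language code, labels and decay rate are illustrative assumptions).

```python
# Sketch: spaCy v2 tagger training with compounding batch size (4 -> 32,
# factor 1.001) and dropout decaying from 0.6 towards 0.4.
import random
import spacy
from spacy.util import minibatch, compounding, decaying

TRAIN_DATA = [("a sentence", {"tags": ["DET", "NOUN"]})]  # toy assumption

nlp = spacy.blank("en")                      # swap in the corpus language
tagger = nlp.create_pipe("tagger")
for tag in ("DET", "NOUN"):
    tagger.add_label(tag)
nlp.add_pipe(tagger)

optimizer = nlp.begin_training()
dropout = decaying(0.6, 0.4, 1e-4)           # 0.6 decaying towards 0.4

for epoch in range(30):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for batch in minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001)):
        texts, annotations = zip(*batch)
        nlp.update(texts, annotations, sgd=optimizer,
                   drop=next(dropout), losses=losses)
```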
null | false | 91
We approach understanding NMT by investigating word importance via a gradient-based method, which bridges the gap between word importance and translation performance. Empirical results show that the gradient-based method is superior to several black-box methods in estimating word importance. Further analyses show that important words belong to distinct syntactic categories on different language pairs, which might support the viewpoint that essential inductive bias should be introduced into the model design BIBREF28. Our study also suggests the possibility of detecting the notorious under-translation problem via the gradient-based method.
This paper is an initiating step towards the general understanding of NMT models, which may bring some potential improvements, such as
Interactive MT and Constraint Decoding BIBREF29, BIBREF26: The model pays more attention to the detected unimportant words, which are possibly under-translated;
Adaptive Input Embedding BIBREF30: We can extend the adaptive softmax BIBREF31 to an input embedding of variable capacity – more important words are assigned more capacity;
NMT Architecture Design: The language-specific inductive bias (e.g., different behaviors on POS) should be incorporated into the model design.
We can also explore other applications of word importance to improve NMT models, such as more tailored training methods. In general, model interpretability can build trust in model predictions, help error diagnosis and facilitate model refinement. We expect our work could shed light on the NMT model understanding and benefit the model improvement.
There are many possible ways to implement the general idea of exploiting gradients for model interpretation. The aim of this paper is not to explore this whole space but simply to show that some fairly straightforward implementations work well. Our approach can benefit from more advanced exploitation of the gradients or of other useful intermediate information, which we leave to future work.
Empirical results show that the gradient-based method is superior to several black-box methods in estimating word importance. | What do the results show? | Empirical results show that the gradient-based method is superior to several black-box methods in estimating word importance.
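One fairly straightforward implementation of the gradient-based idea is embedding-level saliency, sketched below in PyTorch; the model interface (an embed step followed by a forward pass from embeddings) is an illustrative assumption rather than the paper's exact formulation.

```python
# Sketch: word importance as the gradient norm of the loss w.r.t. each
# input word embedding.
import torch

def word_importance(model, src_ids, tgt, loss_fn):
    emb = model.embed(src_ids)        # (seq_len, emb_dim), requires grad
    emb.retain_grad()                 # keep gradients of a non-leaf tensor
    loss = loss_fn(model.forward_from_embeddings(emb), tgt)
    loss.backward()
    return emb.grad.norm(dim=-1)      # one importance score per source word
```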
null | false | null | What is the fastest time control chess? | Bullet chess |
|
null | false | null | Give me a list of some of the most popular song from 70s Japanese Pop singer, Mariya Takeuchi | 1. Plastic Love
2. Stay with Me
3. September
4. Miracle Love
5. Yume No Tsuzuki |
|
null | false | null | Why is merino wool a great material for clothing? | While traditional wool may be known as itchy and rough, merino wool is a soft fabric that offers many benefits over alternative materials such as cotton. Merino wool has moisture wicking abilities that work to keep you dry and sweat free. It may also have antibacterial properties that can help reduce body odor. These characteristics make merino wool a great choice for clothing, and an especially great choice for socks. |
|
null | false | null
Explain why Key West is a great location for a vacation. | Key West is a great place to go for various reasons. It provides a lot of the same allure that the Caribbean offers while still remaining within the US. Therefore, you do not have to worry about a passport, customs, or anything else that comes with traveling internationally. It is an island off the coast of mainland Florida and has some of the clearest blue water. While it is an island and has several beaches, it also has a great downtown. Duval Street specifically is filled with restaurants, bars, shops and other attractions for tourists. It is a ton of fun and has something for everyone. For people who just want to sit near the water and relax, it's great for that - and for someone who likes to be active and enjoys going out at night, it has that as well. With so many options, it's a great destination for couples as well as groups of people, depending on the occasion. Between everything mentioned and not having to worry about going through customs, forgetting your passport (or anything else that can happen when traveling internationally), Key West serves as a relatively low-maintenance excursion that offers the same benefits as going somewhere tropical.
|
null | false | null
How do I prepare a no-bake dessert recipe - Banana Pudding? | Take a casserole mold. Layer the mold's base with cookies of your choice. Slice up 1½ - 2 bananas and layer them above the cookies. Make the pudding mixture by following the instructions on the box. Stir the tub of Cool Whip well before spreading it over the pudding mixture. Pour half of the pudding mix over the first layer of bananas and cookies. Add more bananas and cookies, followed by more pudding mixture. Repeat the process to add more layers. Sprinkle cookie crumbs on top. Refrigerate until serving.
|
null | false | null
The first modern swim/bike/run event was held at Mission Bay, San Diego, California on September 25, 1974. The race was conceived and directed by two members of the San Diego Track Club, Jack Johnstone and Don Shanahan. Johnstone recalls that he was a part of the 70s jogging craze in America and that after entering a few races he was not regaining his "mediocre fitness" despite having been a member of the 1957 Collegiate and AAU All-American swim teams. Then in 1973, Johnstone learned of the Dave Pain Birthday Biathlon, a 7.2 km (4.5 mi) run followed by what was billed as a quarter-mile (0.4 km) swim (the actual distance was only between 200 and 300 yards). The following year, after competing in the event for the second time and placing in the top ten, Johnstone desired more of this style of race, with equal emphasis on the swim. So he petitioned the chairman of the San Diego Track Club who told him he would add a race to the club calendar. But the rest of the race was up to Johnstone to organise, and at the same time he was to contact Don Shanahan so there wouldn't be too many "weird" races on the club schedule. Shanahan told Johnstone that he wanted to include a biking leg to the race; whilst hesitant, Johnstone agreed to the addition. When naming the event the pair used the unofficially agreed naming system for multisport events, already used for pentathlon, heptathlon, and decathlon. So they used the Greek prefix tri (three) for the number of events, followed by the already familiar athlon, hence naming the event the Mission Bay Triathlon. It is worthy of note that neither founder had heard of the French events; both believed their race was a unique idea. | Given this paragraph about modern triathlon, what year did the two founders form the Mission Bay Triathlon? | Modern triathlon is believed to have originated in 1974, when Jack Johnstone and Don Shanahan, two members of the San Diego Track Club, organized the Mission Bay Triathlon. Multisport events existed before then: the sport can be traced back to France in 1901 with an event called "Les Trois Sports", which included bike, run and canoe disciplines, and other formats such as the Dave Pain Birthday Biathlon combined a 7.2 km run with a short swim. Nonetheless, on September 25, 1974, the Mission Bay Triathlon in San Diego, California was born.
null | false | null
Netflix views streaming video as a "sustaining" technology that extends its core mail DVD business. Do you agree with this, or do you think streaming video is a "disruptive" technology for Netflix that does not align with its core business and requires a different strategy? | Netflix undertook a strategic renewal by cannibalizing its own cash-cow business of mail-in-DVD services. This was the right move, as internet speeds were increasing and moving towards broadband. Netflix was even considering launching a TV box if internet speeds had remained constant. I agree that streaming video is a disruptive technology for Netflix, requiring a different strategy. By adopting a disruptive technology, Netflix forced Blockbuster to offer a mail-in-DVD service. By the time Blockbuster offered the mail-in-DVD service, Netflix had already begun experimenting with streaming video and had started offering its streaming platform. Netflix made Blockbuster dance to its tune. Netflix, currently valued at $150 billion, changed its business model twice by cannibalizing its core services, shifting from a mail-in-DVD service to a subscription service and then to a streaming video subscription service. This strategic renewal is one of the critical reasons Netflix became such a valuable company and one of the pioneers in cloud computing that helped define the architecture for AWS and set the freedom-and-responsibility culture for other Silicon Valley firms to emulate.
|
null | false | null
How do I know which bank is better? | You need to compare the benefits provided by each bank based on your needs. For example, if you are looking for easy access and no paperwork, then you might choose a digital bank where you can perform all your transactions from your mobile. If you are looking for more advanced banking products such as loans and mortgages, then you might have to go with a traditional bank, although some digital banks have started to provide these products as well.
|
null | false | null | Spider-Man is a 2002 American superhero film based on the Marvel Comics superhero of the same name. Directed by Sam Raimi from a screenplay by David Koepp, it is the first installment in Raimi's Spider-Man trilogy, and stars Tobey Maguire as the titular character, alongside Willem Dafoe, Kirsten Dunst, James Franco, Cliff Robertson, and Rosemary Harris. The film chronicles Spider-Man's origin story and early superhero career. After being bitten by a genetically-altered spider, outcast teenager Peter Parker develops spider-like superhuman abilities and adopts a masked superhero identity to fight crime and injustice in New York City, facing the sinister Green Goblin (Dafoe) in the process.
Development on a live-action Spider-Man film began in the 1980s. Filmmakers Tobe Hooper, James Cameron, and Joseph Zito were all attached to direct the film at one point. However, the project would languish in development hell due to licensing and financial issues. After progress on the film stalled for nearly 25 years, it was licensed for a worldwide release by Columbia Pictures in 1999 after it acquired options from Metro-Goldwyn-Mayer (MGM) on all previous scripts developed by Cannon Films, Carolco, and New Cannon. Exercising its option on just two elements from the multi-script acquisition (a different screenplay was written by James Cameron, Ted Newsom, John Brancato, Barney Cohen, and Joseph Goldman), Sony hired Koepp to create a working screenplay (credited as Cameron's), and Koepp received sole credit in final billing. Directors Roland Emmerich, Ang Lee, Chris Columbus, Barry Sonnenfeld, Tim Burton, Michael Bay, Jan de Bont, M. Night Shyamalan, Tony Scott, and David Fincher were considered to direct the project before Raimi was hired as director in 2000. The Koepp script was rewritten by Scott Rosenberg during pre-production and received a dialogue polish from Alvin Sargent during production. Filming took place in Los Angeles and New York City from January to June 2001. Sony Pictures Imageworks handled the film's visual effects. | Name some of the cast members of the movie Spiderman. | Tobey Maguire, Willem Dafoe, Kirsten Dunst, James Franco, Cliff Robertson, and Rosemary Harris. |
null | false | null | As of 2022, the Eastern Conference/Division led the Western Conference/Division 40–36 in championships won. As of 2022, the Boston Celtics and the Minneapolis/Los Angeles Lakers have won a combined total of 34 NBA championships (with 17 apiece). As of 2022, the defending champions are the Golden State Warriors. | Which NBA team has won the most championships? | As of 2022, the Boston Celtics and the Minneapolis/Los Angeles Lakers have won a combined total of 34 NBA championships (with 17 apiece). |
null | false | null
What is a buff? | A buff is an item of clothing that wraps around the neck for warmth.
|
null | false | null
Who ordered John the Baptist's execution? | King Herod
|
null | false | null | Which cities can I visit on the west coast of Australia? | Fremantle and Perth are located on the west coast of the country. |
|
null | false | null
A digital twin is a digital representation of an intended or actual real-world physical product, system, or process (a physical twin) that serves as the effectively indistinguishable digital counterpart of it for practical purposes, such as simulation, integration, testing, monitoring, and maintenance. The digital twin has been intended from its initial introduction to be the underlying premise for Product Lifecycle Management and exists throughout the entire lifecycle (create, build, operate/support, and dispose) of the physical entity it represents. Since information is granular, the digital twin representation is determined by the value-based use cases it is created to implement. The digital twin can and does often exist before there is a physical entity. The use of a digital twin in the create phase allows the intended entity's entire lifecycle to be modeled and simulated. A digital twin of an existing entity can, but need not necessarily, be used in real time and regularly synchronized with the corresponding physical system. Though the concept originated earlier, the first practical definition of a digital twin originated from NASA in an attempt to improve physical-model simulation of spacecraft in 2010. Digital twins are the result of continual improvement in the creation of product design and engineering activities. Product drawings and engineering specifications have progressed from handmade drafting to computer-aided drafting/computer-aided design to model-based systems engineering and strict link to signal from the physical counterpart. | Extract what model originated the first practical definition of a digital twin as well as the name of the organization and the year of creation separated by dashes | physical-model simulation of a spacecraft - NASA - 2010
null | false | null | Which country was the first to introduce old age pensions | Germany |
|
null | false | 0
Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\rightarrow $Spanish) is usually accomplished by pivoting through a rich-resource language (such as English), i.e., an Arabic (source) sentence is first translated to English (pivot), which is then translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors.
One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9, which leverages a high-resource pivot$\rightarrow $target model (parent) to initialize a low-resource source$\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success on some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenarios. Specifically, BIBREF8 report that without any child model training data, the performance of the parent model on the child test set is very poor.
In this work, we argue that the language space mismatch problem, also known as the domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. This is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, so the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of a sentence pair with the same semantics are not overlapping in the source space, with the result that the shared decoder will generate different translations, denoted by different points in the target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure a smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from the parent data and learn new features from the child data. However, the domain shift problem still exists, and the demand for parallel child data for fine-tuning heavily hinders transfer learning for NMT in the zero-resource setting.
In this paper, we explore transfer learning in a common zero-shot scenario where there are plenty of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target parallel data but no source$\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we first investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in the zero-shot translation scenario. In addition, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. Once the universal encoder is constructed, we only need to train the pivot$\rightarrow $target model and can then test this model in the source$\rightarrow $target direction directly. The main contributions of this paper are as follows:
We propose a new transfer learning approach for NMT which uses cross-lingual language model pre-training to enable high performance on zero-shot translation.
We propose a novel pre-training method called BRLM, which effectively reduces the distance between different source language spaces.
Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation directions remains at the same level or even improves when using our method.
We propose a new transfer learning approach for NMT which uses cross-lingual language model pre-training to enable high performance on zero-shot translation. | What new transfer learning approach do the authors propose for NMT? | A new transfer approach which uses cross-lingual language model pre-training to enable high performance on zero-shot translation.
null | false | null | What is a mutual fund? | A mutual fund is a type of investment that includes a well diversified pool of stocks across a broad range of industry verticals. Many investors invest in mutual funds and usually stay in them for the long term. It is a good idea to diversify a stock portfolio like a 401k with mutual funds that have annual percentage return on the average of about 10% - 12%. This is great way to get a good return on your money without having to actively manage your brokerage account. |
|
null | false | null
The Primacy of the Spains (Portuguese: Primaz das Espanhas; Spanish: Primado de las Españas, Catalan: Primat de les Espanyes) is the primacy of the Iberian Peninsula, historically known as Hispania or in the plural as the Spains. The Archbishop of Braga, in Portugal, has claimed this primacy over the whole Iberian Peninsula since the middle ages, however today his primacy is only recognized in Portugal. The Archbishop of Toledo in Spain has claimed the Primacy of Spain, as the primate above all other episcopal sees in Spain. In addition, the Archbishop of Tarragona in Catalonia also makes use of the title. The Archbishops in Braga, Toledo and Tarragona, if raised to the rank of cardinal, are known as Cardinal-Primates. | Please explain the Primacy of the Spains | The Primacy of the Spains (Portuguese: Primaz das Espanhas; Spanish: Primado de las Españas, Catalan: Primat de les Espanyes) is the primacy of the Iberian Peninsula, historically known as Hispania or in the plural as the Spains.
The Archbishop of Braga, in Portugal, has claimed this primacy over the whole Iberian Peninsula since the middle ages, however today his primacy is only recognized in Portugal. The Archbishop of Toledo in Spain has claimed the Primacy of Spain, as the primate above all other episcopal sees in Spain.
In addition, the Archbishop of Tarragona in Catalonia also makes use of the title. The Archbishops in Braga, Toledo and Tarragona, if raised to the rank of cardinal, are known as Cardinal-Primates.
null | false | 52
Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment"). If a tweet is annotated as evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps"), disturbed sleep (e.g., “another restless night"), or fatigue or loss of energy (e.g., “the fatigue is unbearable") BIBREF10 . For each class, every annotation (across the 9,473 tweets) is binarized as the positive class (e.g., depressed mood = 1) or the negative class (e.g., not depressed mood = 0).
Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. | What research methods does the author use? | Specifically, they conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. |
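The per-class binarization described in this row amounts to multi-label one-hot encoding; here is a quick sketch with scikit-learn, using invented symptom labels.

```python
# Sketch: turn each tweet's symptom annotations into independent 0/1 targets.
from sklearn.preprocessing import MultiLabelBinarizer

tweet_labels = [["depressed mood"],
                ["disturbed sleep", "fatigue or loss of energy"],
                []]                      # third tweet: no evidence of depression
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tweet_labels)      # one binary column per symptom class
print(mlb.classes_)
print(Y)
```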
null | false | null | Classify the following cricketers as batsman, bowlers, or all rounders.
David Warner, Trent Boult, Rohit Sharma, Virat Kohli, Rashid Khan, Quinton de Kock | David Warner, Rohit Sharma, Virat Kohli, and Quinton de Kock are batsmen. Trent Boult is a bowler. Rashid Khan is an all rounder. |
|
null | false | null
When did google first start? | Google was founded on September 4, 1998. The first investor in Google was a Sun co-founder, who wrote them their first check for $100,000. In the beginning, they were just a search engine for internet browsing. As the search engine gained popularity, they developed hundreds of other products used by billions of people and companies all over the world. Some technologies they produced after their initial launch that people use today are Gmail, Google Workspace, and Google Cloud Platform.
|
null | false | null | The 2021 Amstel Gold Race was a road cycling one-day race that took place on 18 April 2021 in the Netherlands. It was the 55th edition of the Amstel Gold Race and the 14th event of the 2021 UCI World Tour. Although Wout van Aert was declared to have defeated Tom Pidcock in a photofinish sprint with Maximilian Schachmann completing the podium, it was speculated that the photofinish camera had been focused on a point approximately 26.86 cm forward of the true finish line, making the true victor unclear. | Why was it unclear who won the race? | The photofinish camera was focused on a point approximately 26.86cm forward of the true finish line. |
null | false | 148
In the first experiment, we evaluated the DL and ML methods on SNLI, MultiNLI, Quora, and Clinical-QE. For the datasets that did not have development and test sets, we randomly selected two sets, each amounting to 10% of the data, for test and development, and used the remaining 80% for training. For MultiNLI, we used the dev1-matched set for validation and the dev2-mismatched set for testing.
Table TABREF28 presents the results of the first experiment. The DL model with GloVe word embeddings achieved better results on three datasets, with 82.80% Accuracy on SNLI, 78.52% Accuracy on MultiNLI, and 83.62% Accuracy on Quora. Logistic Regression achieved the best Accuracy of 98.60% on Clinical-RQE. We also performed a 10-fold cross-validation on the full Clinical-QE data of 8,588 question pairs, which gave 98.61% Accuracy.
In the second experiment, we used these datasets for training only and compared their performance on our test set of 850 consumer health questions. Table TABREF29 presents the results of this experiment. Logistic Regression trained on the clinical-RQE data outperformed DL models trained on all datasets, with 73.18% Accuracy.
To validate further the performance of the LR method, we evaluated it on question similarity detection. A typical approach to this task is to use an IR method to find similar question candidates, then a more sophisticated method to select and re-rank the similar questions. We followed a similar approach for this evaluation by combining the LR method with the IR baseline provided in the context of SemEval-cQA. The hybrid method combines the score provided by the Logistic Regression model and the reciprocal rank from the IR baseline using a weight-based combination:
$score = \alpha \cdot score_{LR} + (1 - \alpha ) \cdot RR_{IR}$
The weight $\alpha $ was set empirically through several tests on the cQA-2016 development set. Table TABREF30 presents the results on the cQA-2016 and cQA-2017 test datasets. The hybrid method (LR+IR) provided the best results on both datasets. On the 2016 test data, the LR+IR method outperformed the best system in all measures, with 80.57% Accuracy and 77.47% MAP (the official system ranking measure in SemEval-cQA). On the cQA-2017 test data, the LR+IR method obtained 44.66% MAP and outperformed the cQA-2017 best system in Accuracy with 67.27%.
To validate further the performance of the LR method, we evaluated it on question similarity detection. | What did the authors do to validate further the performance of the LR method? | The authors evaluated the LR method on question similarity detection. |
null | false | null | Cauliflower rice is not that fun to eat in its bland form. Is there anything interesting and tasty I can cook with cauliflower rice? | Cauliflower rice can be cooked within a few minutes. You can make the dish tastier with scrambled eggs, okra, and other vegetables like peas, carrots and onions. Mix these ingredients, including the raw eggs, in a bowl. Pour one tablespoon of oil into a skillet, let it heat up, and then add the mix from the bowl. Stir for 5 minutes on medium heat, then add salt and pepper. Once ready, you can eat it with a sauce of your liking or just with salt and pepper.
|
null | false | null | What are some board games that involve strategy? | Catan, Diplomacy and Dune are some examples of board games that require strategy to win. |
|
null | false | null | Who were the Defense Against the Dark Arts teachers in the Harry Potter series? | There were six Defense Against the Dark Arts teachers during Harry Potter's time as a student at Hogwarts. They are Quirinus Quirrell, Gilderoy Lockhart, Remus Lupin, Alastor 'Mad-Eye' Moody (who was impersonated for a good chunk of the year by Bartemius Crouch Jr), Dolores Umbridge, and Severus Snape. In the final year of the book series, when Harry Potter was no longer in the school, the position of Defense Against the Dark Arts teacher was held by Amycus Carrow, although he mostly taught the Dark Arts, not the defense against them.
|
null | false | null | Choose the odd one out: Roger Federer, Roger Moore, Roger Binny | Roger Moore, as he is an actor, whereas the other two are former sportsmen.
|
null | false | null | Who are the best rappers in history? | Eminem, Tupac, The Notorious B.I.G., Redman, Jay-Z, Andre 3000
|
null | false | null | I am planning a trip to Spain. I am interested in culture, museums and art, but I also want to go to the beach and have great food. What are 5 places that I should definitely visit? | Spain is a large country where you can find all of those things. This would be my top 5, but there are many other places that are worth visiting. Since you have not specified how much time you will spend in Spain, I will exclude the islands (the Balearic Islands and the Canary Islands) from the list:
- Madrid: the capital city of Spain has some of the top museums in Europe, like the Prado and the Reina Sofía. It is a city with a great atmosphere and has some of the best restaurants in the country.
- Barcelona: a Mediterranean global city with a lot to offer - from Antoni Gaudí's architectural masterpieces to beaches and awesome views of the Mediterranean sea.
- Galicia: a region located in northwestern Spain. Not very touristic, but with amazing places like Santiago de Compostela, and amazing restaurants and gastronomy.
- Sevilla: the largest city in Southern Spain, full of history, museums and amazing places to eat.
- Costa del Sol: one of the best places in Spain to visit the beach and get some sunlight. As its name states (Sun Coast), you will get sunny weather all year round. |
|
null | false | null | Natural hydrogen (known as white hydrogen) is naturally occurring molecular hydrogen on or in Earth (as opposed to hydrogen produced in the laboratory or in industry). The name white hydrogen distinguishes it from green hydrogen, which is produced from renewable energy sources, and from grey, brown or black hydrogen, which is obtained from fossil sources or from the electrolysis of water. Natural hydrogen may be renewable, non-polluting and allows for lower-cost operation compared to industrial hydrogen. Natural hydrogen has been identified in many source rocks in areas beyond the sedimentary basins where oil companies typically operate.
Origin of natural hydrogen
There are several sources of natural hydrogen:
- degassing of deep hydrogen from the Earth's crust and mantle;
- reaction of water with ultrabasic rocks (serpentinisation);
- contact of water with reducing agents in the Earth's mantle;
- interaction of water with freshly exposed rock surfaces (weathering);
- decomposition of hydroxyl ions in the structure of minerals;
- natural radiolysis of water;
- decomposition of organic matter;
- biological activity.
Extraction
Natural hydrogen is extracted from wells, mixed with other gases such as nitrogen or helium.
Several sources have been identified in France. Geologists Alain Prinzhofer and Eric Derville have demonstrated the existence of large reservoirs in a dozen countries, including Mali and the United States. However, their potential remains difficult to assess.
Numerous emanations on the ocean floor have been identified but are difficult to exploit. The discovery of a significant emergence in Russia in 2008 suggests the possibility of extracting native hydrogen in geological environments.
Geology
Natural hydrogen is generated continuously from a variety of natural sources. There are many known hydrogen emergences on mid-ocean ridges. Another of the known reactions, serpentinisation, occurs under the sea floor (in the oceanic crust).
Hydrogen of diagenetic origin (from iron oxidation) occurs in the sedimentary basins of cratons, notably in Russia. Other sources are being explored, such as mantle hydrogen, or hydrogen from radiolysis (natural electrolysis) or from bacterial activity. In France, the Alps and Pyrenees are suitable for exploitation. New Caledonia has hyperalkaline springs that show dihydrogen emissions. A large accumulation of natural hydrogen was discovered in Bourakebougou (Mali).
Characteristics
Dihydrogen is very soluble in fresh water, especially at depth (solubility increases with pressure).
https://en.wikipedia.org/wiki/Natural_hydrogen | Given these paragraphs about Natural hydrogen, what are some of its sources? | degassing of deep hydrogen from the Earth's crust and mantle; reaction of water with ultrabasic rocks (serpentinisation); contact of water with reducing agents in the Earth's mantle; interaction of water with freshly exposed rock surfaces (weathering); decomposition of hydroxyl ions in the structure of minerals; Natural radiolysis of water; decomposition of organic matter; biological activity; Extraction; Natural hydrogen is extracted from wells, mixed with other gases such as nitrogen or helium. |
null | false | 48 | Table-to-text generation is an important and challenging task in natural language processing, which aims to produce a summary of a numerical table BIBREF0, BIBREF1. The related methods can be empirically divided into two categories: pipeline models and end-to-end models. The former consists of content selection, document planning and realisation, mainly for early industrial applications, such as weather forecasting and medical monitoring. The latter generates text directly from the table through a standard neural encoder-decoder framework to avoid error propagation and has achieved remarkable progress. In this paper, we particularly focus on exploring how to improve the performance of neural methods on table-to-text generation.
Recently, ROTOWIRE, which provides tables of NBA players' and teams' statistics with a descriptive summary, has drawn increasing attention from the academic community. Figure FIGREF1 shows an example of part of a game's statistics and its corresponding computer-generated summary. We can see that the table has a formal structure including table row headers, table column headers and table cells. “Al Jefferson” is a table row header that represents a player, “PTS” is a table column header indicating that the column contains players' scores, and “18” is the value of the table cell, that is, Al Jefferson scored 18 points. Several related models have been proposed. They typically encode the table's records separately or as a long sequence and generate a long descriptive summary with a standard Seq2Seq decoder with some modifications. Wiseman explored two types of copy mechanism and found that the conditional copy model BIBREF3 performed better. Puduppully enhanced content selection ability by explicitly selecting and planning relevant records. Li improved the precision of describing data records in the generated texts by first generating a template and then filling in slots via a copy mechanism. Nie utilized results from pre-executed operations to improve the fidelity of generated texts. However, we claim that their encoding of tables as sets of records or a long sequence is not suitable, because (1) the table consists of multiple players and different types of information, as shown in Figure FIGREF1; the earlier encoding approaches only considered the table as sets of records or a one-dimensional sequence, which loses the information of the other (column) dimension; and (2) the table cells contain time-series data which change over time, meaning that historical data can help the model select content. Moreover, when a human writes a basketball report, he will not only focus on the players' outstanding performance in the current match, but also summarize players' performance in recent matches. Let's take Figure FIGREF1 again. Not only do the gold texts mention Al Jefferson's great performance in this match, they also state that “It was the second time in the last three games he's posted a double-double”. The gold texts also summarize John Wall's “double-double” performance in a similar way. Summarizing a player's performance in recent matches requires modeling a table cell with respect to its historical data (time dimension), which is absent in the baseline model. Although the baseline model Conditional Copy (CC) tries to summarize it for Gerald Henderson, it clearly produces wrong statements since he didn't get a “double-double” in this match.
To address the aforementioned problems, we present a hierarchical encoder that simultaneously models row, column and time dimension information. In detail, our model is divided into three layers. The first layer learns the representation of each table cell: we employ three self-attention models to obtain three representations of the table cell along its row, column and time dimensions. Then, in the second layer, we design a record fusion gate to identify the most important of those three dimensions and combine the representations into a dense vector. In the third layer, we use mean pooling to merge the previously obtained table cell representations in the same row into a representation of the table's row. Then, we use self-attention with a content selection gate BIBREF4 to filter out unimportant rows' information. To the best of our knowledge, this is the first work on neural table-to-text generation that models column and time dimension information. We conducted experiments on ROTOWIRE. Results show that our model outperforms existing systems, improving baseline BLEU from 14.19 to 16.85 ($+18.75\%$), P% of relation generation (RG) from 74.80 to 91.46 ($+22.27\%$), F1% of content selection (CS) from 32.49 to 41.21 ($+26.84\%$) and content ordering (CO) from 15.42 to 20.86 ($+35.28\%$) on the test set. It also exceeds the state-of-the-art model in terms of those metrics.
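A minimal PyTorch sketch of the record fusion gate described above (the exact parametrization is not specified here, so a single scoring layer followed by a softmax stands in):

```python
import torch
import torch.nn as nn

class RecordFusionGate(nn.Module):
    """Fuse the row-, column- and time-dimension views of a table cell."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores how informative each view is

    def forward(self, row_rep, col_rep, time_rep):
        views = torch.stack([row_rep, col_rep, time_rep], dim=-2)       # (..., 3, dim)
        weights = torch.softmax(self.score(views).squeeze(-1), dim=-1)  # (..., 3)
        return (weights.unsqueeze(-1) * views).sum(dim=-2)              # (..., dim)

gate = RecordFusionGate(dim=128)
cell = gate(torch.randn(16, 128), torch.randn(16, 128), torch.randn(16, 128))
print(cell.shape)  # torch.Size([16, 128]) -- one fused vector per cell
```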
Results show that our model outperforms existing systems, improving baseline BLEU from 14.19 to 16.85 ($+18.75\%$), P% of relation generation (RG) from 74.80 to 91.46 ($+22.27\%$), F1% of content selection (CS) from 32.49 to 41.21 ($+26.84\%$) and content ordering (CO) from 15.42 to 20.86 ($+35.28\%$) on the test set. | What is the performance of the model in this paper? | The model outperforms existing systems, improving baseline BLEU from 14.19 to 16.85 ($+18.75\%$), P% of relation generation (RG) from 74.80 to 91.46 ($+22.27\%$), F1% of content selection (CS) from 32.49 to 41.21 ($+26.84\%$), and content ordering (CO) from 15.42 to 20.86 ($+35.28\%$) on the test set.
null | false | null | Give me a list of funny American sitcoms that had their first season in the 2000s | Some funny American sitcoms from the 2000s are The Office, Parks and Recreation, Community, and Arrested Development
|
null | false | null | Command & Conquer: Red Alert takes place in a parallel universe. At the Trinity Site in New Mexico in 1946, Albert Einstein prepares to travel back in time. He activates his experimental time machine to find himself in Landsberg, Germany, on December 20, 1924, where he meets a young Adolf Hitler just after the latter's release from Landsberg Prison. Following a brief conversation between the two, Einstein shakes Hitler's hand, which erases him from the timeline.
Hitler's death prevents him from rising to power as leader of Nazi Germany, effectively creating a new timeline. Without Nazi Germany, the Soviet Union (USSR) grows powerful under the rule of Joseph Stalin. The USSR seizes land from China and then invades Eastern Europe to achieve Stalin's vision of a Soviet Union stretching across the entire Eurasian landmass. In response, the countries of Western Europe (including an already-rearmed Germany) form the Allied Nations and start a guerrilla war against the invading Soviet army. Over the course of the game's story, the Allies and Soviets fight for control over the European mainland in an alternate World War II.
null | false | 218 | The standard approach to multilingual NLP is to design a single architecture, but tune and train a separate model for each language. While this method allows for customizing the model to the particulars of each language and the available data, it also presents a problem when little data is available: extensive language-specific annotation is required. The reality is that most languages have very little annotated data for most NLP tasks.
ammar2016malopa found that using training data from multiple languages annotated with Universal Dependencies BIBREF1, and represented using multilingual word vectors, outperformed monolingual training. Inspired by this, we apply the idea of training one model on multiple languages—which we call polyglot training—to PropBank-style semantic role labeling (SRL). We train several parsers for each language in the CoNLL 2009 dataset BIBREF0: a traditional monolingual version, and variants which additionally incorporate supervision from the English portion of the dataset. To our knowledge, this is the first multilingual SRL approach to combine supervision from several languages.
The CoNLL 2009 dataset includes seven different languages, allowing study of trends across them. Unlike the Universal Dependencies dataset, however, the semantic label spaces are entirely language-specific, making our task more challenging. Nonetheless, the success of polyglot training in this setting demonstrates that sharing statistical strength across languages does not depend on explicit alignment in annotation conventions, and can be achieved simply through parameter sharing. We show that polyglot training can result in better labeling accuracy than a monolingual parser, especially for low-resource languages. We find that even a simple combination of data is as effective as more complex kinds of polyglot training. We include a breakdown of the differences between the monolingual and polyglot models by label category. Our findings indicate that polyglot training consistently improves label accuracy for common labels.
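A minimal sketch of this "simple combination" of data (the language-ID token format is an assumption; the key point is that one shared model is trained on both corpora):

```python
def make_polyglot_corpus(target_sents, english_sents, target_code):
    # Each sentence is a list of tokens; a language-ID token is prepended
    # so the shared model can condition on the input language.
    corpus = [[f"<lang:{target_code}>"] + s for s in target_sents]
    corpus += [["<lang:en>"] + s for s in english_sents]
    return corpus

mixed = make_polyglot_corpus([["der", "Hund", "bellt"]],
                             [["the", "dog", "barks"]], "de")
print(mixed)  # one corpus, two languages, shared parameters downstream
```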
Inspired by this, we apply the idea of training one model on multiple languages—which we call polyglot training—to PropBank-style semantic role labeling (SRL). | What do they want to apply training one model on multiple languages to? | PropBank-style semantic role labeling. |
null | false | null | Classify these objects based on their shape.
wheel, coin, CD, stamp, chess board | Round - wheel, coin, CD
Square - stamp, chess board |
|
null | false | 226 | The goal of multi-document summarization (MDS) is to automatically generate a brief, well-organized summary for a topic which describes an event, given a set of documents from different sources BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. In the typical setting of MDS, the input is a set of news documents about the same topic. The output summary is a short text of several sentences, generated only from the input original documents.
With the development of social media and mobile devices, more and more user-generated content is available. Figure FIGREF2 is a snapshot of reader comments under the news report “The most important announcements from Google's big developers' conference”. The original news report discusses some new products based on AI techniques and generally conveys an enthusiastic tone. However, while some readers share this enthusiasm, others express worries about the new products and technologies, and these comments can also reflect reader interests which may not be very salient in the original news reports. Unfortunately, existing MDS approaches cannot handle this issue. We investigate this problem, known as reader-aware multi-document summarization (RA-MDS). Under the RA-MDS setting, one should jointly consider news documents and reader comments when generating the summaries.
One challenge of the RA-MDS problem is how to conduct salience estimation by jointly considering the focus of news reports and the reader interests revealed by comments. Meanwhile, the model should be insensitive to the availability of diverse aspects of reader comments. Another challenge is that reader comments are very diverse and noisy: they are not fully grammatical and are often written in informal expressions. Some previous works explore the effect of comments or social contexts in single-document summarization, such as blog summarization BIBREF7, BIBREF8. However, the problem setting of RA-MDS is more challenging because the considered comments are about an event which is described by multiple documents spanning a time period. Recently, BIBREF9 employed a sparse coding based framework for RA-MDS, jointly considering news documents and reader comments via an unsupervised data reconstruction strategy. However, they only used the bag-of-words method to represent texts, which cannot capture the complex relationship between documents and comments.
Recently, BIBREF6 proposed a sentence salience estimation framework known as VAESum, based on a neural generative model called Variational Auto-Encoders (VAEs) BIBREF10, BIBREF11. During our investigation, we found that Gaussian-based VAEs have a strong ability to capture salience information and filter noise from texts. Intuitively, if we feed both the news sentences and the comment sentences into the VAEs, latent aspect information shared by both will be enhanced and become salient. Inspired by this consideration, to address the sentence salience estimation problem for RA-MDS by jointly considering news documents and reader comments, we extend the VAESum framework by training the news sentence latent model and the comment sentence latent model simultaneously, sharing the neural parameters. After estimating the sentence salience, we employ a phrase-based compressive unified optimization framework to generate the final summary.
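A minimal sketch of a Gaussian VAE whose parameters are shared between news and comment sentence vectors (layer sizes, the use of pre-computed sentence vectors, and the MSE reconstruction loss are assumptions; the original framework reconstructs richer sentence representations):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSentenceVAE(nn.Module):
    """One VAE applied to both views; parameter sharing couples them."""
    def __init__(self, d_in=200, d_z=64):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)  # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.dec(z)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return F.mse_loss(recon, x) + kl

vae = SharedSentenceVAE()
news, comments = torch.randn(32, 200), torch.randn(32, 200)
loss = vae(news) + vae(comments)  # the same parameters see both sentence types
loss.backward()
```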
There is a lack of high-quality datasets suitable for RA-MDS; existing datasets from DUC and TAC are not appropriate. Therefore, we introduce a new dataset for RA-MDS. We employed experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing. To the best of our knowledge, this is the first dataset for RA-MDS.
Our contributions are as follows: (1) We investigate the RA-MDS problem and introduce a new dataset for it; to the best of our knowledge, it is the first dataset for RA-MDS. (2) To tackle RA-MDS, we extend a VAEs-based MDS framework by jointly considering news documents and reader comments. (3) Experimental results show that reader comments can improve the summarization performance, which also demonstrates the usefulness of the dataset.
Recently, BIBREF6 proposed a sentence salience estimation framework known as VAESum based on a neural generative model called Variational Auto-Encoders (VAEs). | What are the Variational Auto-Encoders? | Variational Auto-Encoders (VAEs) are a neural generative model; the sentence salience estimation framework VAESum is based on them.
null | false | null | Any tips for figuring out where to enter the water when surfing a new beach? | There is a lot of local knowledge required to be efficient at entering the water of a new beach. It helps to know what kind of a wave break it is - is it a beach break or a reef break, for example. But if you see others in the water and it looks like a reasonable surf spot, then here are a couple of pointers. First, stand on the beach and watch what is happening for 3-5 minutes. You'll soak up a lot just taking the time to really observe. Second, check out where the other surfers are entering and where they are swimming out to. Obvious, I know, but surprisingly people will sometimes skip this step. Third, see if you can figure out where the channels are - the channel is where the waves are breaking towards. If you see two waves coming toward the shore in synchronization with each other, but separated by a patch of calmer water in between - the channel is typically below that calmer water.
|
null | false | null | Cobb and Arthur are "extractors"; they perform corporate espionage using experimental dream-sharing technology to infiltrate their targets' subconscious and extract information. Their latest target, Saito, is impressed with Cobb's ability to layer multiple dreams within each other. He offers to hire Cobb for the supposedly impossible job of implanting an idea into a person's subconscious; performing "inception" on Robert, the son of Saito's competitor Maurice Fischer, with the idea to dissolve his father's company. Saito promises to clear Cobb's criminal status, allowing him to return home to his children.
Cobb accepts the offer and assembles his team: a forger named Eames, a chemist named Yusuf, and a college student named Ariadne. Ariadne is tasked with designing the dream's architecture, something Cobb himself cannot do for fear of being sabotaged by a projection of his late wife Mal. Maurice dies, and the team sedates Robert into a three-layer shared dream on an airplane to America. Time on each layer runs slower than the layer above, with one member staying behind on each to perform a music-synchronized "kick" to awaken dreamers on all three levels simultaneously. | Extract the names of the team involved to perform inception on Robert. Separate them with a comma. | Cobb, Arthur, Saito, Eames, Yusuf, Ariadne
2003.00576 | true | null | Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff's matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures (§SECREF3). The explicit attention module is linguistically motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference-based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5).
Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. | Is there any evidence that encoders with latent structures work well on other tasks? | Yes. |
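A numpy sketch of the tree marginals such a matrix-tree attention module computes, following the standard construction of Koo et al. (2007) that structured-attention models reuse (placing root potentials in the 0-indexed first row is one common convention, assumed here):

```python
import numpy as np

def nonprojective_tree_marginals(pair_logits, root_logits):
    """Edge marginals of a distribution over non-projective dependency trees,
    via the Matrix-Tree Theorem. pair_logits[i, j] scores item i heading item j."""
    n = pair_logits.shape[0]
    A = np.exp(pair_logits) * (1.0 - np.eye(n))   # edge potentials, no self-loops
    L = np.diag(A.sum(axis=0)) - A                # Laplacian with column-sum degrees
    L_bar = L.copy()
    L_bar[0, :] = np.exp(root_logits)             # first row holds root potentials
    inv = np.linalg.inv(L_bar)
    not_first = (np.arange(n) != 0).astype(float)
    edge = A * (np.diag(inv)[None, :] * not_first[None, :]
                - inv.T * not_first[:, None])     # P(i is the head of j)
    root = np.exp(root_logits) * inv[:, 0]        # P(j attaches to the root)
    return edge, root

edge, root = nonprojective_tree_marginals(np.random.randn(5, 5), np.random.randn(5))
print(np.allclose(edge.sum(axis=0) + root, 1.0))  # each node has exactly one head
```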
null | false | null | What are the different ways to set up a stereo music system at home? | There are a few parameters/criteria that you will have to think through:
1. Where will this be located
2. Would you want to share it with others or have a personal listening session
3. Would your stereo setup play a dual role as part of your Home Theater system
4. How large is the space
5. What is your budget
There are different components to a music system:
1. Music source - Streaming (e.g. Spotify), CD, LP record player
2. Digital to Analog converter (DAC)
3. Preamplifier - Required for volume control and if you have multiple sources
4. Stereo Amplifier
5. Speakers or Headphones
-- Speakers - Can be bookshelf speakers (for small spaces) or floorstanding speakers (for large spaces)
-- Headphones - Can be open-back (great for quiet places with no other people around who might be disturbed) or closed-back (for noisy environments or when other people are present). You will also need to decide whether you want noise cancellation (use on flights, etc., will require a closed-back headphone with Active Noise Cancellation)
|
1911.03562 | false | null | Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citation areas within NLP.
Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citation areas within NLP. | Which NLP areas have the highest average citations for women authors? | The answers are shown as follows:
* sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, language generation
|
1909.03526 | false | null | Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here:
Author profiling and deception detection in Arabic (APDA) BIBREF9. From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into a 90% training set ($n$=202,500 tweets) and a 10% development set ($n$=22,500 tweets). With regard to age, the authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male, female} tags.
LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed-phrase approach and extend the emotion data collection work of Abdul-Mageed et al. BIBREF11 from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. The authors split it into a training set of 189,902 tweets, a development set of 910, and a test set of 941. In our experiments, we use only the training set for our MTL experiments.
Sentiment analysis in Arabic tweets. This dataset is from a shared task on Kaggle by Motaz Saad. The corpus contains 58,751 Arabic tweets (46,940 training and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon.
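A minimal sketch of the shared-encoder, per-task-head pattern that such multi-task BERT models follow (the checkpoint name is an assumption, and the label counts are read off the task descriptions above; this is not the authors' exact configuration):

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBert(nn.Module):
    def __init__(self, model_name, tasks):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared by all tasks
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n_labels) for task, n_labels in tasks.items()})

    def forward(self, input_ids, attention_mask, task):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.heads[task](cls)       # logits for the requested task

tasks = {"age": 3, "gender": 2, "variety": 15, "emotion": 8, "sentiment": 2}
model = MultiTaskBert("bert-base-multilingual-cased", tasks)
```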
Our multi-task BERT models involve six different Arabic classification tasks.
Author profiling and deception detection in Arabic (APDA).
LAMA+DINA Emotion detection.
Sentiment analysis in Arabic tweets. | What are the tasks used in the mulit-task learning setup? | The answers are shown as follows:
* Author profiling and deception detection in Arabic
* LAMA+DINA Emotion detection
* Sentiment analysis in Arabic tweets
|