Column schema (name, type, value range):
paper_id      stringlengths   10 – 10
yes_no        bool            2 classes
paper_index   int64           0 – 519
evidence      stringlengths   0 – 37.7k
question      stringlengths   4 – 11.7k
answer        stringlengths   1 – 26k

Each record below lists its fields in this order: paper_id, yes_no, paper_index, evidence (omitted when empty), question, answer.
null
false
262
Training Language Models (LMs) is a straightforward way to collect a set of rules by utilizing the fact that words do not appear in an arbitrary order; we can in fact gain useful information about a word by knowing the company it keeps BIBREF7. A statistical language model estimates the probability of a sequence of words or of an upcoming word. An N-gram is a contiguous sequence of N words: a unigram is a single word, a bigram is a two-word sequence, and a trigram is a three-word sequence. For example, in the tweet tears in Ramen #SingleLifeIn3Words, “tears”, “in”, “Ramen” and “#SingleLifeIn3Words” are unigrams; “tears in”, “in Ramen” and “Ramen #SingleLifeIn3Words” are bigrams; and “tears in Ramen” and “in Ramen #SingleLifeIn3Words” are trigrams. An N-gram model can predict the next word from a sequence of N-1 previous words. A trigram Language Model (LM) predicts the conditional probability of the next word using the following approximation: $P(w_n \mid w_1^{n-1}) \approx P(w_n \mid w_{n-2}, w_{n-1})$. The assumption that the probability of a word depends only on a small number of previous words is called a Markov assumption BIBREF8. Given this assumption, the probability of a sentence can be estimated as follows: $P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-2}, w_{k-1})$. In a study on how phrasing affects memorability, BIBREF9 take a language model approach to measure the distinctiveness of memorable movie quotes. They do this by evaluating a quote with respect to a “common language” model built from the newswire sections of the Brown corpus BIBREF10. They find that movie quotes which are less like “common language” are more distinctive and therefore more memorable. The intuition behind our approach is that humor should in some way be memorable or distinct, so tweets that diverge from a “common language” model would be expected to be funnier. In order to evaluate how funny a tweet is, we train language models on two datasets: the tweet data and the news data. Tweets that are more probable according to the tweet-data language model are ranked as funnier. Conversely, tweets that have a lower probability according to the news language model are considered funnier, since they are the least like the (unfunny) news corpus. We relied on both bigrams and trigrams when training our models. We use KenLM BIBREF11 as our language modeling tool. Language models are estimated using modified Kneser-Ney smoothing without pruning. KenLM also implements a back-off technique: if an N-gram is not found, KenLM applies the lower-order N-gram's probability along with its back-off weights.
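As a minimal illustration of the trigram estimation above, the sketch below uses plain maximum-likelihood counts over whitespace tokens; KenLM's Kneser-Ney smoothing and back-off are not reproduced, and the two training sentences are toy assumptions:

```python
from collections import Counter

def train_trigram_lm(sentences):
    """Count bigrams and trigrams, padding sentence starts with <s>."""
    bigrams, trigrams = Counter(), Counter()
    for sent in sentences:
        toks = ["<s>", "<s>"] + sent.split() + ["</s>"]
        for i in range(2, len(toks)):
            bigrams[(toks[i - 2], toks[i - 1])] += 1
            trigrams[(toks[i - 2], toks[i - 1], toks[i])] += 1
    return bigrams, trigrams

def sentence_prob(sent, bigrams, trigrams):
    """P(w_1..w_n) ~= prod_k P(w_k | w_{k-2}, w_{k-1}) under the Markov assumption."""
    toks = ["<s>", "<s>"] + sent.split() + ["</s>"]
    p = 1.0
    for i in range(2, len(toks)):
        ctx = (toks[i - 2], toks[i - 1])
        p *= trigrams[(ctx[0], ctx[1], toks[i])] / bigrams[ctx]  # MLE, no smoothing
    return p

bi, tri = train_trigram_lm(["tears in Ramen", "tears in Ramen #SingleLifeIn3Words"])
print(sentence_prob("tears in Ramen", bi, tri))  # 0.5: half the mass after "in Ramen"
```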
In the work of Danescu-Niculescu-Mizil et al., what approach do they adopt to measure the distinctiveness of memorable movie quotes?
They take a language model approach, by evaluating a quote with respect to a “common language” model built from the newswire sections of the Brown corpus.
null
false
88
Given an entity in most of the existing knowledge bases, there is always an available corresponding text description with valuable semantic information for this entity, which can provide a beneficial supplement for entity representation. To encode the representation of an entity from its text description, we need to encode the variable-length sentence into a fixed-length vector. There are several kinds of neural models used in sentence modeling. These models generally consist of a projection layer that maps words, sub-word units or n-grams to vector representations (often trained beforehand with unsupervised methods), and then combine them with different architectures of neural networks, such as neural bag-of-words (NBOW), recurrent neural networks (RNN) BIBREF9, BIBREF10, BIBREF11 and convolutional neural networks (CNN) BIBREF12, BIBREF13. In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions.
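A minimal numpy sketch of the NBOW encoder named above, which maps a variable-length description to a fixed-length vector by averaging word embeddings; the vocabulary, embedding values, and dimensionality are toy assumptions, and the LSTM and attentive LSTM encoders are not shown:

```python
import numpy as np

# Toy "pre-trained" projection layer: word -> d-dimensional vector (d = 4 here).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in ["a", "city", "in", "northern", "france"]}

def nbow_encode(description):
    """Neural bag-of-words: average the embeddings of all known words."""
    vecs = [emb[w] for w in description.lower().split() if w in emb]
    return np.mean(vecs, axis=0)  # fixed-length vector regardless of input length

print(nbow_encode("A city in northern France").shape)  # (4,)
```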
What encoders are used in the task?
They use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions.
null
false
null
Name the main crochet stitches
The main crochet stitches include the single crochet stitch, the double crochet stitch, the half double crochet stitch, the treble stitch and the slip stitch
null
false
null
What is the Masters?
The Masters Tournament is a golf tournament held annually in the first week of April at Augusta National Golf Club in Augusta, Georgia. The Masters is one of four Major golf tournaments and the only one to be played at the same course every year. The course is renowned for its iconic holes, impeccable groundskeeping, and colorful flowers that are typically in bloom. The winner earns a coveted Green Jacket and a lifetime invitation back to compete. Many players and fans consider The Masters to be their favorite tournament given these traditions and the historical moments that have occurred in past tournaments.
null
false
null
Tell me which continent the following cities are in:
- Seattle
- Helsinki
- Lagos
- Osaka
- Istanbul
Seattle is located in North America
Helsinki is in Europe
Lagos is in Africa
Osaka is in Asia
Istanbul is partially located in both Europe and Asia
null
false
null
I’m playing $1/$2 poker with friends. A friend had AK and I had AA on the button. I raised to $15 after a small raise before me and my friend with AK called. The other player folded. The flop came K, 8, 4 with no common suits. My friend checked and I bet 1/2 pot. He called. The next card was a 10 and I bet full pot (about $70) and he raised me all in to about $200. I called. The next card was meaningless and I won the pot. Did he play the hand well?
He played the hand aggressively, but not poorly. He had to assume he had the lead after the flop. His best move would have been to call your aggressive pot bet instead of going all in, but given he had top pair with top kicker, it's not a bad move. He was unlucky to be up against aces.
null
false
321
The past decade witnessed rapid growth and widespread usage of social media platforms, generating a significant amount of user-generated text. User-generated texts carry high information content in the form of news, expression, or knowledge. Automatically mining information from user-generated data is unraveling a new field of research in Natural Language Processing (NLP) and has been a difficult task due to its unstructured and noisy nature. In spite of the existing challenges, much research has been conducted on user-generated data in the fields of information extraction, sentiment analysis, event extraction, user profiling and many more. According to the Census of India, there are 22 scheduled languages and more than 100 non-scheduled languages in India. There are 462 million internet users in India and most people know more than one language. They express their feelings or emotions using more than one language, thus generating a new code-mixed/code-switched language. The problems of code-mixing and code-switching are well studied in the field of NLP BIBREF0, BIBREF1. Information extraction from Indian internet user-generated texts becomes more difficult due to this multilingual nature. Much research has been conducted in this field, such as language identification BIBREF2, BIBREF3 and part-of-speech tagging BIBREF4. Joshi et al. (JoshiPSV16) performed sentiment analysis on Hindi-English (HI-EN) code-mixed data, and almost no work exists on sentiment analysis of Bengali-English (BN-EN) code-mixed texts. The Sentiment Analysis of Indian Language (Code-Mixed) (SAIL_Code-Mixed) shared task was held at ICON-2017. The two most popular code-mixed language pairs, namely Hindi and Bengali mixed with English, were considered for the sentiment identification task. A total of 40 participants registered for the shared task and only nine teams submitted their predicted outputs. Out of the nine unique systems submitted for evaluation, eight teams submitted fourteen runs for the HI-EN dataset whereas seven teams submitted nine runs for the BN-EN dataset. The training and test datasets were provided after annotating the language and sentiment (positive, negative, and neutral) tags. The language tags were automatically annotated with the help of different dictionaries whereas the sentiment tags were manually annotated. The submitted systems are ranked using the macro-averaged f-score. The paper is organized as follows. Section SECREF2 describes NLP in Indian languages, mainly related to code-mixing and sentiment analysis. The detailed statistics of the dataset and the evaluation are described in Section SECREF3. The baseline systems and participants' system descriptions are given in Section SECREF4. Finally, conclusions and future research are drawn in Section SECREF5.
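Since submitted systems are ranked by the macro-averaged f-score, the metric can be computed as in this small sketch; the gold and predicted tags below are illustrative stand-ins, not actual SAIL data:

```python
from sklearn.metrics import f1_score

# Illustrative gold and predicted sentiment tags (not from the actual shared task).
y_true = ["positive", "negative", "neutral", "positive", "neutral", "negative"]
y_pred = ["positive", "neutral",  "neutral", "positive", "negative", "negative"]

# average="macro": unweighted mean of per-class F1, so each class counts equally.
print(f1_score(y_true, y_pred, average="macro"))
```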
How many participants registered for the shared task?
40 participants.
null
false
129
Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0, BIBREF1, recent research showed performance improvements by applying neural networks (NNs) BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8. This study investigates two different types of NNs, recurrent neural networks (RNNs) and convolutional neural networks (CNNs), as well as their combination. We make the following contributions: (1) We propose the extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part. (2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization, which has not been investigated in the literature for relation classification before. (3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset.
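A numpy sketch of a pairwise ranking loss of the kind referenced in contribution (2), in the style of dos Santos et al. (2015); the margins, scaling factor, class scores, and hard-negative choice are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def ranking_loss(scores, gold, m_pos=2.5, m_neg=0.5, gamma=2.0):
    """Pairwise ranking loss: push the gold class score above m_pos and the
    score of the hardest competing class below -m_neg."""
    s_gold = scores[gold]
    s_neg = max(s for i, s in enumerate(scores) if i != gold)  # hardest competitor
    return (np.log1p(np.exp(gamma * (m_pos - s_gold)))
            + np.log1p(np.exp(gamma * (m_neg + s_neg))))

# Toy scores over 4 relation classes; the gold relation is class index 2.
print(ranking_loss(np.array([0.1, -0.3, 1.8, 0.4]), gold=2))
```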
Has the ranking loss function introduced in this paper been investigated in the literature for relation classification before?
No.
null
false
null
Asian Games: Kabaddi was played as a demonstration event at the First Asian Games in 1951, and again in 1982, before becoming a medal event for the first time in 1990. The Indian national team won every men's and women's kabaddi competition in the Asian Games from 2002 through 2014. At the 2018 Asian Games, Iran became the first country other than India to win gold medals in kabaddi, with India's men's team winning bronze, and India's women's team being beaten by Iran to win silver.
Pro Kabaddi League: The Pro Kabaddi League was established in 2014. The league modeled its business upon that of the Indian Premier League of Twenty20 cricket, with a large focus on marketing, the backing of local broadcaster Star Sports, and changes to the sport's rules and its presentation to make it more suitable for a television audience. The Pro Kabaddi League quickly became a ratings success on Indian television; the 2014 season was watched by at least 435 million viewers over the course of the season, and the inaugural championship match was seen by 98.6 million viewers. Additional rules are used in the Pro Kabaddi League to encourage scoring: when a defensive side has three or fewer players remaining, tackles are worth two points instead of one. Furthermore, if a team performs two empty raids in a row, the next raider must score a point, or else they will be declared out and the opposing team will score a point.
From the passage provided, extract the year Pro Kabaddi League was established.
The Pro Kabaddi League was established in 2014
1909.00694
false
null
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ and $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession), although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
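A tiny sketch of the propagation heuristic described above (the neural models and objective functions are not shown); the seed lexicon and event strings are invented examples:

```python
def propagate_polarity(pair, seed_lexicon):
    """Propagate seed polarity across a discourse pair (x1, relation, x2):
    Cause tends to preserve x2's polarity, Concession tends to reverse it."""
    x1, relation, x2 = pair
    if x2 not in seed_lexicon:
        return None  # x2's polarity unknown; only the same/reverse tendency applies
    pol = seed_lexicon[x2]
    if relation == "Cause":
        return pol        # x1 likely shares x2's polarity
    if relation == "Concession":
        return -pol       # reverse of x2's polarity propagates to x1
    return None

seeds = {"to be glad": +1}  # tiny seed lexicon: +1 positive, -1 negative
print(propagate_polarity(("to pass the exam", "Cause", "to be glad"), seeds))     # +1
print(propagate_polarity(("to be scolded", "Concession", "to be glad"), seeds))   # -1
```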
How are relations used to propagate polarity?
Based on the discourse relation between two events, the known polarity of one event can determine the likely polarity of the other event.
null
false
null
What are the five Romance languages?
The five Romance languages, in order of number of speakers, are: Spanish, Portuguese, French, Italian and Romanian.
1910.11204
false
null
Table TABREF46 shows that our Open model achieves an f1-score more than 3 points higher than the state-of-the-art result, and that RelAwe with DepPath&RelPath achieves the best performance in both the Closed and Open settings. Notice that our best Closed model can almost perform as well as the state-of-the-art model, while the latter utilizes pre-trained word embeddings. Besides, the performance gap between the three models under the Open setting is very small. This indicates that the representation ability of BERT is very powerful and may contain rich syntactic information. Finally, the Gold result is much higher than those of the other models, indicating that there is still large room for improvement on this task.
Table 7: SRL results on the Chinese test set. We choose the best settings for each configuration of our model.
What is the new state-of-the-art performance on the CoNLL-2009 dataset?
84.22 F1 in the Closed setting and 87.35 F1 in the Open setting.
null
false
156
Modern topic identification (topic ID) systems for speech use automatic speech recognition (ASR) to produce speech transcripts, and perform supervised classification on such ASR outputs. However, under resource-limited conditions, the manually transcribed speech required to develop standard ASR systems can be severely limited or unavailable. In this paper, we investigate alternative unsupervised solutions to obtaining tokenizations of speech in terms of a vocabulary of automatically discovered word-like or phoneme-like units, without depending on the supervised training of ASR systems. Moreover, using automatic phoneme-like tokenizations, we demonstrate that a convolutional neural network based framework for learning spoken document representations provides competitive performance compared to a standard bag-of-words representation, as evidenced by comprehensive topic ID evaluations on both single-label and multi-label classification tasks.
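A hedged sketch of the standard bag-of-words topic ID baseline, operating on documents rewritten as sequences of automatically discovered unit IDs; the unit IDs, documents, topics, and classifier choice are invented for illustration, and the CNN representation learner is not shown:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each document is a sequence of automatically discovered unit IDs, e.g. "u17 u4 ...".
docs   = ["u17 u4 u92 u4", "u3 u3 u55 u17", "u92 u4 u17 u55", "u55 u3 u3 u92"]
topics = ["weather", "sports", "weather", "sports"]

# Bag-of-words over unit tokens, then a linear classifier for topic ID.
clf = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LogisticRegression())
clf.fit(docs, topics)
print(clf.predict(["u17 u92 u4"]))
```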
Why do they want unsupervised solutions to modern topic identification systems?
Because under resource-limited conditions, the manually transcribed speech required to develop standard ASR systems may be severely limited or unavailable.
2004.02929
false
null
A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of text will be labeled as anglicisms (in a similar way to an NER task). The chosen model was a conditional random field (CRF) model, which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24.
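A minimal sketch of a CRF sequence labeler for this kind of span detection, using the sklearn-crfsuite package; the feature set, BIO-style tag names, and toy sentences are assumptions, not the paper's actual configuration:

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def featurize(tokens):
    """Very small per-token feature set; the paper's exact features are not reproduced."""
    return [{"lower": t.lower(), "is_title": str(t.istitle()),
             "suffix3": t[-3:]} for t in tokens]

# Toy BIO-style training data: anglicism spans labeled B-ENG/I-ENG, everything else O.
sents  = [["El", "nuevo", "look", "de", "streaming"], ["Una", "startup", "local"]]
labels = [["O", "O", "B-ENG", "O", "B-ENG"], ["O", "B-ENG", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([featurize(s) for s in sents], labels)
print(crf.predict([featurize(["Su", "look", "casual"])]))
```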
Does the paper motivate the use of CRF as the baseline model?
The answers are shown as follows: * the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data
null
false
null
Why is the United States Military Academy called West Point?
The United States Military Academy is called West Point because it is on the western point of an S-turn on the Hudson River. During the American Revolution, the Revolutionary Army stretched an iron chain across the Hudson at this point because British ships would have to slow down to navigate the S-turn, making them susceptible to cannon fire. The British never tested the chain, and their Naval forces remained separated in Canada and New York. Due to its success on the western point of the river, the United States Military Academy has been commonly referred to as "West Point," since the military academy became the first training institution for U.S. Army officers in 1802.
null
false
42
Our underlying model architecture is a standard attentional encoder-decoder BIBREF1. Let $x$ and $y$ denote the source and target sentences, respectively. We use a Bi-LSTM encoder to represent the source words as a matrix $H$. The conditional probability of the target sentence is given as $p(y \mid x) = \prod_{t=1}^{|y|} p(y_t \mid y_{<t}, x)$, where $p(y_t \mid y_{<t}, x)$ is computed by a softmax output layer that receives a decoder state $s_t$ as input. This state is updated by an auto-regressive LSTM, $s_t = \mathrm{LSTM}(s_{t-1}, y_{t-1}, c_t)$, where $c_t$ is an input context vector. This vector is computed as $c_t = H\alpha_t$, where $\alpha_t$ is a probability distribution that represents the attention over the source words, commonly obtained as $\alpha_t = \mathrm{softmax}(z_t)$, where $z_t$ is a vector of scores. We follow BIBREF14 and define $z_t = H^\top W s_t$ as a bilinear transformation of encoder and decoder states, where $W$ is a model parameter.
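A numpy sketch of the attention computation just defined ($z_t = H^\top W s_t$, $\alpha_t = \mathrm{softmax}(z_t)$, $c_t = H\alpha_t$); all dimensions and values are toy assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d_enc, d_dec, src_len = 6, 5, 4          # toy dimensions
rng = np.random.default_rng(0)
H = rng.normal(size=(d_enc, src_len))    # encoder states as columns of H
W = rng.normal(size=(d_enc, d_dec))      # bilinear attention parameter
s = rng.normal(size=d_dec)               # current decoder state

z = H.T @ W @ s          # scores: z_t = H^T W s_t
alpha = softmax(z)       # attention distribution over source words
c = H @ alpha            # context vector: c_t = H alpha_t
print(alpha.round(3), c.shape)
```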
What is their underlying model architecture?
A standard attentional encoder-decoder.
null
false
null
What is a Google Pixel?
A Google Pixel is a smartphone whose hardware and software - or Operating System - are made by Google Inc. The OS is based on Android and does not include bloatware or 3rd-party applications upon setup. Google frequently releases new security updates and new features which are not necessarily part of the general Android update cycle.
null
false
null
What is Canada's national sport?
There are two national sports in Canada. Lacrosse was the only national sport from 1859 to 1994. However, in 1994 parliament passed Canada's National Sport Act, which made lacrosse the national summer sport and ice hockey the national winter sport.
null
false
null
What is the best way to reduce my carbon emissions?
A great way to reduce your carbon emissions is reducing the amount of car trips in favour of walking, biking, or taking public transit. Other ways could be buying items used instead of new to save on emissions from the manufacturing process. You could also consider reducing airplane trips by reducing business travel, or taking a train instead.
null
false
null
Brown v. Board of Education of Topeka, 347 U.S. 483 (1954), was a landmark decision by the U.S. Supreme Court, which ruled that U.S. state laws establishing racial segregation in public schools are unconstitutional, even if the segregated schools are otherwise equal in quality. The decision partially overruled the Court's 1896 decision Plessy v. Ferguson, which had held that racial segregation laws did not violate the U.S. Constitution as long as the facilities for each race were equal in quality, a doctrine that had come to be known as "separate but equal". The Court's decision in Brown paved the way for integration and was a major victory of the civil rights movement, and a model for many future impact litigation cases. The underlying case began in 1951 when the public school system in Topeka, Kansas, refused to enroll local black resident Oliver Brown's daughter at the elementary school closest to their home, instead requiring her to ride a bus to a segregated black school farther away. The Browns and twelve other local black families in similar situations filed a class-action lawsuit in U.S. federal court against the Topeka Board of Education, alleging that its segregation policy was unconstitutional. A special three-judge court of the U.S. District Court for the District of Kansas rendered a verdict against the Browns, relying on the precedent of Plessy v. Ferguson and its "separate but equal" doctrine. The Browns, represented by NAACP chief counsel Thurgood Marshall, then appealed the ruling directly to the Supreme Court. In May 1954, the Supreme Court issued a unanimous 9–0 decision in favor of the Browns. The Court ruled that "separate educational facilities are inherently unequal", and therefore laws that impose them violate the Equal Protection Clause of the Fourteenth Amendment of the U.S. Constitution. However, the decision's 14 pages did not spell out any sort of method for ending racial segregation in schools, and the Court's second decision in Brown II (349 U.S. 294 (1955)) only ordered states to desegregate "with all deliberate speed". In the Southern United States, especially the "Deep South", where racial segregation was deeply entrenched, the reaction to Brown among most white people was "noisy and stubborn". Many Southern governmental and political leaders embraced a plan known as "Massive Resistance", created by Virginia Senator Harry F. Byrd, in order to frustrate attempts to force them to de-segregate their school systems. Four years later, in the case of Cooper v. Aaron, the Court reaffirmed its ruling in Brown, and explicitly stated that state officials and legislators had no power to nullify its ruling.
Summarize Brown vs. Board of Education of Topeka in one sentence.
Brown vs. the Board of Education of Topeka was a unanimous 1954 ruling by the U.S. Supreme Court that racial segregation in U.S. public schools was unconstitutional, which paved the way for civil rights and racial integration.
null
false
null
What are the distinguishing traits of Neapolitan tailoring?
Neapolitan tailoring, when compared to its British counterpart, is generally more relaxed and informal. The armholes are usually larger, offering a wider range of movement, and the shoulders are less padded, providing a more natural look. Most famously, however, Neapolitan tailors traditionally use a "spalla camicia" (a "shirt shoulder") on their suit jackets. This means the jacket sleeve is attached into the armhole in the same way a shirt sleeve would be, giving it some light crumpling at the top where it meets the shoulder.
null
false
null
Select the dishes that are Chinese from this menu: hamburger, pizza, fried rice, soup dumplings, spaghetti, scallion pancakes.
From this list, the dishes that can be found in a Chinese menu are fried rice, soup dumplings, and scallion pancakes.
null
false
null
Academic dress is a traditional form of clothing for academic settings, mainly tertiary (and sometimes secondary) education, worn mainly by those who have obtained a university degree (or similar), or hold a status that entitles them to assume them (e.g., undergraduate students at certain old universities). It is also known as academical dress, academicals, and, in the United States, as academic regalia. Contemporarily, it is commonly seen only at graduation ceremonies, but formerly academic dress was, and to a lesser degree in many ancient universities still is, worn daily. Today, the ensembles are distinctive in some way to each institution, and generally consist of a gown (also known as a robe) with a separate hood, and usually a cap (generally either a square academic cap, a tam, or a bonnet). Academic dress is also worn by members of certain learned societies and institutions as official dress.
What parts are there in an academic dress? Extract them and place them in a numbered list.
1. Gown
2. Hood
3. Cap
null
false
null
What Breaking Bad actor guest starred on Season 6, Episode 2 "Drive" of The X-Files?
Bryan Cranston guest starred as Patrick Crump.
null
false
null
Identify which animal species is alive or extinct: Theropod, Pronghorn
Pronghorn is alive, Theropod is extinct.
null
false
null
I really like flying and am thinking about joining the U.S. military. If I want to be a pilot, which part of the military should I join?
If you join the military and want to be a pilot, you can join any branch, but you will have different options in each. The Air Force primarily trains jet pilots and a smaller number of helicopter pilots. The Navy also trains mostly jet pilots and a smaller number of helicopter pilots. The Marine Corps is part of the Navy but has separate aviation units that are split roughly evenly between helicopters and jets. The Army trains mostly helicopter pilots and a handful of fixed-wing pilots for specialized missions and executive transport. So, you have options in each branch of the military, but it depends on what type of pilot you want to be.
null
false
null
With the rise of plastics in the environment, are Legos still a good toy for kids?
Legos are popular among kids. They stimulate the mind and encourage kids to focus on details. They help exercise fine motor skills and enhance a child's ability to think creatively. Legos also help facilitate group activities through collaborative thinking. They are very adaptable and available for all ages and skill levels. That said, Legos do come with their fair share of challenges. To begin with, the material is plastic, which is not environmentally friendly. They are also very expensive, and there is a lack of buy-back or exchange programs. This makes Legos inaccessible for many kids and increases the amount of plastic waste. In conclusion, while Legos are a good play medium, we need to continue making them more accessible while reducing the environmental waste resulting from their production.
null
false
null
List toddlers' favorite TV shows in 2023
1. Doc McStuffins
2. PJ Masks
3. Blippi
4. Cocomelon
5. Spidey and his amazing friends
6. Bluey
7. Baby Shark
null
false
null
Identify which animal species is alive or extinct: Plesiosaur, Loggerhead Turtle
Plesiosaur is extinct, Loggerhead Turtle is alive.
null
false
38
We consider the dataset consisting of the entire collection of articles of the Wikipedia Medicine Portal, updated at the end of 2014. Wikipedia articles are written in the Media Wiki markup language, an HTML-like language. Among the structural elements of a page which differ from standard HTML pages, there are i) internal links, i.e., links to other Wikipedia pages (different from links to external resources); ii) categories, which represent the Media Wiki categories a page belongs to: they are encoded in the part of text within the Media Wiki “categories" tag in the page source; and iii) informative boxes, so-called “infoboxes", which summarize in a structured manner some peculiar pieces of information related to the topic of the article. The category values for the articles in the medical portal span over the ones listed at https://en.wikipedia.org/wiki/Portal:Medicine. Examples of categories, which appear at the bottom of each Wikipedia page, are in Fig. 1. Infoboxes of the medical portal feature medical content and standard coding. As an example, Fig. 2 shows the infobox in the Alzheimer's disease page of the portal. The infobox contains explanatory figures and text denoting peculiar characteristics of the disease and the value of the standard code of the disease (ICD9, as for the international classification of diseases). Thanks to WikiProject Medicine, the dataset of articles we collected from the Wikipedia Medicine Portal has been manually labeled into seven quality classes. They are ordered as Stub, Start, C, B, A, Good Article (GA), Featured Article (FA). The Featured and Good Article classes are the highest ones: to have those labels, an article requires a community consensus and an official review by selected editors, while the other labels can be achieved with reviews from a larger, even controlled, set of editors. Actually, none of the articles in the dataset is labeled as A; thus, in the following, we do not consider that class, restricting the investigation to six classes. At the date of our study, we were able to gather 24,362 rated documents. Remarkably, only a small percentage of them (1%) is labeled as GA or FA. Indeed, the distribution of the articles among the classes is highly skewed. There are very few (201) articles in the highest quality classes (FA and GA), while the vast majority (19,108) belongs to the lowest quality ones (Stub and Start). This holds not only for the medical portal: it is common across all of Wikipedia, where, on average, only one article in every thousand is a Featured one. In Section "Experiments and results", we will adopt a set of machine-learning classifiers to automatically label the articles into the quality classes. Dealing with imbalanced classes is a common situation in many real applications of classification learning: healthy patients over the population, fraudulent actions over daily genuine transactions, and so on. Without any countermeasure, common classifiers tend to correctly identify only articles belonging to the majority classes, clearly leading to severe mis-classification of the minority classes, since typical learning algorithms strive to maximize the overall prediction accuracy. To reduce the disequilibrium among the sizes of the classes, we first randomly sampled the articles belonging to the most populated classes. Then, we performed some further elaboration, as shown in the following.
Many studies have been conducted to improve learning algorithms' accuracy in the presence of imbalanced data BIBREF8. For the current work, we have considered one of the most popular approaches, namely Synthetic Sampling with Data Generation, detailed in BIBREF9. It consists of generating synthetic instances from the minority classes to balance the overall dataset. The approach has been broadly applied to problems relying on NLP features, see, e.g., BIBREF10. In our case, we resampled the input dataset by applying the Synthetic Minority Oversampling TEchnique (SMOTE), with percentages of 40% for GA and 180% for FA. Table 1 shows the number of articles in the dataset, divided per class, as well as the random samples we have considered for our study. The experiments presented in Section "Experiments and results" are based on the articles of the right-hand column in the table.
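A small sketch of SMOTE-style oversampling with the imbalanced-learn package; note that imblearn specifies oversampling as absolute per-class target counts rather than the percentage notation used above, and the synthetic feature matrix below merely stands in for the article feature vectors:

```python
from collections import Counter
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn
from sklearn.datasets import make_classification

# Toy imbalanced dataset standing in for the article features and quality classes.
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=4,
                           weights=[0.90, 0.07, 0.03], random_state=0)
print("before:", Counter(y))

# Interpolate new synthetic minority-class points between real neighbors.
smote = SMOTE(sampling_strategy={1: 200, 2: 150}, random_state=0)
X_res, y_res = smote.fit_resample(X, y)
print("after: ", Counter(y_res))
```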
How many rated documents did the authors gather?
They gathered 24,362 rated documents.
null
false
null
Why would someone want to drink sparkling water instead of flat?
That's a great question! There are a lot of people who have grown up drinking beverages that taste good but aren't necessarily good for their health, such as soda. As we age, we often start to consider our health more seriously. Sparkling water, like soda, has bubbles and gives a similar sensation, but it has fewer calories and less sugar, and is therefore better for your health.
1911.03705
true
null
It is true that the above-mentioned associated caption sentences for each concept-set are human-written and do describe scenes that cover all given concepts. However, they are created under specific contexts (i.e. an image or a video) and thus might be less representative of common sense. To better measure the quality and interpretability of generative reasoners, we need to evaluate them with scenes and rationales created by using concept-sets only as the signals for annotators. We collect more human-written scenes for each concept-set in the dev and test sets through crowd-sourcing via the Amazon Mechanical Turk platform. Each input concept-set is annotated by at least three different humans. The annotators are also required to give sentences as the rationales, which further encourages them to use common sense in creating their scenes. The crowd-sourced sentences correlate well with the associated captions, meaning that it is reasonable to use caption sentences as training data although they can be partly noisy. Additionally, we utilize a search engine over the OMCS corpus BIBREF16 for retrieving relevant propositions as distant rationales in training data.
Are the sentences in the dataset written by humans who were shown the concept-sets?
Yes.
1911.08829
false
null
We use only the written part of the BNC. From this, we extract a set of documents with the aim of having as much genre variation as possible. To achieve this, we select the first document in each genre, as defined by the classCode attribute (e.g. nonAc, commerce, letters). The resulting set of 46 documents makes up our base corpus. Note that these documents vary greatly in size, which means the resulting corpus is varied, but not balanced in terms of size (Table TABREF43). The documents are split across a development and test set, as specified at the end of Section SECREF46. We exclude documents with IDs starting with A0 from all annotation and evaluation procedures, as these were used during development of the extraction tool and annotation guidelines.
How big are the PIE datasets obtained from dictionaries?
The answers are shown as follows: * 46 documents makes up our base corpus
null
false
411
In this subsection, we perform GTA experiments on Cifar-100, which means we try to perturb Cifar-100 images to fool the target victim models. The models MobileNet-V3, VGG-16, ResNet-18, ResNet-34, SeResNet-26, and DenseNet-26 trained on Cifar-100 are used as victims to calculate the GTA success rate. All experimental results are reported in Table 2. In the first experiment (the first row of Table 2), we use ResNet-18 and Cifar-10 as the source model and the source dataset, respectively, and conduct GTA on the testing images from Cifar-100. It is observed that among all baselines, FGSM performs the best. A possible underlying reason is that adversarial perturbations generated by multi-step gradient ascent tend to overfit the source model and source dataset. It can also be seen that the proposed ICE outperforms existing methods on the GTA problem. For instance, compared with FGSM, the average attack success rate on the six target models is improved by about 11.5%. In the second experiment (the second row of Table 2), we consider the case where there are two source models, ResNet-18 and MobileNet-V1, trained on Cifar-10. Experimental results indicate that most of the baselines cannot leverage the additional source model MobileNet-V1 to boost their performance. In contrast, the proposed ICE can efficiently make use of the additional source model to improve its performance. For instance, by adding the MobileNet-V1 source model, the average attack success rate across the six models of FGSM is decreased by about 0.7% while the success rate of ICE is improved by about 11.6%. In the third experiment, we use two ResNet-18 models trained on Cifar-10 and $\text{Tiered}_T^{84}$ respectively as the source models, and use Cifar-10 and $\text{Tiered}_T^{84}$ as the source datasets. It is interesting that the baselines' performances in this experiment are commonly worse than their performances in the first experiment. The possible reason for this result is that the resolution of the images from $\text{Tiered}_T^{84}$ is 84×84, which differs greatly from the resolution of Cifar-100. As a comparison, ICE's performances in this experiment are much better than its performances in the first experiment, which indicates that ICE can efficiently make use of all the resources to improve performance in spite of the differences among the source datasets. In the fourth experiment, we use two ResNet-18 models respectively trained on Cifar-10 and $\text{Tiered}_V^{56}$ as the source models. It is observed that most of the baselines' performances in this experiment are slightly better than their performances in the third experiment. For instance, compared with its performance in the third experiment, the performance of FGSM in this experiment is improved by about 1.7%. The possible reason for this result is that, compared with the resolution of $\text{Tiered}_T^{84}$, the resolution of $\text{Tiered}_V^{56}$ is closer to the resolution of Cifar-100. In this experiment, the proposed ICE still outperforms all baselines with clear margins. In the fifth experiment, we use three ResNet-18 models respectively trained on Cifar-10, $\text{Tiered}_T^{84}$ and $\text{Tiered}_V^{56}$ as the source models. It is clear that ICE's performance can be further improved by using more source datasets, while the baselines cannot efficiently utilize the additional source models to achieve better performance. AEG performs the best among all baselines with an average attack success rate of 58.3%. Compared with AEG, ICE improves the average attack success rate by about 39.9%, which is a bigger margin than in the first experiment. This experiment further indicates that ICE is more effective than the baselines at leveraging all the resources to solve the GTA problem.
4.4 Generalized Transferable Attack to Cifar-10, $\text{Tiered}_T^{84}$, and $\text{Tiered}_V^{56}$
We have performed GTA on Cifar-100 in the previous subsection. Now we show that ICE still outperforms the baselines when using other datasets as target images. The table below reports the experimental results. There are four datasets in total (Cifar-10, Cifar-100, $\text{Tiered}_T^{84}$, and $\text{Tiered}_V^{56}$), and each row (denoted as -target) shows the experiment of conducting GTA on the target dataset by using ResNet-18 models trained on the other three datasets as source models. For example, the '-Cifar10' row denotes the experiment that utilizes the datasets Cifar-100, $\text{Tiered}_T^{84}$, and $\text{Tiered}_V^{56}$ and the three respectively trained ResNet-18 models to conduct GTA on the images from Cifar-10. The table does not show the '-Cifar100' row because the corresponding results are shown in the last row of Table 2. It is clear that given three datasets and the models trained on them, ICE performs the best at attacking unknown images from the remaining dataset.
Table 2: GTA success rates on Cifar-100
What is ICE’s advantage over the baseline methods when using a single model and dataset (i.e. the first set of rows in Table 2)?
Table 2 shows that when given a single source model and dataset, ICE also outperforms the baselines. The possible reason is that ICE is trained to perturb source images without knowing their labels, which forces ICE to learn a more generalizable perturbation pattern; this preserves ICE's advantage regardless of the number of source models and datasets.
null
false
null
What is Lisp?
Lisp is a family of programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1960, Lisp is the second-oldest high-level programming language still in common use, after Fortran.
null
false
null
Identify which instrument is string or percussion: Bonang, Chiwang
Chiwang is string, Bonang is percussion.
null
false
null
"Ghouli" is the fifth episode of the eleventh season of the American science fiction television series The X-Files. The episode was written and directed by James Wong. The tagline for this episode is "You see what I want you to see". Though not originally billed as a mythology episode, "Ghouli" helps to explore the series' overarching mythology and serves as the second of three mythology episodes of the season following the season premiere. The show centers on FBI special agents who work on unsolved paranormal cases called X-Files; focusing on the investigations of Fox Mulder (David Duchovny), and Dana Scully (Gillian Anderson) after their reinstatement in the FBI. In this episode, A pair of teenage girls attack one another, each believing the other to be a monster, known as "Ghouli." Mulder and Scully find that their investigation leads back to their long-lost son, William.
Given a reference text about "Ghouli" tell me what happens in the episode.
In the episode "Ghouli", Fox Mulder and Dana Scully investigate a pair of teenage girls that attack one another believing the other is a monster.
null
false
null
Muckle Water is a long, narrow fresh water loch on Ward Hill on Rousay, Orkney, Scotland. It is the biggest loch on the island and is popular for fishing. It can be reached by a track from the roadside. The Suso Burn on the north eastern shore drains the loch into the Sound of Rousay.
Where is Muckle Water?
Muckle Water is located in Rousay, Orkney, Scotland.
1609.01962
false
null
We use Gaussian Processes as this probabilistic kernelised framework avoids the need for expensive cross-validation for hyperparameter selection. Instead, the marginal likelihood of the data can be used for hyperparameter selection.
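A brief sketch of this selection mechanism with scikit-learn's Gaussian process classifier, whose kernel hyperparameters are chosen by maximizing the log marginal likelihood during fitting; the dataset and kernel are toy assumptions:

```python
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# Kernel hyperparameters (here the RBF length scale) are tuned by maximizing the
# log marginal likelihood during fit -- no cross-validation grid search needed.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpc.fit(X, y)
print(gpc.kernel_)                                     # optimized hyperparameters
print(gpc.log_marginal_likelihood(gpc.kernel_.theta))  # value used for selection
```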
Why is a Gaussian process an especially appropriate method for this classification problem?
The answers are shown as follows: * avoids the need for expensive cross-validation for hyperparameter selection
null
false
null
What is paella?
Paella is a dish that is very popular in Spain. It is made with rice and saffron, and usually comes with fish or chicken.
null
false
84
Our experiments evaluate the effect of shallow syntax, via contextualization (mSynC, §SECREF2) and features (§SECREF3). We provide comparisons with four baselines—ELMo-transformer BIBREF0, our reimplementation of the same, as well as two cwr-free baselines, with and without shallow syntactic features. Both ELMo-transformer and mSynC are trained on the 1B word benchmark corpus BIBREF19; the latter also employs chunk annotations (§SECREF2). Experimental settings are detailed in Appendix §SECREF22.
Fine-grained NER: The same architecture and tagging scheme from above is also used to predict fine-grained entity annotations from OntoNotes 5.0 (Weischedel et al., 2011). There are 18 fine-grained NER labels in the dataset, including regular named entities as well as entities such as date, time and common numerical entries.
Are there 18 fine-grained NER labels in fine-grained NER?
Yes.
null
false
null
Why is Spark faster than MapReduce?
The main difference between Spark and MapReduce is that Spark processes and retains data in memory for subsequent steps, whereas MapReduce writes intermediate results back to disk between steps.
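A short PySpark sketch of this difference: caching keeps a DataFrame in memory so later actions avoid recomputing from disk; the input file and column name below are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()
df = spark.read.csv("events.csv", header=True)  # hypothetical input file

df = df.filter(df["status"] == "ok").cache()  # keep the filtered data in memory
print(df.count())   # first action materializes and caches the data
print(df.count())   # second action is served from memory, not recomputed from disk

spark.stop()
```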
null
false
null
What color is a banana when overripe?
A banana will become splotchy yellow and brown when overripe.
null
false
null
Ukrainian Chorus Dumka of New York was founded in 1949 with the goal "to preserve and cultivate the rich musical heritage of Ukraine", both for the church and for secular occasions. In the beginning, the chorus was a men's chorus of Ukrainian immigrants who met to sing music they loved. The first music director was L. Krushelnycky. The group became a mixed choir in 1959. They have performed in New York at locations including in Alice Tully Hall, Avery Fisher Hall, Brooklyn Academy of Music, Carnegie Hall, Madison Square Garden, St. Patrick's Cathedral, and Town Hall. They toured to the Kennedy Center in Washington, and in several European capitals. In 1990, the chorus toured Ukraine for the first time, singing in Kyiv, Lviv, Poltava, and Kaniv. They made recordings of both church and secular music.
Using the two paragraphs below, when was the Ukrainian Chorus Dumka of NY founded, and when did it play in Ukraine for the first time?
The Ukrainian Chorus Dumka of New York was founded in 1949. It toured Ukraine for the first time in 1990.
null
false
null
What creates moon phases and how is it different from an eclipse?
The moonlight we see on Earth is sunlight reflecting off the Moon's surface. How much of the Moon we see changes day to day; these changes are called lunar phases. The Moon orbits Earth and Earth orbits the Sun, so everything is moving. The phases of the moon are actually just a result of our perception of the moon's half-illuminated surface. When the moon does pass through Earth's shadow, the result is a lunar eclipse.

The phases of the moon are the changes in the amount of the moon's surface that is illuminated by the Sun from the perspective of Earth. When the surface of the side of the moon facing Earth is completely illuminated, we see a full moon. When none of the surface is illuminated and we can't see the moon at all, the phase is a new moon. When half the side of the moon facing Earth is lit up, the phases are called the first and third quarters. When the visible moon appears to be getting bigger, we say it is waxing; when the visible lighted surface seems to be getting smaller, we say the moon is waning. When less than half of the visible moon is lit, it's called a crescent, and when it's more than half, it's called a gibbous.

During a lunar eclipse, Earth comes between the Sun and the Moon, blocking the sunlight falling on the Moon. Earth's shadow covers all or part of the lunar surface.
null
false
null
Despite its theoretical importance, critics of MPT question whether it is an ideal investment tool, because its model of financial markets does not match the real world in many ways.

The risk, return, and correlation measures used by MPT are based on [expected values](https://en.wikipedia.org/wiki/Expected_value), which means that they are statistical statements about the future (the expected value of returns is explicit in the above equations, and implicit in the definitions of [variance](https://en.wikipedia.org/wiki/Variance) and [covariance](https://en.wikipedia.org/wiki/Covariance)). Such measures often cannot capture the true statistical features of the risk and return, which often follow highly skewed distributions (e.g. the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution)) and can give rise to, besides reduced [volatility](https://en.wikipedia.org/wiki/Volatility_(finance)), also inflated growth of return. In practice, investors must substitute predictions based on historical measurements of asset return and volatility for these values in the equations. Very often such expected values fail to take account of new circumstances that did not exist when the historical data were generated.

More fundamentally, investors are stuck with estimating key parameters from past market data because MPT attempts to model risk in terms of the likelihood of losses, but says nothing about why those losses might occur. The risk measurements used are [probabilistic](https://en.wikipedia.org/wiki/Probability) in nature, not structural. This is a major difference as compared to many engineering approaches to [risk management](https://en.wikipedia.org/wiki/Risk_management).

Mathematical risk measurements are also useful only to the degree that they reflect investors' true concerns - there is no point minimizing a variable that nobody cares about in practice. In particular, [variance](https://en.wikipedia.org/wiki/Variance) is a symmetric measure that counts abnormally high returns as just as risky as abnormally low returns. The psychological phenomenon of [loss aversion](https://en.wikipedia.org/wiki/Loss_aversion) is the idea that investors are more concerned about losses than gains, meaning that our intuitive concept of risk is fundamentally asymmetric in nature. There are many other risk measures (like [coherent risk measures](https://en.wikipedia.org/wiki/Coherent_risk_measure)) that might better reflect investors' true preferences.

Modern portfolio theory has also been criticized because it assumes that returns follow a [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution). Already in the 1960s, [Benoit Mandelbrot](https://en.wikipedia.org/wiki/Benoit_Mandelbrot) and [Eugene Fama](https://en.wikipedia.org/wiki/Eugene_Fama) showed the inadequacy of this assumption and proposed the use of more general [stable distributions](https://en.wikipedia.org/wiki/Stable_distributions) instead. [Stefan Mittnik](https://en.wikipedia.org/wiki/Stefan_Mittnik) and [Svetlozar Rachev](https://en.wikipedia.org/wiki/Svetlozar_Rachev) presented strategies for deriving optimal portfolios in such settings.

[Contrarian investors](https://en.wikipedia.org/wiki/Contrarian_investing) and [value investors](https://en.wikipedia.org/wiki/Value_investing) typically do not subscribe to Modern Portfolio Theory. One objection is that MPT relies on the [efficient-market hypothesis](https://en.wikipedia.org/wiki/Efficient-market_hypothesis) and uses fluctuations in share price as a substitute for risk. [Sir John Templeton](https://en.wikipedia.org/wiki/Sir_John_Templeton) believed in diversification as a concept, but also felt the theoretical foundations of MPT were questionable, and concluded (as described by a biographer): "the notion that building portfolios on the basis of unreliable and irrelevant statistical inputs, such as historical volatility, was doomed to failure."
Extract the criticisms that modern portfolio theory faces from this link https://en.wikipedia.org/wiki/Modern_portfolio_theory, place them in a bullet list
- The risk, return, and correlation measures used by Modern Portfolio Theory (MPT) are based on expected values; such expected values fail to take account of new circumstances that did not exist when the historical data were generated
- The risk measurements used in MPT are probabilistic because MPT models risk in terms of the likelihood of losses based on past market data and not why the losses occur
- MPT attempts to minimize risks in the form of variance. However, this metric doesn't reflect investors' true concerns in practice. Variance is symmetric, so it punishes high returns the same way as high losses. However, investors care more about loss than gain, so the perception of risk is asymmetric in nature. Coherent risk measures should be a better metric that reflects investors' preferences
- MPT assumes that returns follow a Gaussian distribution. However, many scholars suggest that returns might follow other distributions such as stable distributions
- MPT relies on the efficient-market hypothesis and assumes that share price fluctuation is a risk measure. However, building a portfolio based on historical volatility is a questionable premise criticized by Sir John Templeton
null
false
null
Enrico Silvestrin (born 30 May 1972) is an Italian actor, television and radio presenter and musician.
Life and career: Born in Rome, after a supporting role in Michele Placido's Close Friends, in 1994 Silvestrin started working as a VJ on MTV Europe, and later on MTV Italy. In 2000, he refused Gabriele Muccino's offer of a starring role in The Last Kiss, preferring a role in the Italia 1 sitcom Via Zanardi 33, which eventually turned out to be a flop. In 2005 Silvestrin left MTV to focus on his acting career. He then appeared in a number of films and TV series, achieving personal success with the role of Alessandro Berti in the Canale 5 crime series Distretto di Polizia. In 2007 he presented the Festivalbar. In 2018 he competed in Grande Fratello VIP, the Italian adaptation of Celebrity Big Brother. He was the third contestant to be evicted, on day 22. Silvestrin is also active as a musician. He has been part of the musical groups Agony in Cage, Tempus Fugit, Ilor and Silv3man. He is the nephew, on his mother's side, of actor and playwright Peppino De Filippo.
Given this article about Enrico Silvestrin, in what year was he evicted from Celebrity Big Brother?
2018
null
false
null
Is cacao good for you?
Cacao is very good for the body. It is a good source of antioxidants and other minerals. It has manganese, iron, magnesium, phosphorous, potassium, zinc, and selenium, all elements the body needs. There are 5 grams of protein in one ounce of cacao, and some of the amino acids in the cacao protein help with regulating mood and sleep. Cacao may also help with cognitive function, cardiovascular health, and gut health.
null
false
null
Radiative heat transfer is the transfer of energy via thermal radiation, i.e., electromagnetic waves. It occurs across a vacuum or any transparent medium (solid, fluid or gas). Thermal radiation is emitted by all objects at temperatures above absolute zero, due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference.
From the passage provided, provide the two scenarios in bullet form where radiative heat transfer is an important consideration.
Radiative heat transfer is an important consideration in (a short worked example follows below):
- Engineering applications for very hot objects
- Engineering applications for objects with a large temperature difference
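As a rough sanity check on why radiation matters mainly for very hot objects or large temperature differences, here is a sketch using the Stefan-Boltzmann law; the emissivity, area and temperatures are illustrative assumptions:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def net_radiative_power(emissivity, area_m2, t_surface_k, t_surroundings_k):
    """Net power radiated by a surface to its surroundings (grey-body model)."""
    return emissivity * SIGMA * area_m2 * (t_surface_k**4 - t_surroundings_k**4)

# A very hot object (800 K furnace wall) vs. a warm one (320 K), both in a 300 K room:
print(net_radiative_power(0.9, 1.0, 800, 300))  # ~20,500 W/m^2 -> radiation dominates
print(net_radiative_power(0.9, 1.0, 320, 300))  # ~122 W/m^2    -> often negligible
```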
null
false
null
What is the best hand in poker?
The best hand possible in poker is a Royal Flush. This hand consists of a Ten, Jack, Queen, King, and Ace, all from the same suit.
null
false
null
Identify which animal species is alive or extinct: Simbakubwa, Arctic Wolf
Arctic Wolf is alive, Simbakubwa is extinct.
null
false
null
What is MACRO-economics?
Macroeconomics can be thought of as the “big picture” of economics. It focuses on aggregate production and consumption in an economy. Topics that macroeconomists might study include:
- Effects of general taxes such as income and sales taxes on output and prices
- Causes of economic upswings and downturns
- Effects of monetary and fiscal policy on economic health
- Effects of and process for determining interest rates
- Causes for some economies growing faster than others
null
false
null
Where is the best place to go to relax?
Some places you can go to relax are the beach or an all-inclusive resort. Leaving work at home while on a staycation is another way to relax. Do activities that help you have fun and take your mind off work, like boating, whale watching, deep-sea fishing, or riding a jet ski. Quiet time also helps: try relaxing in a chair on the beach with a good book.
null
false
null
Identify which instrument is string or percussion: Derkach, Lyra
Lyra is string, Derkach is percussion.
null
false
null
What is the best place to visit in India?
- Sikkim
- Munnar
- Ladakh
- Jaipur
- Alleppey
null
false
null
The Maurice "Rocket" Richard Trophy, also known as the Rocket Richard Trophy, is awarded annually to the leading goal scorer in the National Hockey League (NHL). It was donated to the NHL by the Montreal Canadiens in 1998–99 and is named in honour of legendary Montreal Canadiens right winger Maurice "Rocket" Richard. First won by Teemu Selanne, it is currently held by Auston Matthews, who scored 60 goals during the 2021–22 NHL season.
What is the Maurice Richard Trophy?
The Maurice "Rocket" Richard Trophy, also known as the Rocket Richard Trophy, is awarded annually to the leading goal scorer in the National Hockey League (NHL). It was donated to the NHL by the Montreal Canadiens in 1998–99 and is named in honour of legendary Montreal Canadiens right winger Maurice "Rocket" Richard. First won by Teemu Selanne, it is currently held by Auston Matthews, who scored 60 goals during the 2021–22 NHL season.
null
false
null
Samuel Wesley (24 February 1766 – 11 October 1837) was an English organist and composer in the late Georgian period. Wesley was a contemporary of Mozart (1756–1791) and was called by some "the English Mozart". Born in Bristol, he was the son of noted Methodist and hymnodist Charles Wesley, the grandson of Samuel Wesley (a poet of the late Stuart period) and the nephew of John Wesley, the founder of the Methodist Church.
Was Samuel Wesley the founder of the Methodist Church?
No, John Wesley, his uncle, was the founder of the Methodist Church.
null
false
null
What is the KPMG company?
KPMG stands for Klynveld Peat Marwick Goerdeler. They are one of the world's leading accounting firms. They were first founded in 1881, and they have more than 200,000 professionals operating in offices all over the world, in over 153 different countries. KPMG offers audit services and tax advisory services to large companies, medium-sized companies and many companies in the public sector.
null
false
328
The whole network is trained through back propagation with stochastic gradient descent BIBREF19. We obtain a pre-trained lookup table by applying Word2Vec BIBREF20 on the Google News corpus, which contains more than 100B words with a vocabulary size of about 3M. All involved parameters are randomly initialized from a truncated normal distribution with zero mean and standard deviation. For each task INLINEFORM0, we conduct TOS with INLINEFORM1 to improve its performance. After training our model on the generated sample collections, we evaluate the performance of task INLINEFORM2 by comparing INLINEFORM3 and INLINEFORM4 on the test set. We apply 10-fold cross-validation and different combinations of hyperparameters are investigated, of which the best one, as shown in Table TABREF41, is reserved for comparisons with state-of-the-art models. We obtain a pretrained lookup table by applying Word2Vec [Mikolov et al., 2013a] on the Google News corpus, which contains more than 100B words with a vocabulary size of about 3M.
How to obtain a pre-trained lookup table?
The authors obtain a pretrained lookup table by applying Word2Vec on the Google News corpus.
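A minimal sketch of how such a lookup table could be built with the gensim library, assuming the publicly released Google News Word2Vec binary is available locally (the file name below is the usual public release, not something specified in the paper):

```python
from gensim.models import KeyedVectors

# The public Word2Vec release trained on Google News (~100B words, ~3M vocabulary).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

lookup_table = vectors.vectors      # (vocab_size, 300) embedding matrix
word_to_row = vectors.key_to_index  # word -> row index in the table

print(lookup_table.shape)
print(vectors["language"][:5])      # first 5 dimensions of one embedding
```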
null
false
null
The International Cricket Council (ICC) is the global governing body of cricket. It was founded as the Imperial Cricket Conference in 1909 by representatives from Australia, England and South Africa. It was renamed as the International Cricket Conference in 1965, and took up its current name in 1987. The ICC has its headquarters in Dubai, United Arab Emirates. The ICC has 108 member nations currently: 12 Full Members that play Test matches, and 96 Associate Members. The ICC is responsible for the organisation and governance of cricket's major international tournaments, most notably the Cricket World Cup and the T20 World Cup. It also appoints the umpires and referees that officiate at all sanctioned Test matches, One Day Internationals and Twenty20 Internationals. It promulgates the ICC Code of Conduct, which sets professional standards of discipline for international cricket, and also co-ordinates action against corruption and match-fixing through its Anti-Corruption and Security Unit (ACSU).
What is the ICC?
ICC stands for International Cricket Council, and it is the global governing body of cricket. The ICC is responsible for the governance of all major international cricket tournaments.
null
false
129
Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 . This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions: (1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part. (2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before. (3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset. Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset.
By which method do the authors combine CNNs and RNNs, and what have they achieved?
They use a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset.
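The paper's exact voting details are not given in this excerpt; one simple instantiation of such a scheme is majority voting over the per-model predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model relation labels for one sentence by simple voting."""
    label, _ = Counter(predictions).most_common(1)[0]
    return label

# e.g., two CNN variants and a bi-directional RNN disagree:
print(majority_vote(["Entity-Destination(e1,e2)",
                     "Entity-Destination(e1,e2)",
                     "Cause-Effect(e1,e2)"]))
```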
null
false
null
I love nature. What can I do in Boppard, Germany?
Boppard is a beautiful town in the Upper Middle Rhine Valley between Bingen and Koblenz. It is surrounded by stunning natural landscapes. If you love nature, there are a few things you can do in Boppard:
1. Do the “Klettersteig Boppard”. The “Klettersteig Boppard” is a via ferrata climbing route that lets you explore the Rhine Valley in a challenging but unique way.
2. Climb the “Ripp”: The “Ripp” is a trail that starts in Boppard and goes up all the way to a lookout point called “Vierseenblick” (see 3).
3. Visit the “Vierseenblick”: The Vierseenblick is a lookout point from which the Rhine appears to be divided into four lakes, hence the German name “vier Seen” (“four lakes”).
4. Hike the “Rheinburgenweg” (“Rhine castles trail”): The “Rheinburgenweg” is a scenic hiking trail that runs along the Rhine River and offers breathtaking views of the Rhine Valley, including old castles on the other side of the valley. You can start the trail in Boppard and hike north or south.
These are just a few ideas, but there are many other nature-related activities and attractions in Boppard that you might enjoy. For more input, you can search for "Fünfseenblick lookout", "The Loreley Rock", "Geierlay suspension bridge", or "Bacharach".
null
false
null
What was Angie Thomas's first published novel?
"The Hate U Give" which was released in 2017 and debuted at number one on The New York Times Best Seller list.
null
false
null
Heroes

Hank, the Ranger (voiced by Willie Aames): At 15 years of age, he is the leader of the group. Hank is brave and noble, maintaining a focus and determination even when presented with grave danger. Hank is a Ranger, with a magical energy bow that shoots arrows of glowing energy. These arrows can be used in many different ways such as a climbing tool, to hurt enemies, to bind them, to create light, or to form temporary makeshift cages.

Eric, the Cavalier (voiced by Don Most): The Cavalier, age 15, is the spoiled child, originating from a rich home. On the surface, Eric is a big-mouthed comic relief coward. Eric has a heroic core, and frequently saves his friends from danger with his magical Griffon Shield, which can project a force field. Despite his aloofness and several instances of selfishness, Eric shares the common camaraderie of the group, and occasionally steps to the fore as a substitute leader in Hank's absence.

Diana, the Acrobat (voiced by Tonia Gayle Smith): Diana is a brave, athletic, and outspoken 14-year-old girl. She is an Acrobat who carries the Javelin Staff, which can change size to suit her needs and be easily reconstructed if broken. Diana is also known to provide inspiration, guidance and support for her friends at times of peril or worry.

Presto, the Magician (voiced by Adam Rich): The 14-year-old Wizard of the team. Friendly and fiercely loyal to all in the group, Presto fulfills the role of the well-meaning, diligent magic user whose spells frequently, though not always, either fail or produce unintended results.

Sheila, the Thief (voiced by Katie Leigh): As the Thief, Sheila, aged 13, has the Cloak of Invisibility which makes her invisible when the hood is raised over her head. Although occasionally emotionally vulnerable and with a great fear of being alone in the realm, Sheila regularly utilizes the stealth attributes of her cloak at great peril to herself for the benefit of the common goals of her group.

Bobby, the Barbarian (voiced by Ted Field III): Bobby is the youngest member of the team at nine years old and the younger brother of Sheila. He is the Barbarian, as indicated by his fur pants and boots, horned helmet, and cross belt harness. Brash, brave and selfless but occasionally impulsive, Bobby's personality frequently puts himself and his friends in danger. His weapon saves the protagonists from peril on numerous occasions.
Given the list below, extract the heroes' names, ages, and who voices them, in the format {Hero name} ({age in digits}) - {voiced by name}. Separate them by a newline.
Hank (15) - Willie Aames
Eric (15) - Don Most
Diana (14) - Tonia Gayle Smith
Presto (14) - Adam Rich
Sheila (13) - Katie Leigh
Bobby (9) - Ted Field III
null
false
null
Who started the Reformation?
The Reformation was started by Martin Luther.
null
false
null
The origins of personality assessment date back to the 18th and 19th centuries, when personality was assessed through phrenology, the measurement of bumps on the human skull, and physiognomy, which assessed personality based on a person's outer appearances. Sir Francis Galton took another approach to assessing personality late in the 19th century. Based on the lexical hypothesis, Galton estimated the number of adjectives that described personality in the English dictionary. Galton's list was eventually refined by Louis Leon Thurstone to 60 words that were commonly used for describing personality at the time. Through factor analyzing responses from 1300 participants, Thurstone was able to reduce this severely restricted pool of 60 adjectives into seven common factors. This procedure of factor analyzing common adjectives was later utilized by Raymond Cattell (7th most highly cited psychologist of the 20th century, based on the peer-reviewed journal literature), who subsequently utilized a data set of over 4000 affect terms from the English dictionary that eventually resulted in the construction of the Sixteen Personality Factor Questionnaire (16PF), which also measured up to eight second-stratum personality factors. Of the many introspective (i.e., subjective) self-report instruments constructed to measure the putative Big Five personality dimensions, perhaps the most popular has been the Revised NEO Personality Inventory (NEO-PI-R). However, the psychometric properties of the NEO-PI-R (including its factor analytic/construct validity) have been severely criticized.
What is a personality test?
A personality test is a method of measuring human personality constructs. Its origins date back to the 18th century. There are many different types of personality assessment measures. These tests are designed to draw knowledge about an individual's interests, interaction style and many other things.
null
false
null
Classify each of the following as Toy or Tool: Garden Shovel, Golf Club, Baseball Bat, Hammer, Saw.
A garden shovel is a tool. A golf club is a toy. A baseball bat is a toy. A hammer is a tool. A saw is a tool.
null
false
null
Should you buy new golf clubs when you are a beginner?
It is recommended that you get custom fitted for your golf clubs. However, there are many used golf clubs available to purchase and it can be a great option if you are just exploring the sport. You can also consider renting golf clubs from your local golf course if you are not ready for the expense of buying new or used golf clubs.
null
false
211
Events are a kind of important objective information of the world. Structuralizing and representing such information as machine-readable knowledge are crucial to artificial intelligence BIBREF0, BIBREF1. The main idea is to learn distributed representations for structured events (i.e. event embeddings) from text, and use them as the basis to induce textual features for downstream applications, such as script event prediction and stock market prediction. Parameterized additive models are among the most widely used for learning distributed event representations in prior work BIBREF2, BIBREF3, which pass the concatenation or addition of event arguments' word embeddings to a parameterized function. The function maps the summed vectors into an event embedding space. Furthermore, BIBREF4 ding2015deep and BIBREF5 weber2018event propose using neural tensor networks to perform semantic composition of event arguments, which can better capture the interactions between event arguments. This line of work only captures shallow event semantics, and is not capable of distinguishing events with subtle differences. On the one hand, the obtained event embeddings cannot capture the relationship between events that are syntactically or semantically similar if they do not share similar word vectors. For example, as shown in Figure FIGREF2 (a), “PersonX threw bomb” and “PersonZ attacked embassy”. On the other hand, two events with similar word embeddings may have similar embeddings despite being quite unrelated, for example, as shown in Figure FIGREF2 (b), “PersonX broke record” and “PersonY broke vase”. Note that in this paper, similar events generally refer to events with strong semantic relationships rather than just the same events. One important reason for the problem is the lack of external commonsense knowledge about the mental state of event participants when learning the objective event representations. In Figure FIGREF2 (a), the two event participants “PersonY” and “PersonZ” may carry out a terrorist attack, and hence they have the same intent, “to bloodshed”, which can help the representation learning model map the two events into a neighboring vector space. In Figure FIGREF2 (b), a change to a single argument leads to a large semantic shift in the event representations, as the change of an argument can result in different emotions of event participants. Whoever “broke the record” is likely to be happy, while whoever “broke a vase” may be sad. Hence, intent and sentiment can be used to learn more fine-grained semantic features for event embeddings. Such commonsense knowledge is not explicitly expressed but can be found in a knowledge base such as Event2Mind BIBREF6 and ATOMIC BIBREF7. Thus, we aim to incorporate the external commonsense knowledge, i.e., intent and sentiment, into the learning process to generate better event representations. Specifically, we propose a simple and effective model to jointly embed events, intents and emotions into the same vector space. A neural tensor network is used to learn baseline event embeddings, and we define a corresponding loss function to incorporate intent and sentiment information. Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on the hard similarity small and big datasets, respectively. With better embeddings, we can achieve superior performances on script event prediction and stock market prediction compared to state-of-the-art baseline methods.
Specifically, we propose a simple and effective model to jointly embed events, intents and emotions into the same vector space.
What kind of model do they propose?
A model that can jointly embed events, intents and emotions into the same vector space.
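As a rough illustration of the neural tensor composition mentioned in the evidence, here is a minimal NumPy sketch; the dimensions, random initialization and tanh nonlinearity are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 8                                # embedding dim, tensor slices (illustrative)

W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor
V = rng.normal(scale=0.1, size=(k, 2 * d))  # standard feedforward layer
b = np.zeros(k)

def ntn_compose(e1, e2):
    """Compose two argument embeddings into a k-dim interaction vector."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)  # e1^T W[s] e2 for each slice s
    linear = V @ np.concatenate([e1, e2])
    return np.tanh(bilinear + linear + b)

subj, pred = rng.normal(size=d), rng.normal(size=d)
print(ntn_compose(subj, pred).shape)  # (8,)
```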
null
false
null
Can you explain the lyrics “His whole back's to these ropes, it don't matter, he's dope” in the song lose yourself by Eminem?
“Back’s to these ropes” is another way of saying that Rabbit, the main character of the song, is in a difficult situation; his back is against the wall. Eminem is comparing the rap battle that Rabbit takes part in at the end of 8 Mile to the sport of boxing, where the ring is surrounded by ropes.
null
false
null
What is a shogi?
Shogi is a board game often called Japanese chess. Unlike in chess, you can use the pieces you capture as your own. There are 81 squares on the board (9x9), and there are more possible patterns than in chess. There are professional players in Japan, and research using AI is becoming very active.
null
false
null
Mars (1.382–1.666 AU (206.7–249.2 million km; 128.5–154.9 million mi) from the Sun) is smaller than Earth and Venus (0.107 Earth masses). It has an atmosphere of mostly carbon dioxide with a surface pressure of 6.1 millibars (0.088 psi; 0.18 inHg); roughly 0.6% of that of Earth but sufficient to support weather phenomena. Its surface, peppered with volcanoes, such as Olympus Mons, and rift valleys, such as Valles Marineris, shows geological activity that may have persisted until as recently as 2 million years ago. Its red colour comes from iron oxide (rust) in its soil. Mars has two tiny natural satellites (Deimos and Phobos) thought to be either captured asteroids, or ejected debris from a massive impact early in Mars's history.
From the passage provided, extract the names of moons/natural satellites of Mars.
Mars has two moons/natural satellites: Deimos and Phobos.
1908.01060
true
null
We first describe our corpus collection. Table TABREF3 lists all corpora we used in the experiments. There are 16 corpora from 10 languages. To increase the variety of corpora, we selected 4 English corpora and 4 Mandarin corpora in addition to the low-resource language corpora. As the target of this experiment is low-resource speech recognition, we only randomly select 100,000 utterances even if there are more in each corpus. All corpora are available in LDC, Voxforge, OpenSLR or other public websites. Each corpus is manually assigned one domain based on its speech style. Specifically, the domain candidates are telephone, read and broadcast. To increase the variety of corpus, we selected 4 English corpora and 4 Mandarin corpora in addition to the low resource language corpora.
Do they test their approach on large-resource tasks?
Yes.
null
false
18
Neural Machine Translation (NMT) has shown its effectiveness in translation tasks, as NMT systems have performed best in recent machine translation campaigns BIBREF0, BIBREF1. Compared to phrase-based Statistical Machine Translation (SMT), which is basically an ensemble of different features trained and tuned separately, NMT directly models the translation relationship between source and target sentences. Unlike SMT, NMT does not require much linguistic information and large monolingual data to achieve good performance. An NMT consists of an encoder which recursively reads and represents the whole source sentence into a context vector, and a recurrent decoder which takes the context vector and its previous state to predict the next target word. It is then trained in an end-to-end fashion to learn parameters which maximize the likelihood between the outputs and the references. Recently, attention-based NMT has been featured in most state-of-the-art systems. First introduced by BIBREF2, the attention mechanism is integrated on the decoder side as feedforward layers. It allows the NMT to decide which source words should take part in predicting the next target words. It helps to improve NMT systems significantly. Nevertheless, since the attention mechanism is specific to a particular source sentence and the target word under consideration, it is also specific to particular language pairs. Some recent work has focused on extending the NMT framework to multilingual scenarios. By training such a network using parallel corpora in a number of different languages, NMT could benefit from additional information embedded in a common semantic space across languages. Basically, the proposed NMTs are required to employ multiple encoders or multiple decoders to deal with multilinguality. Furthermore, in order to avoid the tight dependency of the attention mechanism on specific language pairs, they also need to modify their architecture to combine either the encoders or the attention layers. These modifications are specific to the purpose of the tasks as well. Thus, those multilingual NMTs are more complicated, have many more free parameters to learn, and are more difficult to train in a standard way compared to the original NMT. In this paper, we introduce a unified approach to seamlessly extend the original NMT to multilingual settings. Our approach allows us to integrate any language on either side of the encoder-decoder architecture with only one encoder and one decoder for all the languages involved. Moreover, it is not necessary to do any network modification to enable the attention mechanism in our NMT systems. We then apply our proposed framework in two demanding scenarios: under-resourced translation and zero-resourced translation. The results show that bringing multilinguality to NMT helps to improve individual translations. With some insightful analyses of the results, we set our goal toward a fully multilingual NMT framework. The paper starts with a detailed introduction to attention-based NMT. In Section SECREF3, related work about multi-task NMT is reviewed. Section SECREF5 describes our proposed approach and thorough comparisons to the related work. It is followed by a section evaluating our systems in the two aforementioned scenarios, in which different strategies have been employed under a unified approach (Section SECREF4). Finally, the paper ends with conclusion and future work. This work is licensed under a Creative Commons Attribution 4.0 International License.
License details: http://creativecommons.org/licenses/by/4.0/ The paper starts with a detailed introduction to attention-based NMT. In Section 3.1, related work about multi-task NMT is reviewed. Section 3.2 describes our proposed approach and thorough comparisons to the related work. It is followed by a section of evaluating our systems in two aforementioned scenarios, in which different strategies have been employed under a unified approach (Section 4). Finally, the paper ends with conclusion and future work.
Is there a comparison in the paper between their approach and others'?
Yes.
null
false
null
List of Greek goddesses starting with 'L':
Here is the list of Greek goddesses starting with 'L':
- Lampad
- Lampetia
- Lampsace
- Lethe
- Leto
- Libya
- Limos
- Litae
- Lyssa
null
false
null
Why should we conserve electricity?
When you conserve electricity, you are actually helping the environment. To see why, consider the way electricity is produced: by steam generation, solar panels, dam power generators, windmills, gas turbines, and nuclear plants. Most of these methods use fossil fuels or other fuels that release greenhouse gas emissions during the electricity production process.
null
false
null
Modern Talking was a German pop music duo consisting of arranger, songwriter and producer Dieter Bohlen and singer Thomas Anders. They have been referred to as Germany's most successful pop duo, and have had a number of hit singles, reaching the top five in many countries. Their most popular singles are "You're My Heart, You're My Soul", "You Can Win If You Want", "Cheri, Cheri Lady", "Brother Louie", "Atlantis Is Calling (S.O.S. for Love)" and "Geronimo's Cadillac". Modern Talking worked together from 1983 to 1987, then the band disbanded. In 1998, they reunited and made a successful comeback, recording and releasing music from 1998 to 2003. The duo released singles (many of which involved American rapper Eric Singleton) which again entered the top ten in Germany and abroad, one of which was the re-recorded version of "You're My Heart, You're My Soul '98". After the duo's second and final break-up in 2003, their global sales had reached 120 million singles and albums combined.
What is Modern Talking?
First formed in West Berlin in early 1983, they unexpectedly became popular in the beginning of 1985 with "You're My Heart, You're My Soul", with which they occupied top ten positions in 35 countries including their homeland, where the single perched at the top for six consecutive weeks; the single eventually went on to sell eight million copies worldwide. The track was then followed by another number-one hit, "You Can Win If You Want", which was released in the middle of 1985 from the debut album The 1st Album. The album was certified platinum in Germany for selling over 500,000 units. Soon after their second hit, Modern Talking released the single "Cheri, Cheri Lady", which also quickly climbed to the top of the charts in West Germany, Switzerland, Austria and Norway, meanwhile entering the top ten in Sweden and the Netherlands. The single, being the only track released from their second album Let's Talk About Love, managed to push the album to platinum status in West Germany for sales of over 500,000. The success continued with another two number one singles, "Brother Louie" and "Atlantis Is Calling (S.O.S. for Love)", both from the third album, Ready for Romance. The duo also charted high with their sixth single "Geronimo's Cadillac" from the fourth album In the Middle of Nowhere, and "Jet Airliner" from their fifth album Romantic Warriors. Due to the lackluster reception of their sixth album, Bohlen announced the end of the project during an interview, while Anders was in Los Angeles. This sparked further animosities between the two, who had had a tumultuous and quarreling relationship even when they were together. According to Bohlen, the main reason for breaking up the group was Anders' then-wife Nora, who refused to have her husband interviewed by female reporters, and constantly demanded huge changes to shows, videos or recordings, a fact that Anders later admitted in his biography. After a final phone call during which both men heavily insulted each other, they refused to speak with each other for over 10 years. During this era, Modern Talking were successful in Europe, Asia, South America, the Middle East and Iran. In the United Kingdom, they entered the top five only once, with the song "Brother Louie". In 1985, RCA signed Modern Talking for a US deal and released their first album in the US, but they remained almost unknown in North America, never appearing on the US charts. They released two albums each year between 1985 and 1987, while also promoting their singles on television all over Europe, eventually selling sixty-five million records within three years. Notably, Modern Talking were one of the first Western bloc bands allowed to sell their records in the Soviet Union. After four decades of Cold War censorship and import restrictions, Mikhail Gorbachev's Glasnost reforms in 1986 opened the Soviet sphere to Western bands, including Modern Talking at the height of their popularity. As a result, they still maintain a large fanbase in Eastern Europe.

Between 1987 and 1997

Immediately after the duo split in mid-1987, Bohlen formed his own project called Blue System and enjoyed several high chart positions, with tracks like "Sorry Little Sarah", "My Bed Is Too Big", "Under My Skin", "Love Suite", "Laila" and "Déjà vu". Meanwhile, Anders went solo, touring under the name of Modern Talking on several continents until the beginning of 1989, when he started to record some of his new pop-like material in LA and London, and also in his native country.
Anders recorded five solo albums in English (Different, Whispers, Down on Sunset, When Will I See You Again and Souled), and one of his albums was also recorded in Spanish (Barcos de Cristal). He was more successful in foreign countries than in his own, yet he also had several hits in Germany. Despite all the quarrels and disagreements that Bohlen and Anders got into with each other in the past, they began staying in touch again after Anders moved back to Koblenz, Germany in 1994.

1998–2003: Reunion

In the beginning of 1998, the duo reunited and had their first performance together in March on the German TV show Wetten, dass..?. They released a remixed version of their 1984 single "You're My Heart, You're My Soul", which features Eric Singleton on the rap vocals. Their first comeback album Back for Good, which included four new tracks, as well as all of the previous hits remixed with modern techniques, stayed at number one in Germany for five consecutive weeks and managed to top the charts in fifteen countries, eventually selling three million copies in Europe alone. The duo won the award for Best Selling German Band at the World Music Awards that year. The follow-up album Alone also went straight to number one in Germany and was very successful elsewhere; the album managed to sell over one million units in Europe alone. Bohlen and Anders followed the 1980s Modern Talking pattern as they chose to release two singles from each album, as in the past. A series of singles and four more albums followed, such as Year of the Dragon, America, Victory and Universe. After reaching the point where they had already collected over 400 gold and platinum awards worldwide, Bohlen and Anders decided to disband again in 2003, just before the release of another best-of compilation. The second break-up seems to have been triggered mainly by the negative portrayal of Anders written by Bohlen in his autobiography published on 4 October 2003. In the course of all the disputes which eventually drove the duo to its break-up, Bohlen was involved in Deutschland sucht den Superstar (DSDS), the German edition of the UK's Pop Idol. The compilation album was released in 2003, entitled The Final Album, and contained all of the duo's singles. Thomas Anders started another solo career immediately after the end of Modern Talking, while Bohlen had by now begun devoting most of his time to new talents, especially those he discovered on DSDS.
null
false
null
What is pleiotropy?
Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits. Such a gene that exhibits multiple phenotypic expression is called a pleiotropic gene. Mutation in a pleiotropic gene may have an effect on several traits simultaneously, due to the gene coding for a product used by a myriad of cells or different targets that have the same signaling function. Pleiotropy can arise from several distinct but potentially overlapping mechanisms, such as gene pleiotropy, developmental pleiotropy, and selectional pleiotropy. Gene pleiotropy occurs when a gene product interacts with multiple other proteins or catalyzes multiple reactions. Developmental pleiotropy occurs when mutations have multiple effects on the resulting phenotype. Selectional pleiotropy occurs when the resulting phenotype has many effects on fitness (depending on factors such as age and gender). An example of pleiotropy is phenylketonuria, an inherited disorder that affects the level of phenylalanine, an amino acid that can be obtained from food, in the human body. Phenylketonuria causes this amino acid to increase in amount in the body, which can be very dangerous. The disease is caused by a defect in a single gene on chromosome 12 that codes for enzyme phenylalanine hydroxylase, that affects multiple systems, such as the nervous and integumentary system. Pleiotropic gene action can limit the rate of multivariate evolution when natural selection, sexual selection or artificial selection on one trait favors one allele, while selection on other traits favors a different allele. Some gene evolution is harmful to an organism. Genetic correlations and responses to selection most often exemplify pleiotropy.
null
false
null
Group the following into manmade and non-manmade (naturally occurring) items: computer, salt, fence, tree stump, hillock, lightbulb, terracotta planter, snow, wolves, tablecloth, periodicals, museum
Manmade: computer, fence, lightbulb, terracotta planter, tablecloth, periodicals, museum
Non-manmade: salt, hillock, tree stump, snow, wolves
null
false
null
What are 5 ways to eat apples?
1. Raw - you can eat apples just as they are (but usually after washing)
2. Sliced to share
3. Peeled and sliced to be baked (apple crisp anyone?)
4. Mashed into apple sauce
5. Candied with a sweet glaze
null
false
null
Is beauty objective or subjective?
The opinions on this question vary from person to person. Statistically, a pattern of preference is easily visible in what people would describe as beautiful. An evident example of this phenomenon is the beauty standard for women and men. Society seems to agree on what features, both physical and internal, can deem a person beautiful. The standard of beauty is also projected beyond humans, onto animal breeds, for example, in cat shows. A link between beauty and how closely it resembles perfection appears to exist. For example, a rose with many petals and no tears or scratches is generally seen as more beautiful than one with fewer, visibly ruined petals. However, the question of perfection also raises a problem, as it is difficult and arguably impossible to define perfection. Beauty does not necessarily mean perfection. For example, large eyes are statistically considered to be more attractive; however, in terms of their biological purpose they possess no superiority to small eyes. Therefore, both beauty and perfection depend on the perspective or purpose with which you observe an object. Furthermore, the clear fluctuations through centuries and cultures in what is considered to be beautiful should also be taken into account. Fashion trends and art styles, for instance, are constantly changing. Additionally, the diversity in opinions suggests there is nothing that is considered beautiful by everyone, so even if the majority deem a particular object or thing beautiful, its beauty remains subjective because it remains based on an ever-changing preference rather than objective truth.
null
false
null
The race was held over the "international distance" and consisted of 1,500 metres (0.93 mi) swimming, 40 kilometres (25 mi) road bicycling, and 10 kilometres (6.2 mi) road running. The winner Reinaldo Colucci of Brazil qualified to compete in the triathlon competitions at the 2012 Summer Olympics in London, Great Britain.
What did the race consist of?
The race was held over the "international distance" and consisted of 1,500 metres (0.93 mi) swimming, 40 kilometres (25 mi) road bicycling, and 10 kilometres (6.2 mi) road running.
null
false
242
Automatic Text Summarization deals with the task of condensing documents into a summary whose quality is similar to a human-generated summary. It is mostly divided into two distinct domains, i.e., Abstractive Summarization and Extractive Summarization. Abstractive summarization (DeJong et al., 1978) involves models that deduce the crux of the document. It then presents a summary consisting of words and phrases that were not there in the actual document, sometimes even paraphrasing BIBREF1. A state-of-the-art method proposed by Wenyuan Zeng BIBREF2 produces such summaries with length restricted to 75. There have been many recent developments that produce optimal results, but the field is still in a developing phase. It relies heavily on natural language processing techniques, which are still evolving to match human standards. These shortcomings make abstractive summarization highly domain-selective. As a result, their application is skewed to the areas where NLP techniques have been superlative. Extractive Summarization, on the other hand, uses different methods to identify the most informative/dominant sentences through the text, and then presents the results, ranking them accordingly. In this paper, we have proposed two novel stand-alone summarization methods. The first method is based on the GloVe model BIBREF3, and the other is based on Facebook's InferSent BIBREF4. We have also discussed how we can effectively subdue the shortcomings of one model by using it in coalition with models which capture the view that the other only faintly held. Additional experiments on a large Natural Language Inference (NLI) task illustrate that our method can be easily applied to more NLP tasks with only a minor adjustment.
Can their method be applied to more NLP tasks?
Yes, only a minor adjustment is needed.
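The exact scoring used by the paper is not given in this excerpt; a generic sketch of embedding-based extractive ranking (scoring sentences by cosine similarity to the document centroid, over e.g. averaged GloVe vectors or InferSent encodings) looks like this:

```python
import numpy as np

def rank_sentences(sentence_vecs):
    """Rank sentences by cosine similarity to the document centroid.
    sentence_vecs: (n_sentences, dim) array of sentence embeddings."""
    centroid = sentence_vecs.mean(axis=0)
    norms = np.linalg.norm(sentence_vecs, axis=1) * np.linalg.norm(centroid)
    scores = sentence_vecs @ centroid / np.maximum(norms, 1e-12)
    return np.argsort(-scores)  # sentence indices, most central first

vecs = np.random.default_rng(1).normal(size=(5, 300))  # dummy embeddings
print(rank_sentences(vecs))
```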
null
false
142
We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system. It does not use any states of a trained MT model whose outputs it corrects and therefore can in principle be trained to correct translations from any black-box MT system. The DocRepair model requires only monolingual document-level data in the target language. It is a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. Consistent groups come from monolingual document-level data. To obtain inconsistent groups, each sentence in a group is replaced with its round-trip translation produced in isolation from context. More formally, forming a training minibatch for the DocRepair model involves the following steps (see also Figure FIGREF9):
- sample several groups of sentences from the monolingual data;
- for each sentence in a group, (i) translate it using a target-to-source MT model, (ii) sample a translation of this back-translated sentence in the source language using a source-to-target MT model;
- using these round-trip translations of isolated sentences, form an inconsistent version of the initial groups;
- use inconsistent groups as input for the DocRepair model, consistent ones as output.
At test time, the process of getting document-level translations is two-step (Figure FIGREF10):
- produce translations of isolated sentences using a context-agnostic MT model;
- apply the DocRepair model to a sequence of context-agnostic translations to correct inconsistencies between translations.
In the scope of the current work, the DocRepair model is the standard sequence-to-sequence Transformer. Sentences in a group are concatenated using a reserved token-separator between sentences. The Transformer is trained to correct these long inconsistent pseudo-sentences into consistent ones. The token-separator is then removed from corrected translations. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system.
What model do we propose to correct inconsistencies between sentence-level translations in context-agnostic machine translation systems?
Monolingual DocRepair model.
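A compact sketch of the training-data generation the evidence describes; `src2tgt`/`tgt2src` stand for black-box sentence-level MT systems, and `<SEP>` is a stand-in name for the reserved separator token, not the paper's exact symbol:

```python
def make_docrepair_batch(doc_groups, src2tgt, tgt2src):
    """Build (inconsistent -> consistent) training pairs for DocRepair.
    doc_groups: groups of consecutive target-language sentences;
    src2tgt / tgt2src: callables wrapping sentence-level MT systems."""
    batch = []
    for group in doc_groups:
        # Round-trip each sentence in isolation to strip contextual consistency.
        noisy = [src2tgt(tgt2src(sent)) for sent in group]
        source = " <SEP> ".join(noisy)   # inconsistent pseudo-sentence (model input)
        target = " <SEP> ".join(group)   # consistent original (model output)
        batch.append((source, target))
    return batch
```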
2004.04228
true
null
We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details. We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18.
Do they use crowdsourcing to collect human judgements?
Yes.
null
false
null
Which is a species of fish? Walu or Whitehead
Walu
null
false
null
Infant Annihilator are an English deathcore band formed in Hull, East Riding of Yorkshire in 2012 by drummer Aaron Kitcher and guitarist Eddie Pickard. The band are known for their technical, eclectic and extreme musical style; parodistic and satirically graphic lyrical content and shock humour; and music videos that feature ribald themes. Their debut album The Palpable Leprosy of Pollution, which features the American vocalist Dan Watson, was released in late 2012. After replacing their vocalist with Massachusetts-native Dickie Allen, their second album The Elysian Grandeval Galèriarch was recorded and mixed by Jesse Kirkbride at his home studio Kirkbride Recordings and was released in 2016. Their third album, The Battle of Yaldabaoth, was released on 11 September 2019. Infant Annihilator were described by the Hysteria Magazine as an internet band and even though they have stated that touring is a possibility, they have performed only as a studio project so far.
When was The Palpable Leprosy of Pollution released?
The album The Palpable Leprosy of Pollution was released in late 2012 featuring the American vocalist Dan Watson.
null
false
206
Keyphrases are single or multi-word lexical units that best summarise a document BIBREF0. As such, they are of great importance for indexing, categorising and browsing digital libraries BIBREF1. Yet, very few documents have keyphrases assigned, thus raising the need for automatic keyphrase generation systems. This task falls under the task of automatic keyphrase extraction which can also be the subtask of finding keyphrases that only appear in the input document. Generating keyphrases can be seen as a particular instantiation of text summarization, where the goal is not to produce a well-formed piece of text, but a coherent set of phrases that convey the most salient information. Those phrases may or may not appear in the document, the latter requiring some form of abstraction to be generated. State-of-the-art systems for this task rely on recurrent neural networks BIBREF2, BIBREF3, BIBREF4, and hence require large amounts of annotated training data to achieve good performance. As gold annotated data is expensive and difficult to obtain BIBREF5, previous works focused on readily available scientific abstracts and used author-assigned keyphrases as a proxy for expert annotations. However, this poses two major issues: 1) neural models for keyphrase generation do not generalize well across domains, thus limiting their use in practice; 2) author-assigned keyphrases exhibit strong consistency issues that negatively impacts the model's performance. There is therefore a great need for annotated data from different sources, that is both sufficiently large to support the training of neural-based models and that comprises gold-standard labels provided by experts. In this study, we address this need by providing KPTimes, a dataset made of 279 923 news articles that comes with editor-assigned keyphrases. Online news are particularly relevant to keyphrase generation since they are a natural fit for faceted navigation BIBREF6 or topic detection and tracking BIBREF7. Also, and not less importantly, they are available in large quantities and are sometimes accompanied by metadata containing human-assigned keyphrases initially intended for search engines. Here, we divert these annotations from their primary purpose, and use them as gold-standard labels to automatically build our dataset. More precisely, we collect data by crawling selected news websites and use heuristics to draw texts paired with gold keyphrases. We then explore the resulting dataset to better understand how editors tag documents, and how these expert annotations differ from author-assigned keyphrases found in scholarly documents. Finally, we analyse the performance of state-of-the-art keyphrase generation models and investigate their transferability to the news domain and the impact of domain shift. More precisely, we collect data by crawling selected news websites and use heuristics to draw texts paired with gold keyphrases.
What do the authors use to draw texts paired with gold keyphrases?
Heuristics.
null
false
null
Provide a list of ten countries that don't have any ocean borders, in alphabetical order.
The following countries are landlocked and have no coastal borders: Afghanistan, Austria, Bhutan, Bolivia, Hungary, Laos, Nepal, Serbia, Switzerland and Zimbabwe.
1912.03234
false
null
The proposed models use binary classifiers to perform point-wise ranking, and therefore require a labelled dataset. To generate it, we explore two implicit user-feedback labelling strategies: five-minute reuse and one-day return. Online A/B testing is used to determine if these labelling strategies are suited to optimize the desired user-satisfaction metrics, and offline data is used to evaluate and compare the system's performance. To generate it, we explore two implicit user-feedback labelling strategies: five-minute reuse and one-day return.
What feedback labels are used?
Five-minute reuse and one-day return.
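A minimal sketch of how such implicit labels could be derived from interaction timestamps; the exact window semantics here are an assumption, not the paper's definition:

```python
from datetime import datetime, timedelta

def implicit_labels(user_events, t0):
    """Label one interaction at time t0 from the same user's later events.
    Returns (five_minute_reuse, one_day_return) booleans."""
    later = [t for t in user_events if t > t0]
    reuse = any(t - t0 <= timedelta(minutes=5) for t in later)
    day_return = any(t - t0 <= timedelta(days=1) for t in later)
    return reuse, day_return

events = [datetime(2020, 1, 1, 12, 0), datetime(2020, 1, 1, 12, 3)]
print(implicit_labels(events, events[0]))  # (True, True)
```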
null
false
63
In this article, we describe some discursive segmentation methods as well as a preliminary evaluation of the segmentation quality. Although our experiments were carried out for documents in French, we have developed three discursive segmentation models based solely on resources simultaneously available in several languages: marker lists and statistical POS labeling. We have also carried out automatic evaluations of these systems against the Annodis corpus, which is a manually annotated reference. The results obtained are very encouraging. In this article, we describe some discursive segmentation methods as well as an evaluation of the segmentation quality.
What is the research field of their paper?
Discursive segmentation.
null
false
24
Here we discuss the case where the training data are translated. We consider that each result is affected by at least three factors: (1) training corpus, (2) data size, (3) whether the source corpus is translated into the target language. To study the effect of data size, we conducted an extra experiment where we down-sampled the English data to the same size as the Chinese corpus, and used the down-sampled corpus for training. We then carried out a one-way ANOVA test and found that the significance of the three factors is ranked as follows: (1) > (2) >> (3). The analysis supports that the characteristics of the training data are more important than whether it is translated into the target language. Therefore, although translation degrades performance, whether the corpus is translated into the target language is not critical. Here we discuss the case that the training data are translated. We consider each result is affected by at least three factors: (1) training corpus, (2) data size, (3) whether the source corpus is translated into the target language.
What factors will influence the results in the case that the training data are translated?
(1) training corpus, (2) data size, (3) whether the source corpus is translated into the target language.
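For reference, a one-way ANOVA of this kind can be run with SciPy; the accuracy numbers below are purely illustrative, not the paper's data:

```python
from scipy import stats

# Scores grouped by one factor, e.g. training corpus (illustrative numbers):
corpus_a = [0.71, 0.69, 0.73, 0.70]
corpus_b = [0.64, 0.66, 0.63, 0.65]
corpus_c = [0.58, 0.60, 0.57, 0.59]

f_stat, p_value = stats.f_oneway(corpus_a, corpus_b, corpus_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p -> the factor matters
```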
null
false
null
Give me a list of 5 cool starter projects a student new to computer programming could have fun making.
Here are some great starter projects for those learning to code:
1. Conway's Game of Life
2. Rock Paper Scissors
3. Connect 4
4. Choose Your Own Adventure style text-based game
5. Blackjack Simulator
null
false
null
Current industry trends and how to capitalize on them?
1. Increase focus on sustainability
2. Cutting unnecessary costs
3. Emphasis on digital transformation
null
false
null
Is this a list of universities or companies: Alphabet, Microsoft, Dell, John Deere, and JP Morgan Chase?
That is a list of companies.
null
false
null
What are the 3 largest oil companies?
The 3 largest oil companies are Saudi Aramco, ExxonMobil, and Chevron.
null
false
null
Bath & Body Works was founded in 1990 in New Albany, Ohio. The company's first store opened in a Cambridge, Massachusetts mall in September 1990. In 1997, a secondary brand called Bath & Body Works at Home was launched. Two years later the company launched the White Barn Candle Company, a division specializing in the sale of scented candles.
Extract some details about the company 'Bath & Body Works' from the given passage
1. Bath & Body Works was founded in 1990 in New Albany, Ohio.
2. In September of the same year (1990), the company's first store opened in a Cambridge, Massachusetts mall.
3. In 1997, a secondary brand called Bath & Body Works at Home was launched.
4. Two years later, the company launched the White Barn Candle Company, a division specializing in the sale of scented candles.
null
false
null
Where is a good place to go on holiday in Cornwall, England?
Without a doubt, the best place in Cornwall is Rock. Situated in close proximity to both Daymer Bay and Polzeath beach, you have easy access to great beaches and breathtaking cliff-top walks. In addition, the Camel estuary is close by, which is great for dinghy sailing, and there is a frequent water taxi that will take you over the estuary from Rock to Padstow (which is full of good pubs and restaurants). If one is partial to golf, St Enodoc golf course can be found in Rock, which boasts one of the most beautiful links courses in the South West. On the links one will find St Enodoc's Church, an enigmatic church sunken into the dunes and also the burial place of Sir John Betjeman CBE, who was Poet Laureate from 1972 until his death on 19 May 1984.
null
false
null
In economics, the Gini coefficient, also known as the Gini index or Gini ratio, is a measure of statistical dispersion intended to represent the income inequality or the wealth inequality or the consumption inequality within a nation or a social group. It was developed by statistician and sociologist Corrado Gini. The Gini coefficient measures the inequality among values of a frequency distribution, such as levels of income. A Gini coefficient of 0 reflects perfect equality, where all income or wealth values are the same, while a Gini coefficient of 1 (or 100%) reflects maximal inequality among values. For example, if everyone has the same income, the Gini coefficient will be 0. In contrast, a Gini coefficient of 1 indicates that within a group of people, a single individual has all the income or consumption, while all others have none. The Gini coefficient was proposed by Corrado Gini as a measure of inequality of income or wealth. For OECD countries, in the late 20th century, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 and 0.49, with Slovenia being the lowest and Mexico the highest. African countries had the highest pre-tax Gini coefficients in 2008–2009, with South Africa having the world's highest, estimated to be 0.63 to 0.7, although this figure drops to 0.52 after social assistance is taken into account, and drops again to 0.47 after taxation. The global income Gini coefficient in 2005 has been estimated to be between 0.61 and 0.68 by various sources.
What does the Gini coefficient measure?
The Gini coefficient measures the inequality among values of a frequency distribution, such as levels of income.
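A small sketch computing the Gini coefficient from a list of incomes, using the standard sorted-rank formula; the income vectors are illustrative:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via ranked incomes: 0 = perfect equality, ~1 = maximal inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)  # 1-based ranks of the sorted incomes
    return (2 * np.sum(i * x) / (n * np.sum(x))) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0  -> everyone has the same income
print(gini([0, 0, 0, 100]))    # 0.75 -> one person holds everything (max is (n-1)/n for n=4)
```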
null
false
null
Create a beginner-intermediate push-pull-legs workout routine that I can do 6 times a week.
Day 1 (push):
5x5 Bench Press
3x8 Incline Bench Press
3x8 Overhead Press
3x8-12 Triceps Pushdown
3x8-12 Triceps Extension superset with 3x8-12 Lateral Raise

Day 2 (pull):
1x5 Deadlift
3x8 Lat Pulldown
3x8 Seated Row
3x12-15 Face Pull
4x8-12 Incline Curl
4x8-12 Hammer Curl

Day 3 (legs):
2x5 Squat
3x8 Romanian Deadlift
3x8 Leg Press
3x8-12 Seated Leg Curl
3x8-12 Standing Calf Raise

Day 4 (push):
5x5 Overhead Press
3x8 Incline Bench Press
3x8 Bench Press
3x8-12 Triceps Pushdown
3x8-12 Triceps Extension superset with 3x8-12 Lateral Raise

Day 5 (pull):
5x5 Bent Over Row
3x8 Lat Pulldown
3x8 Seated Row
3x12-15 Face Pull
4x8-12 Incline Curl
4x8-12 Hammer Curl

Day 6 (legs):
2x5 Squat
3x8 Romanian Deadlift
3x8 Leg Press
3x8-12 Seated Leg Curl
3x8-12 Standing Calf Raise